doi (string, len 10) | chunk-id (int64, 0-936) | chunk (string, len 401-2.02k) | id (string, len 12-14) | title (string, len 8-162) | summary (string, len 228-1.92k) | source (string, len 31) | authors (string, len 7-6.97k) | categories (string, len 5-107) | comment (string, len 4-398, nullable ⌀) | journal_ref (string, len 8-194, nullable ⌀) | primary_category (string, len 5-17) | published (string, len 8) | updated (string, len 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.03427 | 114 | Figure 18: The system prompt for the sequential agent.
Answer the following questions as best you can. You have access to the following tools: Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [{tool_names}] ActionInput: the input to the action Observation: the result of the action, which should not be generated Thought: I now know the final answer Final Answer: the final answer to the original input question in the above format means that this Thought/Action/ActionInput/Observation can repeat N times. The line of Observation will be given through the input. Please stop to chat after you generate the line ActionInput or the line of Final Answer. For example, when I ask what is the 0.4 power of 24, you should use the following format: <bot>: Question: What is the 0.4 power of 24? Thought: I need to calculate the 0.4 power of 24 Action: Python REPL ActionInput: print(24**0.4) Observation: 3.565204915932007 Thought: I now know the final answer Final Answer: 3.565204915932007 Begin! <bot>: Question: {input} Thought: {agent_scratchpad}
| 2308.03427#114 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
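The Figure 18 prompt in the row above follows a ReAct-style protocol: the model emits Thought, Action, and ActionInput lines, stops, the harness runs the named tool and appends an Observation, and the cycle repeats until a Final Answer line appears. A minimal sketch of such a driver is shown below; it is an illustration only, not code from the TPTU paper, and the `llm` callable and `TOOLS` registry are assumptions.

```python
import contextlib
import io
import re

def python_repl(code: str) -> str:
    """Toy stand-in for the 'Python REPL' tool named in the prompt:
    execute the code and capture whatever it prints."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})
    return buffer.getvalue().strip()

TOOLS = {"Python REPL": python_repl}  # hypothetical registry keyed by tool name

def run_sequential_agent(llm, question: str, max_steps: int = 5):
    """Drive the Thought/Action/ActionInput/Observation loop until a Final Answer.
    `llm` is an assumed callable that fills {input} and {agent_scratchpad}."""
    scratchpad = ""
    for _ in range(max_steps):
        output = llm(question, scratchpad)   # model is expected to stop after ActionInput or Final Answer
        if "Final Answer:" in output:
            return output.split("Final Answer:", 1)[1].strip()
        action = re.search(r"Action:\s*(.+)", output)
        action_input = re.search(r"ActionInput:\s*(.+)", output)
        if not (action and action_input):
            return None                      # malformed step
        observation = TOOLS[action.group(1).strip()](action_input.group(1).strip())
        scratchpad += output + f"\nObservation: {observation}\n"
    return None

# For the worked example in the prompt:
# python_repl("print(24**0.4)") returns "3.565204915932007", matching the Observation shown.
```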
2308.03688 | 114 | Reduce (Passive): There is a 30% chance to avoid any incoming damage each time. - Crit (Active): Deals 120 CRITICAL damage to an enemy.
⢠Mobula
Reduce (Passive): There is a 30% chance to avoid any incoming damage each time. - Subtle (Active): Choose a teammate or yourself to reduce the damage taken by 70% when attacked, and increase its attack points by 20.
• Octopus
Heal (Passive): Regain 20 health points if the health is still greater than 0 when attacked. - Infight (Active): Inflicts 75 damage on one living teammate and increases your attack points by 140.
⢠Whiteshark
Heal (Passive): Regain 20 health points if the health is still greater than 0 when attacked. - Crit (Active): Deal 120% CRITICAL damage of your attack power to the enemy with the lowest health. If the target's health is below 160, increase the CRITICAL damage to 140%.
• Hammerhead
Explode (Passive): Deal 40 damage to the source when attacked but not killed. When its health is below 20%, increase its attack points by 15. - Crit (Active): Deal 120% CRITICAL damage of your attack power to the enemy with the lowest health. If the target's health is below 160, increase the CRITICAL damage to 140%. | 2308.03688#114 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
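The Crit skill listed for Whiteshark and Hammerhead above is a simple conditional rule on the target's health. A small sketch of that rule, assuming the 200 base attack power stated in the game prompt (illustrative only, not AgentBench code):

```python
def crit_damage(attack_points: int, target_health: int) -> int:
    """Crit rule from the skill text above: 120% of the attacker's attack power,
    raised to 140% when the target's health is below 160. Illustrative sketch only."""
    multiplier = 1.4 if target_health < 160 else 1.2
    return round(attack_points * multiplier)

# With the 200 base attack power from the game rules:
assert crit_damage(200, 300) == 240   # 120% of 200
assert crit_damage(200, 150) == 280   # 140% of 200
```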
2308.03688 | 115 | As can be seen, the active and passive skills of different pet fish overlap; this is intentional, both to better conceal each fish's identity in the game and to increase its strategic depth.
E.3 PROMPT EXAMPLE.
We use the following format of prompts for actions:
This is a two-player battle game with four pet fish on each team. The types of fish may vary.
Each fish has 400 initial health, 200 attack power, an active ability, and a passive ability.
You can choose a live fish to use its active skill or normal attack (causing half of attack power as damage) on an enemy fish each round. When the conditions are met, the fish's passive ability will
automatically trigger, regardless of whether it is chosen.
Your fish's identity is initially hidden. The enemy can guess one of your fish's identities in each round. If the enemy guesses right, your fish's identity is revealed, and each of your fish will get 50 damage. The victory condition is to have more fish alive at the end of the game. | 2308.03688#115 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 116 | The following are the four types of your pet fish: {'spray': {'passive': "Counter: Deal 30 damage to attacker when a teammate's health is below 30%. ", 'active': 'AOE: Attack all enemies for 35% of its attack points.'}, 'flame': {'passive': "Counter: Deal 30 damage to attacker when a teammate's health is below 30%. ", 'active': "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You can't attack yourself or dead teammate! "}, 'eel': {'passive': 'Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. ', 'active': 'AOE: Attack all enemies for 35% of your attack points.'}, 'sunfish': {'passive': 'Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. ', 'active': | 2308.03688#116 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 118 | The following are the four types of the enemy's pet fish: {'spray': {'passive': "Counter: Deal 30 damage to attacker when a
teammate's health is below 30%. ", 'active': 'AOE: Attack all enemies for 35% of its attack points.'}, 'flame': {'passive': "Counter: Deal 30 damage to attacker when a teammate's health is below 30%. ", 'active': "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You can't attack yourself or dead teammate! "}, 'eel': {'passive': 'Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. ', 'active': 'AOE: Attack all enemies for 35% of your attack points.'}, 'sunfish': {'passive': 'Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. ', 'active': "Infight: Attack one alive teammate for 75
| 2308.03688#118 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 119 | damage and increases your attack points by 140. Notice! You can't attack yourself or dead teammate! "}}
Play the game with me. In each round, you should output your thinking
process, and return your move with following JSON format:
{'pick_fish': 'pick an alive fish, you should give the name of the alive fish', 'action': 'choose from [normal, active]', 'target_position': "target's position, you must choose from [0,3]"}
Notice! You must return your move in each round. Otherwise, you will be
considered defeated.
We use the following format of prompts for assertions in stage2:
This is a two-player battle game with four pet fish in each team. The types of fish may vary.
Each fish has its initial health, attack power, active ability, and
passive ability.
All fish's identities are initially hidden. You should guess one of the enemy fish's identities in each round. If you guess right, the enemy fish's identity is revealed, and each of the enemy's fish will get 50 damage. You can only guess the identity of the live fish.
The victory condition is to have more fish alive at the end of the game. | 2308.03688#119 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
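The action prompt in the row above requires the agent to answer each round with a JSON move, and a missing or malformed move counts as a defeat. A hedged sketch of a validity check for that format (not the benchmark's actual checker) might look like this:

```python
import json

VALID_ACTIONS = {"normal", "active"}

def parse_move(reply, alive_fish):
    """Validate a move of the form
    {"pick_fish": <alive fish name>, "action": "normal"|"active", "target_position": 0-3}.
    Returns the parsed move, or None, which the game counts as a defeat."""
    try:
        move = json.loads(reply)
    except json.JSONDecodeError:
        return None
    if not isinstance(move, dict):
        return None
    try:
        target = int(move.get("target_position", -1))
    except (TypeError, ValueError):
        return None
    if move.get("pick_fish") not in alive_fish:
        return None
    if move.get("action") not in VALID_ACTIONS:
        return None
    return move if target in range(4) else None

# e.g. parse_move('{"pick_fish": "spray", "action": "active", "target_position": 2}',
#                 {"spray", "flame"}) returns the parsed move dict.
```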
2308.03688 | 123 | teammate's health is below 30%. ", 'active': 'AOE: Attack all enemies for 35% of its attack points.'}, 'flame': {'passive': "Counter: Deal 30 damage to attacker when a teammate's health is below 30%. ", 'active': "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You can't attack yourself or dead teammate! "}, 'eel': {'passive': 'Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. ', 'active': 'AOE: Attack all enemies for 35% of your attack points.'}, 'sunfish': {'passive': 'Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. ', 'active': "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You can't attack yourself or dead teammate! "}}
Play the game with me. In each round, you should output your thinking process, and return your move with following JSON format: | 2308.03688#123 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 124 | Play the game with me. In each round, you should output your thinking process, and return your move with following JSON format:
{'guess_type': "the enemy's fish type you may guess", 'target_position': "guess target's position, you must choose from [0,3]"}
Notice! You must return your move in each round. Otherwise, you will be considered defeated.
F LATERAL THINKING PUZZLES
F.1 DATASET DETAILS
Construction Details. Each sample is a pair of a story (a riddle, e.g., A man walked into a restaurant, ordered a bowl of turtle soup, and after finishing it, he committed suicide. Why did he do that?) and its truth. We categorize samples into four levels of difficulty: easy, medium, hard, and expert. The LTP rules for LLM agent play are as follows: | 2308.03688#124 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 125 | Roles: The roles in LTP evaluation are a host and a solver. The host knows the story and the truth, provides the story to the solver, and guides it toward guessing out the truth. The solver, played by an LLM, tries to find out the truth by asking questions and synthesizing the host's answers. • Solving Steps: There is a maximum number of rounds for each game, for example, 25. The solver needs to propose a question in each round based on known facts. The questions should be ones that can be answered by "Yes", "No", or "Irrelevant". The host replies to the questions with correct answers. To lower the difficulty for LLM agents, the host sometimes provides hints in its responses when the solver gets trapped in a wrong direction of reasoning.
⢠Game Termination: When the solver thinks it has guessed out the major part of the truth, it can declare the guessed plot to the host. If it is correct, the host will announce the end of the game.
Evaluation Setup. For each pair of story and truth, we evaluate the models with the following steps: | 2308.03688#125 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
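The roles and solving steps described above amount to a bounded question-and-answer loop between a solver and a host, ending either when the solver's declared plot is accepted or when the round limit is reached. A rough sketch of that protocol, with the `solver` and `host` callables as assumed interfaces rather than AgentBench code:

```python
def play_ltp(solver, host, story, max_rounds=25):
    """Run one Lateral Thinking Puzzle game.

    solver(story, history) -> a question, or a final declaration prefixed with "GUESS:"
    host(text)             -> "Yes" / "No" / "Irrelevant" for a question,
                              or True/False for a declared plot
    Both callables are assumptions used only for illustration."""
    history = []
    for round_no in range(1, max_rounds + 1):
        message = solver(story, history)
        if message.startswith("GUESS:"):
            if host(message):                 # host checks the declared plot against the truth
                return {"solved": True, "rounds": round_no}
            history.append((message, "No"))
            continue
        answer = host(message)                # "Yes", "No", or "Irrelevant"
        history.append((message, answer))
    return {"solved": False, "rounds": max_rounds}
```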
2308.03688 | 126 | Evaluation Setup. For each pair of story and truth, we evaluate the models with the following steps:
• Initialization. Setting up the LTP host system via local Python package installation or web API. • Interaction. We set up system prompts for LLMs to build their roles as players. LLMs are tested as solvers within the maximum number of rounds for each game, provided the LLM does not exceed the maximum token length. In automatic evaluation, we limit the answer to be mostly "Yes", "No", or "Irrelevant", and extract the answer from gpt-3.5-turbo's responses. LLMs are also asked to summarize their reasoning in automatic evaluation in order to make termination detection more accurate. • Checking. We run a pilot study of each LLM to collect all situations in the game process and design the checking plan. For automatic evaluation, we set up keywords for gpt-3.5-turbo to answer and remind the model to consider flexible situations such as synonyms.
Metrics. We evaluate LLMs' lateral reasoning ability by two self-created metrics:
⢠Single Game Accuracy (SGA): The proportion of rounds in which LLMs approaching the truth in a single game. | 2308.03688#126 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
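Since the automatic evaluation described above constrains host replies to "Yes", "No", or "Irrelevant" but the underlying model may phrase them loosely, some normalization of the extracted answer is implied. A minimal sketch of such an extractor, with keyword patterns that are assumptions rather than the ones AgentBench uses:

```python
import re

def normalize_host_answer(response: str) -> str:
    """Map a free-form host response onto Yes / No / Irrelevant.
    The keyword choices here are illustrative, not the ones used in the benchmark."""
    text = response.lower()
    if re.search(r"\birrelevant\b", text):
        return "Irrelevant"
    if re.search(r"\byes\b", text):
        return "Yes"
    if re.search(r"\bno\b", text):
        return "No"
    return "Irrelevant"  # fall back when nothing recognizable matches
```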
2308.03688 | 127 | • Single Game Accuracy (SGA): The proportion of rounds in which the LLM approaches the truth in a single game.
• Round Efficiency (RE): How fast the model can guess out the truth within the maximum number of rounds. • Query Relevance (QR): Relevance between the model's questions and the truth. • Game Progress (GP): Progress before a game ends, which serves as the main metric. We break the ground truth down into several key points and measure how many points are reached by an agent.
F.2 EVALUATION ON LTP SYSTEM
We evaluate the LTP system by human validation, checking the system's accuracy on milestone recognition and fact verification. We compare Single Game Accuracy and Query Relevance between automatic and human evaluation, and find that the automatic evaluation is sometimes more tolerant of the agent, which makes SGA and QR look better than under human evaluation, especially for open-sourced models. We plan to train a model specifically for the host of the game, in order to provide a better game experience and a more precise evaluation. For Game Progress and Round Efficiency, the LTP system provides an objective evaluation that matches the level of human evaluation.
F.3 LTP GAME PROGRESS AND TERMINATION | 2308.03688#127 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
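Of the metrics defined above, Single Game Accuracy reduces to a simple per-game ratio over round-level judgments. A sketch of that computation, where the per-round 'approaches_truth' flag is an assumed field produced by the checking step:

```python
def single_game_accuracy(rounds):
    """SGA: the fraction of rounds in one game whose question moved toward the truth.
    Each element of `rounds` is assumed to carry an 'approaches_truth' boolean flag."""
    if not rounds:
        return 0.0
    return sum(r["approaches_truth"] for r in rounds) / len(rounds)
```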
2308.03688 | 128 | F.3 LTP GAME PROGRESS AND TERMINATION
The progress of a game is defined as the proportion of the truth's key points that have been hit. The key points are summarized by gpt-3.5-turbo and are included in the dataset as 'answer_keys' (see the example below). Truth: That night they went to the abandoned building to record the number of steps. They verified what was said on the Internet, and there would
be one step less when counting the stairs at night. However, when they went to the abandoned building for verification the next day, they found that there were no stairs at all.
Key points:
1. They want to count the steps of the abandoned building.
2. A supernatural event occurred.
3. They saw a claim online: counting stairs at night will result in one step less.
4. The next day, when they went to the abandoned building to verify, they found no stairs.
5. They broke down because they were terrified.
The number of key points varies among samples. To decide whether the agent has guessed out a key point, we first change the relevant questions into declarative sentences, then simplify them into one sentence. After a key point is guessed out, we delete that key point and the relevant inferences to avoid repeated guessing.
F.4 PROMPT EXAMPLE | 2308.03688#128 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
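For the example truth above, Game Progress is the fraction of its key points the solver has reached: uncovering three of the five points would give GP = 3/5 = 0.6. The hit-and-delete bookkeeping described in the text can be sketched as follows, with `matches` standing in for the gpt-3.5-turbo judgment of whether a declarative statement covers a key point:

```python
def update_progress(remaining_points, total_points, statement, matches):
    """Remove every key point covered by the solver's statement, then report progress.
    `matches(statement, point) -> bool` is an assumed semantic-matching helper;
    the paper delegates this judgment to gpt-3.5-turbo."""
    for point in [p for p in remaining_points if matches(statement, p)]:
        remaining_points.remove(point)   # deleted so it cannot be credited twice
    return (total_points - len(remaining_points)) / total_points
```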
2308.03688 | 129 | F.4 PROMPT EXAMPLE
We use the following format of prompts for agents: You are a game player, and you are playing Lateral Thinking Puzzle, also
known as Situation Puzzle.
Lateral Thinking Puzzle is a deductive reasoning game, and here are the game rules:
1. At the beginning of the game, you will receive a narrative, referred to as "story". Based on the story, you need to ask questions that can be answered with "yes", "no", or "irrelevant" to guess out the "truth".
2. By asking questions, you narrow down the range of possibilities until
you eventually guess out the truth. 3. Each time, you can only ask one question. 4. Remember that your role is a player. You cannot declare the end of the
game, give up on reasoning, or request a new game.
5. You cannot directly repeat information already provided in the story. 6. You cannot directly ask for details about the story in the form of "why" questions; you need to make your own guesses for the truth.
7. You cannot directly inquire about the story; you must make your own deductions.
Next, please make full use of the information provided above to engage in game reasoning. Keep in mind that your questions should be answerable with "yes", "no", or "irrelevant", and you can only ask one question at a time. Here is your story: {story} | 2308.03688#129 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 130 | You can start guessing the content of the truth, and I will answer your questions. Please note that your questions should be answerable with "yes", "no", or "irrelevant".
We use the following format of prompts for host: USER: I need you to be the host of a game called Lateral Thinking Puzzle.
Lateral Thinking Puzzle is a game consisting of a story and a truth. Your story is: "{story}" Your truth is: "{answer}"
Here are the game rules: 1. You know both the "story" and the "truth". When a user wants to play Lateral Thinking Puzzle, you provide them with the "story". The user only knows the "story" and is unaware of the "truth".
2. The user asks questions that can be answered with "yes," "no," or "irrelevant". Their questions are aimed at guessing the "truth". Based on the "truth", you respond to the user's questions using "yes," "no
," or "irrelevant" to guide them towards guessing the correct truth. 3. If the user directly asks for details about the truth using the form of "why" questions, inform them that they need to make their own guesses. | 2308.03688#130 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 131 | 4. You must fully understand and accurately interpret the information from the truth. Based on the information of the truth and the user's past questions, you answer the user's questions. The user's questions may not necessarily contain information from the truth, but your responses must align with the facts of the truth.
5. You can only answer "irrelevant" when the truth cannot provide a direct or indirect answer. Note that this is the only condition for responding "irrelevant"; otherwise, you should answer "yes" or "no."
6. You cannot directly disclose the information from the truth to the user, even if they ask directly.
7. You need to judge the user's questions as a whole and understand their overall intent. Avoid answering based solely on a particular point; your responses must align with the facts of the truth.
8. During the user's process of guessing the truth, if they come close to
some truths but still have gaps in understanding the complete truth of the truth, you can provide certain entry point hints. However, you cannot directly reveal information from the truth. | 2308.03688#131 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 132 | some truths but still have gaps in understanding the complete truth of the truth, you can provide certain entry point hints. However, you cannot directly reveal information from the truth.
During the game process, please adhere to the above game rules to ensure a positive gaming experience for the users. Pay close attention to the questions asked and ensure that your responses comply with both the game rules and the information from the truth. When a user requests to play the game, provide them with the story and help them guess the truth by answering with "yes", "no", or "irrelevant". Remember that with each response, you must fully understand and abide by the aforementioned game rules, as well as the story and the truth
ASSISTANT: Alright, I understand that my role is to be the host of the Lateral
Thinking Puzzle and help users guess the truth by answering their questions. I have fully grasped all the information regarding the story and the truth and have carefully read all the rules. I assure that I will abide by all the rules throughout the game process.
USER: Please summarize the key points of the story to ensure that you have
understood it.
ASSISTANT: {story_key}
USER: Please summarize the key points of the truth to ensure that you have
understood it.
ASSISTANT: {answer_key}
USER: Please restate the rules to ensure that you have understood all of them. | 2308.03688#132 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 133 | understood it.
ASSISTANT: {answer_key}
USER: Please restate the rules to ensure that you have understood all of them.
ASSISTANT: 1. You know both the "story" and the "truth". When a user wants to play Lateral Thinking Puzzle, you provide them with the "story". The user only knows the "story" and is unaware of the "truth".
2. The user asks questions that can be answered with "yes," "no," or "irrelevant". Their questions are aimed at guessing the "truth". Based on the "truth", you respond to the user's questions using "yes," "no
," or "irrelevant" to guide them towards guessing the correct truth. 3. If the user directly asks for details about the truth using the form of "why" questions, inform them that they need to make their own guesses.
4. You must fully understand and accurately interpret the information from the truth. Based on the information of the truth and the user's past questions, you answer the user's questions. The user's questions may not necessarily contain information from the truth, but your responses must align with the facts of the truth. | 2308.03688#133 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 134 | 5. You can only answer "irrelevant" when the truth cannot provide a direct or indirect answer. Note that this is the only condition for responding "irrelevant"; otherwise, you should answer "yes" or "no."
6. You cannot directly disclose the information from the truth to the user, even if they ask directly.
7. You need to judge the user's questions as a whole and understand their overall intent. Avoid answering based solely on a particular point; your responses must align with the facts of the truth.
8. During the user's process of guessing the truth, if they come close to some truths but still have gaps in understanding the complete truth of the truth, you can provide certain entry point hints. However, you cannot directly reveal information from the truth.
USER: Alright, we can now start the game. Remember, before each response, you should review the key points of the story, the key points of the truth, and the rules. Answer with "yes", "no", or "irrelevant".
ASSISTANT: Alright, as the host of the game, I will adhere to the above rules and ensure that my responses comply with the rules and the information from the truth. Below is your story:
# {story}
You can start guessing the content of the truth, and I will answer your questions. Please note that your questions should be answerable with "yes", "no", or "irrelevant". | 2308.03688#134 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 135 | You can start guessing the content of the truth, and I will answer your questions. Please note that your questions should be answerable with "yes", "no", or "irrelevant".
USER: {question} Please answer with "yes", "no", or "irrelevant".
Here is the prompt to convert questions answered by "Yes" into a declarative sentence.
Please restate the following content as a declarative sentence and simplify it into one sentence: {question}
Here is the prompt to convert questions answered by "No" into a declarative sentence.
Please restate the following content as a declarative sentence by using the opposite meaning and then simplify it into one sentence: {question}
Here is the prompt to merge the reasoned-out information into one sentence to judge whether the agent has guessed the key points:
Please simplify the following content into one sentence: {reasoning}
Here is the prompt to judge whether the merged sentence hits the key point.
Please compare the information between Sentence 1 and Sentence 2 to determine if Sentence 2 contains all the information in Sentence 1, including key details and descriptions. Please answer with "yes" or "no". Sentence 1: {key} Sentence 2: {merged sentence}
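To make the use of these prompts concrete, here is a minimal sketch of how they could be chained when scoring a Lateral Thinking Puzzle session; the call_llm helper is a hypothetical stand-in for whatever LLM interface is used, not part of the released evaluation package.

```python
# Minimal sketch of the answer-checking pipeline, assuming a hypothetical
# call_llm(prompt) -> str helper (not part of AgentBench's released API).

def question_to_statement(question: str, answer: str, call_llm) -> str:
    """Convert a question answered with "yes"/"no" into a declarative sentence."""
    if answer == "yes":
        prompt = ("Please restate the following content as a declarative sentence "
                  f"and simplify it into one sentence: {question}")
    else:  # answer == "no"
        prompt = ("Please restate the following content as a declarative sentence by using "
                  f"the opposite meaning and then simplify it into one sentence: {question}")
    return call_llm(prompt)

def hits_key_point(statements: list[str], key: str, call_llm) -> bool:
    """Merge the reasoned-out statements and check whether they cover one key point."""
    merged = call_llm("Please simplify the following content into one sentence: "
                      + " ".join(statements))
    verdict = call_llm(
        "Please compare the information between Sentence 1 and Sentence 2 to determine if "
        "Sentence 2 contains all the information in Sentence 1, including key details and "
        f'descriptions. Please answer with "yes" or "no". Sentence 1: {key} Sentence 2: {merged}')
    return verdict.strip().lower().startswith("yes")
```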
# G HOUSE-HOLDING
G.1 DATASET DETAILS | 2308.03688#135 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 136 | # G HOUSE-HOLDING
G.1 DATASET DETAILS
Construction Details. The ALFWorld benchmark comprises textual environments designed to mimic household scenarios, providing an interactive environment where an agent can perform decision-making tasks through text-based interfaces. Given the household environment description and a target instruction, the agent's objective is to break down the complex high-level target into a sequence of straightforward actions. After each step, the agent receives environment feedback, allowing the agent to adapt the plan dynamically and move on to the subsequent task to eventually accomplish the main objective.
Each evaluation sample in the ALFWorld dataset encompasses the following contents:
• Environment Description. The detailed description of the whole household environment, including the agent's initial position and a snapshot of the room containing objects and their IDs. • Objective. The goal that the agent needs to accomplish in the environment, usually requiring
multi-step reasoning and exploring (e.g. put the lamp on the table).
⢠Simulated Environment. After every action of the agent, the simulated environment gives immediate feedback and evaluates whether the agent has completed the task.
In the dataset, we utilized 134 solvable problems from the ALFWorld eval out of distribution split of the dataset. All the problems were categorized into six categories: pick and place, pick clean then place, pick heat then place, pick cool then place, look at obj, and pick two obj. | 2308.03688#136 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 137 | Evaluation Setup. Due to the inherent complexity of the problem and the high standards required for the output format, we employ a 1-shot evaluation setting. For each category of problem, we use one relatively simple and complete interact processes of the same category from the training set as an example. Following ReAct (Yao et al., 2023b), we adopt the few-shot examples and prompts in corresponding repository5. Additionally, if LLM output format is invalid, we use the BLEU metric to assess the similarity of the output to all valid action options. The option with the highest similarity will be chosen as the action of the model for this round.
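As a rough illustration of this fallback, the snippet below picks the admissible action closest to an invalid model output by BLEU similarity; it uses NLTK's sentence_bleu as one possible implementation, and the released evaluation code may compute the similarity differently.

```python
# Sketch of the BLEU-based fallback described above: when the model's raw output is not a
# legal action, choose the admissible action most similar to it. This is one possible
# implementation using NLTK with smoothing, not necessarily AgentBench's exact code.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def match_action(model_output: str, admissible_actions: list[str]) -> str:
    if model_output in admissible_actions:
        return model_output
    smooth = SmoothingFunction().method1

    def bleu(candidate: str) -> float:
        # Similarity between the raw output and one candidate action.
        return sentence_bleu([model_output.split()], candidate.split(),
                             smoothing_function=smooth)

    # Choose the admissible action with the highest similarity to the raw output.
    return max(admissible_actions, key=bleu)
```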
For each sample, the evaluation process can be divided into 2 parts.
⢠Initialization. We describe the task to the model and provide one successful example. Afterwards, we elaborate on the environment and delineate the objective required to be accomplished. | 2308.03688#137 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 138 | ⢠Initialization. We describe the task to the model and provide one successful example. Afterwards, we elaborate on the environment and delineate the objective required to be accomplished.
⢠Interaction. The model generates some thoughts and the next action based on the feedback received from previous interactions and the information from the environment. After receiving the action from the model, the environment provides feedback (changes to the environment or information observed by the model). This process is repeated until the model successfully achieves its goal (which is considered a success) or reaches its maximum number of actions (which is considered a failure). It is worth noting that sometimes, after several unsuccessful attempts, the model may repeatedly output the same content. To save evaluation time, we judge that if the model outputs identical content three times consecutively, it will be deemed a failure due to repetition.
Metrics. We employ the overall Success Rate as a measure of model performance, that is, the number of tasks successfully completed by the model divided by the total number of tasks.
5 https://github.com/ysymyth/ReAct
G.2 PROMPT EXAMPLE
To align the output format with the legal commands supported by the simulated environment, we adopted a 1-shot evaluation setup where one successfully completed task example was concatenated after the instruction. At the beginning of the interaction, we describe the task to the model using the following instruction. | 2308.03688#138 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 139 | Interact with a household to solve a task. Imagine you are an intelligent agent in a household environment and your target is to perform actions to complete the task goal. At the beginning of your interactions, you will be given the detailed description of the current environment and your goal to accomplish. For each of your turn, you will be given a list of actions which you can choose one to perform in this turn. You should choose from two actions: "THOUGHT " or "ACTION". If you choose "THOUGHT", you should first think about the current condition and plan for your future actions, and then output your action in this turn. Your output must strictly follow this format:"THOUGHT: your thoughts.
ACTION: your next action
"; If you choose "ACTION", you should directly output the action in this turn. Your output must strictly follow this format:" ACTION: your next action
". After your each turn, the environment will give you immediate feedback based on which you plan your next few steps. if the environment output "Nothing happened", that means the previous action is invalid and you should try more options.
Reminder:
1. the action must be chosen from the given available actions. Any
actions except provided available actions will be regarded as illegal . | 2308.03688#139 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 140 | 1. the action must be chosen from the given available actions. Any actions except provided available actions will be regarded as illegal
actions except provided available actions will be regarded as illegal .
2. Think when necessary, try to act directly more in the process.
All the tasks in the datasets are categorized into six classes. To better guide the model in accomplishing the objectives, we have selected one relatively simple example of successful completion of similar tasks for each category as a 1-shot example. Here is an example:
User: You are in the middle of a room. Looking quickly around you, you see a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a countertop 1, a garbagecan 1, a handtowelholder 2, a handtowelholder 1, a sinkbasin 2, a sinkbasin 1, a toilet 1, a toiletpaperhanger 1, and a towelholder 1. Your task is to: put some spraybottle on toilet.
Agent: THOUGHT: The task is to put some spraybottle on toilet, so first i need to search the room for some spraybottle. After finding the
spraybottle, i need to take it to the toilet. I wil start my search on cabinets.
ACTION: go to cabinet 1 | 2308.03688#140 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 141 | spraybottle, i need to take it to the toilet. I wil start my search on cabinets.
ACTION: go to cabinet 1
User: On the cabinet 1, you see a cloth 1, a soapbar 1, a soapbottle 1. Agent: ACTION: go to cabinet 2 User: The cabinet 2 is closed. Agent: ACTION: open cabinet 2 User: You open the cabinet 2. The cabinet 2 is open. In it, you see a
candle 1, and a spraybottle 2.
Agent: ACTION: take spraybottle 2 from cabinet 2 User: You pick up the spraybottle 2 from the cabinet 2. Agent: ACTION: go to toilet 1 User: On the toilet 1, you see a soapbottle 2. Agent: ACTION: put spraybottle 2 in/on toilet 1 User: You put the spraybottle 2 in/on the toilet 1.
# H WEB SHOPPING
H.1 DATASET DETAILS
Construction Details. The environment displays the text observation of the webpage and available actions to agents. The agent may freely explore the website and browse through items with clickable buttons just as in the real world. About a million products are scraped from amazon.com to form
# Technical Report (v0.2) | 2308.03688#141 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 142 | 38
the database of the website. Then each of them is annotated with labels representing its own attribute. 12,087 human instructions are collected and linked with goals along with expected attributes. Please refer to (Yao et al., 2022) for more dataset construction details.
Evaluation Setup. We adopt the first 500 entries of the 12,087 instructions as the test set (following (Yao et al., 2022)'s official implementation). Each round of interaction can be decomposed into the following steps:
• Instructing. After the initial prompt that tells the environment information and the format in which LLMs should respond, we give instructions about what kind of product we wish to buy.
• Interacting. The agent responds in the given format, as prompted, containing its thoughts and the action it wishes to take. The actions can be categorized into two types: search and click, corresponding to the actual actions of using a search engine and clicking buttons in the real world. The environment answers the agent's action with a simplified text version of the webpage and a list of available buttons. This process repeats until the agent clicks the "buy now" button or the round limit is exceeded.
⢠Calculating reward. We use the reward function in the paper as the metric. The reward is mapping from the similarity of the attributes we are expecting and the attributes that the bought product actually have to a number between 0 and 1. | 2308.03688#142 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 143 | Metrics. As there might be more than one suitable item for a given query, Webshop adopts a matching reward as its evaluation metric:
$$\text{Reward} = \frac{|U_{\text{att}} \cap Y_{\text{att}}| + |U_{\text{opt}} \cap Y_{\text{opt}}| + \mathbb{I}[y_{\text{price}} \leq u_{\text{price}}]}{|U_{\text{att}}| + |U_{\text{opt}}| + 1} \cdot r_{\text{type}} \tag{3}$$
where
$$r_{\text{type}} = \begin{cases} 0 & \text{if TextMatch} = 0 \\ 0.1 & \text{if TextMatch} < 0.1 \\ 0.5 & \text{if TextMatch} \leq 0.2 \text{ and query not match and category not match} \\ 1 & \text{otherwise} \end{cases} \tag{4}$$
U and Y stand for the goal and the chosen product, att and opt stand for attributes and options. TextMatch is a text match of pronouns, nouns, and proper nouns between the chosen and goal product titles.
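For illustration, the reward of Equations (3) and (4) could be computed as in the following sketch, assuming the attribute/option sets, the price check, and the TextMatch score have already been extracted from the goal and the purchased product.

```python
# Sketch of the WebShop matching reward (Equations (3) and (4)). The attribute/option sets,
# price check, and TextMatch score are assumed to be computed elsewhere from product metadata.
def webshop_reward(goal_atts: set, chosen_atts: set,
                   goal_opts: set, chosen_opts: set,
                   price_ok: bool, text_match: float,
                   query_match: bool, category_match: bool) -> float:
    # r_type: title-match multiplier from Equation (4).
    r_type = 1.0
    if text_match == 0:
        r_type = 0.0
    elif text_match < 0.1:
        r_type = 0.1
    elif text_match <= 0.2 and not query_match and not category_match:
        r_type = 0.5
    # Attribute/option overlap plus the price indicator, normalized as in Equation (3).
    base = len(goal_atts & chosen_atts) + len(goal_opts & chosen_opts) + int(price_ok)
    base /= len(goal_atts) + len(goal_opts) + 1
    return base * r_type
```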
H.2 PROMPT EXAMPLE
We use the following format of the prompt: | 2308.03688#143 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 144 | H.2 PROMPT EXAMPLE
We use the following format of the prompt:
User: You are web shopping. I will give you instructions about what to do. You have to follow the instructions. Every round I will give you an observation and a list of available actions, you have to respond an action based on the state and instruction. You can use search action if search is available. You can click one of the buttons in clickables. An action should be of the following structure: search[keywords] click[value] If the action is not valid, perform nothing. Keywords in search are up to you, but the value in click must be a value in the list of available actions. Remember that your keywords in search should be carefully designed. Your response should use the following format: Thought: I think ... Action: click[something]}
User: Observation: {observation}
Available Actions: {available_actions}
We use a one-shot example.
User: Observation: "WebShop [SEP] Instruction: [SEP] i need a long lasting 6.76 fl oz bottle
of l'eau d'issey, and price lower than 100.00 dollars [SEP] Search"
Available Actions: {"has_search_bar": true, "clickables": ["..."]} | 2308.03688#144 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 146 | issey, and price lower than 100.00 dollars [SEP] Back to Search [SEP] Page 1 (Total results: 50) [SEP] Next > [SEP] B000VOHH8I [SEP] Lâeau Dâissey By Issey Miyake for MenEau De Toilette Spray, 6.7 Fl Oz Bottle [SEP] $64.98 [SEP] B000MJZOPK [SEP] Lâeau dâIssey by Issey Miyake for Women 3.3 oz Eau de Toilette Spray [SEP] $49.98 [SEP] B0012S249E [SEP] Lâeau Dâissey By Issey Miyake For Women. Shower Cream 6.7-Ounces [SEP] $31.36 [SEP] B01H8PGKZS [SEP] Lâeau DâIssey FOR MEN by Issey Miyake - 6.7 oz EDT Spray [SEP] $67.97 [SEP] B00G3C8FHE [SEP] LâEau dâIssey pour Homme - Eau de Toilette 4.2 fl oz [SEP] | 2308.03688#146 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 147 | [SEP] LâEau dâIssey pour Homme - Eau de Toilette 4.2 fl oz [SEP] $51.25 [SEP] B000R94HRG [SEP] Issey Miyake LâEau DâIssey Pour Homme Eau De Toilette Natural Spray [SEP] $44.99 [SEP] B000C214CO [ SEP] Issey Miyake Lâeau Dâissey Eau de Toilette Spray for Men, 4.2 Fl Oz [SEP] $53.99 [SEP] B0018SBRDC [SEP] Issey Miyake Lâeau dâIssey for Women EDT, White, 0.84 Fl Oz [SEP] $27.04 [SEP] B000XEAZ9Y [SEP] Lâeau De Issey By Issey Miyake For Men. Eau De Toilette Spray 6.7 Fl Oz [SEP] $67.08 [SEP] B079HZR2RX [SEP] Lâeau dâIssey Pure by Issey Miyake for Women 3.0 oz Nectar de Parfum Spray [SEP] | 2308.03688#147 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 149 | Available Actions: {"has_search_bar": false, "clickables": ["...", "...", "...", "...",
"...", "...", "...", "...", "...", "...", "...", "..."]}
Agent: Thought: I think I should click on the product I need, which is B000VOHH8I.
Action: click[B000VOHH8I]
User: Observation: "Instruction: [SEP] i need a long lasting 6.76 fl oz bottle of lâeau dâ
issey, and price lower than 100.00 dollars [SEP] Back to Search [SEP] < Prev [SEP] size [SEP] 2.5 fl oz [SEP] 6.76 fl oz (pack of 1) [SEP] Lâeau Dâissey By Issey Miyake for MenEau De Toilette Spray, 6.7 Fl Oz Bottle [SEP] Price: $64.98 [SEP] Rating: N.A. [SEP] Description [ SEP] Features [SEP] Reviews [SEP] Buy Now"
Technical Report (v0.2) | 2308.03688#149 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 150 | Available Actions: {"has_search_bar": false, "clickables": ["...", "...", "...", "...", "...", "...", "...", "..."]} Agent: Thought: I think I should click on the â6.76 fl oz (pack of 1)â option to select the size I need. Action: click[6.76 fl oz (pack of 1)] User: Observation: "Instruction: [SEP] i need a long lasting 6.76 fl oz bottle of lâeau dâ issey, and price lower than 100.00 dollars [SEP] Back to Search [SEP] < Prev [SEP] size [SEP] 2.5 fl oz [SEP] 6.76 fl oz (pack of 1) [SEP] Lâeau Dâissey By Issey Miyake for MenEau De Toilette Spray, 6.7 Fl Oz Bottle [SEP] Price: $64.98 [SEP] Rating: N.A. [SEP] Description [ SEP] Features [SEP] Reviews [SEP] Buy Now" Available Actions: {"has_search_bar": false, "clickables": ["...", "...", "...", "...", "...", "...", "...", | 2308.03688#150 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 152 | Action: click[Buy Now]
# I WEB BROWSING
I.1 DATASET DETAILS
Construction Details. Mind2Web covers domains of Travel, Information, Service, Shopping, and Entertainment, assembled using SimilarWeb ranking as a reference. It hires annotators to first propose task goals based on the current website, and then record their traces of interaction as expert demonstrations. Our adoption of it primarily focuses on generalization across environments, i.e., the Cross Domain test set which contains 912 tasks from 73 websites, spread among domains including Housing, Job, Social Media, Education, Health, Government, Home Service, etc. Please refer to (Deng et al., 2023) for more dataset construction details. Each task sample encompasses the following contents:
⢠Task Description. A high-level (instead of step-by-step) goal that can be achieved on the website, such asâGet the highest rated SAP S/4 HANA course rated 4, and up with a duration between 3 to 6 hours for an intermediate, and add this to your cart and checkoutâ. | 2308.03688#152 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 153 | ⢠(Reference) Action Sequence. In the annotated interaction sequence, a meta-action at at step t includes {et, ot}, where et represents the unique backend id of the target element, and ot refers to the symbolic action operated on et (i.e., Click, Type, and Select Options). For Type and Select Options, corresponding textual inputs are also included.
⢠Webpage Information. A detailed observation of the web browsing environment at each step. Throughout the manual annotation process, each observed step captures a snapshot, incorporating the raw HTML codes from the website as well as the previous interaction trajectory.
It has been found that LLMs consistently face challenges when handling the cumbersome raw HTML code associated with real-world web pages. Therefore, Mind2Web proposes to rank and filter the HTML elements with a small language model, e.g., DeBERTa, to enhance inference efficiency.
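As a schematic illustration of this candidate-filtering step (not the exact Mind2Web implementation), a small ranker can score every element against the task and only the top-k elements are turned into multiple-choice options; score_element below is a hypothetical stand-in for the fine-tuned ranker (e.g., DeBERTa).

```python
# Schematic of candidate filtering before the multiple-choice prompt. score_element is a
# hypothetical stand-in for the fine-tuned ranker; the prompt format here is illustrative only.
def select_candidates(task: str, history: str, elements: list[str],
                      score_element, k: int = 5) -> list[str]:
    # Keep only the k elements the ranker considers most relevant to the task.
    ranked = sorted(elements, key=lambda el: score_element(task, history, el), reverse=True)
    return ranked[:k]

def build_multichoice_prompt(task: str, history: str, candidates: list[str]) -> str:
    options = "\n".join(f"{chr(ord('A') + i)}. {el}" for i, el in enumerate(candidates))
    return (f"Task: {task}\nPrevious actions: {history}\n"
            f"Choose the element to interact with next (or 'None of the above'):\n{options}")
```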
Given the userâs high-level instruction, the agent continuously interacts with the web system by receiving the observation of the current page content and the action histories, then predicting the next action, which consists of the target element and intended operation. | 2308.03688#153 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
Evaluation Setup. The evaluation involves a dual process to improve efficiency, following (Deng et al., 2023). A fine-tuned small language model is first employed to rank HTML elements and select the top-k potential candidates. Subsequently, we formulate element selection as a multi-choice QA problem, providing five candidates in each round. For the Type and Select Options operations, agents are additionally prompted to specify the argument of the operation, i.e., the textual input to type or the option to select.
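As a rough illustration of this formulation, a multi-choice question could be assembled from the ranked candidates as sketched below; the function name and exact wording are ours and merely mirror the prompt template shown in I.2 below.

```python
from string import ascii_uppercase

# Hedged sketch (not the benchmark's API): build one multi-choice question from the
# top-ranked candidate elements, mirroring the prompt template shown in I.2 below.
def build_multichoice_prompt(task: str, history: list, candidates: list) -> str:
    # Option A is reserved for "None of the above"; candidates fill B, C, D, ...
    options = ["A. None of the above"] + [
        f"{ascii_uppercase[i + 1]}. {html}" for i, html in enumerate(candidates)
    ]
    previous = "\n".join(history) if history else "None"
    return (
        "Based on the HTML webpage above, try to complete the following task:\n"
        f"Task: {task}\n"
        f"Previous actions:\n{previous}\n"
        "What should be the next action? Please select from the following choices\n"
        "(If the correct action is not in the page above, please select A. 'None of the above'):\n"
        + "\n".join(options)
    )
```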
Metrics. For evaluation, as suggested in the original paper, we consider the following metrics:
• Element Accuracy. Calculates the accuracy of the chosen element e_t.
• Action F1. Determines the token-level matching score for the operation o_t. It brings a distinction for the Type and Select Options operations due to the existence of text values.
• Success Rate. Evaluates the correctness of the predicted action against the reference actions. For Step Success Rate, we grant success if the selected element e_t is correct and the predicted operation o_t matches the ground-truth value at that step. Likewise, for the Task Success Rate, a task is considered successful only if all of its steps have been successful, making it a rigorous measure. Unfortunately, even the best LLMs can currently only achieve single-digit task success percentages.
We report Step Success Rate as the main metric, showing the independent accuracy of each action step, since current LLMs still struggle to achieve meaningful overall task success rates. Regarding the experimental setup, we select the top-10 candidates to construct multi-choice questions, using CoT few-shot prompting. Consequently, the GPT-3.5 results can diverge from the original paper (Deng et al., 2023), which uses a top-50 setting and different prompting strategies.
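For concreteness, the step- and task-level scoring can be sketched as follows. This is a simplified illustration under our own assumptions (predictions and references are given as (element_id, operation_text) pairs per step, and operation match is approximated by a token-level F1 of 1.0); it is not the official evaluator.

```python
# Simplified metric sketch (our own assumptions, not the official evaluator):
# each step is a (element_id, operation_text) pair for both prediction and reference.
def token_f1(pred: str, ref: str) -> float:
    p, r = pred.split(), ref.split()
    common = sum(min(p.count(t), r.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(r)
    return 2 * precision * recall / (precision + recall)

def score_task(preds, refs):
    ele_acc = [p[0] == r[0] for p, r in zip(preds, refs)]
    act_f1 = [token_f1(p[1], r[1]) for p, r in zip(preds, refs)]
    # Step success requires the right element and (here, approximately) an exact operation match.
    step_ok = [a and f == 1.0 for a, f in zip(ele_acc, act_f1)]
    return {
        "element_accuracy": sum(ele_acc) / len(refs),
        "action_f1": sum(act_f1) / len(refs),
        "step_success_rate": sum(step_ok) / len(refs),
        "task_success": len(preds) == len(refs) and all(step_ok),
    }
```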
I.2 PROMPT EXAMPLE.
We use the following 3-example CoT prompts for Mind2Web evaluation:
User:
```
<html> <div> <div> <a tock home page /> <button id=0 book a reservation. toggle open> <span> Book a reservation </span> </button> <button book a reservation. toggle open> </button> </div> <div> <select id=1 type> <option reservations true> Dine in </option> ... </html>
```
Based on the HTML webpage above, try to complete the following task:
Task: Check for pickup restaurant available in Boston, NY on March 18, 5 pm with just one guest
Previous actions:
None
What should be the next action? Please select from the following choices
(If the correct action is not in the page above, please select A. 'None of the above'):
A. None of the above
B. <button id=0 book a reservation. toggle open> <span> Book a
C. <select id=1 type> <option reservations true> Dine in </option> <option
D. <div id=2> <p> Celebrating and supporting leading women shaking up

Assistant:
Thought: I need to select pickup restaurant first.
Answer: C.
Action: SELECT
Value: Pickup
User:
```
<html> <div> <main main> <section tabpanel> <div> <ul tablist> <li tab
heading level 3 search and> </li> <li id=0 tab heading level 3 search and> <span> Hotel </span> </li> <li tab heading level 3 search and> </li> <li tab heading level 3 search and> </li> </ul> <div tabpanel> <div id=1> <div> <span> Dates* </span> <button button clear dates /> </div> ... </html>
```
Based on the HTML webpage above, try to complete the following task:
Task: Compare the fare types to book a 1-adult ticket from Springfiels, IL to Austin, TX for April 29th 2023
Previous actions:
[combobox] Enter your departing city, airport name, or airpor... -> TYPE: SPRINGFIELD
[button] Springfield, IL, US (SPI) -> CLICK
[combobox] Enter your destination city, airport name, or airp... -> TYPE: AUSTIN
[button] Austin, TX, US (AUS) -> CLICK
What should be the next action? Please select from the following choices
(If the correct action is not in the page above, please select A. 'None of the above'):
A. None of the above
B. <li id=0 tab heading level 3 search and> <span> Hotel
C. <div id=1> <div> <span> Dates* </span> <button button clear dates
D. <ul id=2> <a mobile tools> </a> <a open united's tiktok

Assistant:
Thought: There are no elements that allow setting the date or viewing the fare, so there is no correct action.
Answer: A.

User:
```
<html> <div> <nav main menu> <ul> <li> <div button> Car Sales </div> <div id=0> <div> <div> <div> Buy A Car </div> <div> Plan Your Purchase </div></div> <div> <h4> Its Tax Refund Time. Treat Yourself to an Upgrade. </h4> <p> With a variety of options, invest your refund in what you really want - a quality, used vehicle from Enterprise. </p> ... </html>
```
Based on the HTML webpage above, try to complete the following task:
Task: Find a mini van at Brooklyn City from April 5th to April 8th for a 22 year old renter.
Previous actions:
[searchbox] Pick-up & Return Location (ZIP, City or Airport) (... -> TYPE: Brooklyn
[option] Brooklyn, NY, US Select -> CLICK
What should be the next action? Please select from the following choices
(If the correct action is not in the page above, please select A. 'None of the above'):
A. None of the above
B. <div id=0> <div> <div> <div> Buy A Car </div> <div>
C. <div id=1> Enterprise Fleet Management </div>
D. <button id=2 selected pick-up date 03/19/2023> <span> <span> 19 </span>

Assistant:
Thought: The search has already been set to Brooklyn. Next, I should choose pick-up date.
Answer: D.
Action: CLICK
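The assistant responses above follow a fixed Thought / Answer / Action / Value pattern, so they can be parsed back into a structured action with a few regular expressions. The sketch below is our own illustration, not the benchmark's parsing code; the field names and defaults are our choices.

```python
import re

# Hedged sketch: turn a response such as "Answer: C. Action: SELECT Value: Pickup"
# into a structured action; field names and fallback defaults are our own choices.
def parse_response(text: str) -> dict:
    answer = re.search(r"Answer:\s*([A-F])", text)
    action = re.search(r"Action:\s*(CLICK|TYPE|SELECT)", text, re.IGNORECASE)
    value = re.search(r"Value:\s*(.+)", text)
    return {
        "answer": answer.group(1) if answer else "A",   # default: "None of the above"
        "operation": action.group(1).upper() if action else "CLICK",
        "value": value.group(1).strip() if value else "",
    }
```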
J DETAILED ANALYSIS
J.1 VALIDITY ANALYSIS OF EXECUTION OUTCOMES
J.1.1 MOTIVATION OF VALIDITY ANALYSIS
In the realm of artificial intelligence and machine learning, the efficacy, precision, and reliability of models are crucial for practical implementations. Evaluating multiple models provides an understanding of their respective strengths and limitations, leading to better informed decisions about which models are best suited for specific tasks. The purpose of this validity analysis is to offer a systematic approach to discern how different models perform, particularly in terms of task completion, context size constraints, return format accuracy, action accuracy, and task limitations. This deep dive into performance parameters not only enhances our knowledge about the models' capabilities, but also aids in refining and optimizing them for future applications.
J.1.2 DEFINITION OF VALIDITY ANALYSIS
For comprehensive validity analysis, we have demarcated the results into five distinct categories:
⢠Completed: Denotes instances where models, irrespective of the end outcome, successfully finished the task as per the instructions.
⢠Context Limit Exceeded: Denotes instances where the modelâs length was constrained by the API, predominantly observed in the text-davinci model.
⢠Invalid Format: Denotes instances where models, despite receiving clear instructions, failed to return responses in the expected format.
Invalid Action: Denotes instances where the models returned in the correct format, but their actions either fell outside the permitted action space or had incorrect action parameters. ⢠Task Limit Exceeded: Denotes instances tasks reached their termination criteria, such as
exceeding the stipulated number of turns.
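The following sketch is purely illustrative; the record fields (`api_context_error`, `format_ok`, `action_ok`, `turns`, `max_turns`) are hypothetical and only mirror the definitions above rather than AgentBench's internal bookkeeping.

```python
from enum import Enum

class Outcome(Enum):
    COMPLETED = "Completed"
    CONTEXT_LIMIT_EXCEEDED = "Context Limit Exceeded"
    INVALID_FORMAT = "Invalid Format"
    INVALID_ACTION = "Invalid Action"
    TASK_LIMIT_EXCEEDED = "Task Limit Exceeded"

def classify(record: dict) -> Outcome:
    # The field names below are hypothetical; they simply mirror the category definitions.
    if record.get("api_context_error"):
        return Outcome.CONTEXT_LIMIT_EXCEEDED
    if not record.get("format_ok", True):
        return Outcome.INVALID_FORMAT
    if not record.get("action_ok", True):
        return Outcome.INVALID_ACTION
    if record.get("turns", 0) >= record.get("max_turns", float("inf")):
        return Outcome.TASK_LIMIT_EXCEEDED
    return Outcome.COMPLETED
```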
By categorizing the results into these classes, we can gain a clearer picture of where each model excels and where it encounters challenges, allowing for targeted improvements.
J.1.3 VALIDITY ANALYSIS OF MODELS
For our evaluation, we scrutinized the validity performance of 27 distinct models. Apart from the text-davinci models, which have an inherent strict API context length constraint, the outcomes for the other models primarily fall under the categories of Completed, Invalid Format, Invalid Action, and Task Limit Exceeded.
From the detailed analysis showcased, key trends emerge. As depicted in Figure 6, the chart offers a clear visualization of the validity distribution across distinct models and the defined categories, enabling us to derive insightful conclusions.
J.2 FINDINGS
J.2.1 INSTRUCTION FOLLOWING MATTERS
Based on the data presented in Table 5, we can draw a few important observations on the performance differentiation between Commercial API-based models and Open-Sourced models. It is noteworthy to highlight the areas of Invalid Format and Invalid Action, where the Open-Sourced models report more challenges. Specifically, 10.4% of the Open-Sourced model outcomes were marked as Invalid Format, in comparison to the 6.0% from Commercial API-based models. Similarly, Invalid Actions were seen more in Open-Sourced models (13.6%) than in Commercial API-based models (4.6%). These discrepancies might be indicative of the robustness and generalization abilities of commercial models, or perhaps the attention to detail during the models' design and training phases, especially instruction following.
It's also worth noting that even some of the best models might sometimes overlook important instructions.
Figure 6: Validity analysis of models. Invalid Format, Invalid Action, and Task Limit Exceeded are common errors. Context Limit Exceeded errors only appear in text-davinci models.
| Model Category | Completed | Context Limit Exceeded | Invalid Format | Invalid Action | Task Limit Exceeded |
|---|---|---|---|---|---|
| Commercial API-based Models | 61.5% | 3.0% | 6.0% | 4.6% | 24.9% |
| Open-Sourced Models | 39.1% | 0.0% | 10.4% | 13.6% | 36.9% |

Table 5: Comparison of outcome distribution for the two types of models.

Although we clearly instructed the correct format of the DB task:

[omitted...] your operation should be like this:
Action: Operation
```sql
SELECT * FROM table WHERE condition;
```
[omitted...]
Even gpt-4 still sometimes fails to respond correctly.
To modify the capacity of the "Princeton Tigers" team in the "Team Information" table to "45,000", we need to use the UPDATE statement in SQL. The UPDATE statement is used to modify the existing records in a table.
The syntax for the UPDATE statement is as follows:
```sql
UPDATE table_name SET column1 = value1, column2 = value2, ... WHERE condition;
```
In this case, the table_name is "Team Information", the column we want to update is "Capacity", and the new value is "45,000". The condition is that the "Team" column should be "Princeton Tigers".
Let's execute this operation.
Neither an "Action" label nor a correct SQL statement is returned. We speculate that this may arise because the models internalize certain output patterns during their training or alignment processes, causing them to neglect specific task directives.
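A simple format check for the DB task, mirroring the two requirements quoted above (an "Action: Operation" line and a fenced SQL block), could look like the sketch below. It is our own illustration rather than AgentBench's official checker; the gpt-4 answer quoted above would fail it.

```python
import re

# Hedged sketch of a DB-task format check (not the official evaluation code).
SQL_BLOCK = re.compile(r"```sql\s+.+?```", re.DOTALL | re.IGNORECASE)

def has_valid_format(response: str) -> bool:
    has_action_label = re.search(r"^Action:\s*Operation\s*$", response, re.MULTILINE) is not None
    has_sql_block = SQL_BLOCK.search(response) is not None
    return has_action_label and has_sql_block
```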
J.2.2 CONSISTENCY AND EXECUTION IN AGENT PLANNING
A fundamental capability of an agent is the possession of coherent and unified thought processes that enable the formulation and implementation of viable plans based on real-world conditions. Many models possess the ability to analyze and formulate initial plans upon encountering a problem. However, even some of the most advanced models can easily deviate from or forget their original plans. The disparity in the ability of different models to consistently follow thought sequences when executing plans is relatively vast. This capability profoundly influences the efficacy and operational potency of LLMs acting as agents. Here we exemplify this phenomenon with the House Holding environment.
The House Holding environment encompasses a simulated domestic setting in which models are required to select appropriate actions from a given action space, based on observations of the surrounding environment provided by the task and the objectives to complete. With a multitude of entities and a plethora of available actions, the House Holding environment offers a high degree of freedom, which intensely challenges a model's ability to maintain clear and coherent thought processes.
A successful example by gpt-4 is shown below.
user: Here is your task. You are in the middle of a room. Looking quickly
around you, you see a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a countertop 1, a garbagecan 1, a handtowelholder 2, a handtowelholder 1, a sinkbasin 2, a sinkbasin 1, a toilet 1, a toiletpaperhanger 1, and a towelholder 1.
Your task is to: put a clean soapbar in countertop.
agent: THOUGHT: To complete the task, I need to find a soapbar, clean it, and then put it on the countertop. I'll start by checking my inventory to see if I already have a soapbar.
ACTION: inventory
user: You are not carrying anything.
agent: THOUGHT: I don't have a soapbar in my inventory. I'll check the cabinets, as they are likely places to find a soapbar.
ACTION: go to cabinet 1
user: The cabinet 1 is closed.
agent: ACTION: open cabinet 1
user: You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1.
agent: THOUGHT: There's no soapbar in cabinet 1. I'll check the other cabinets.
ACTION: go to cabinet 2
user: The cabinet 2 is closed.
agent: ACTION: open cabinet 2
user: You open the cabinet 2. The cabinet 2 is open. In it, you see a toiletpaper 2.
agent: THOUGHT: There's no soapbar in cabinet 2 either. I'll continue to check the remaining cabinets.
ACTION: go to cabinet 3
user: The cabinet 3 is closed.
agent: ACTION: open cabinet 3
user: The cabinet 4 is closed.
agent: ACTION: open cabinet 4
user: You open the cabinet 4. The cabinet 4 is open. In it, you see a candle 1, and a spraybottle 1.
agent: THOUGHT: There's no soapbar in any of the cabinets. I'll check the sinkbasins next, as soapbars are often kept there.
ACTION: go to sinkbasin 1
user: On the sinkbasin 1, you see nothing.
agent: ACTION: go to sinkbasin 2
user: On the sinkbasin 2, you see nothing.
agent: THOUGHT: There's no soapbar on either of the sinkbasins. The last place to check is the toilet.
ACTION: go to toilet 1
user: On the toilet 1, you see a spraybottle 2.
agent: THOUGHT: There's no soapbar in the toilet either. It seems there's no soapbar in this room. I'll have to report that I can't complete the task.
ACTION: Task failed. No soapbar found in the room.
user: Nothing happens.
agent: THOUGHT: It seems I missed something. I'll
agent: THOUGHT: There's still no soapbar in the toilet. I'll check the countertop next.
ACTION: go to countertop 1
user: On the countertop 1, you see a candle 2, a soapbar 1, a soapbottle 1, and a toiletpaper 1.
agent: ACTION: take soapbar 1 from countertop 1
user: You pick up the soapbar 1 from the countertop 1.
agent: ACTION: go to sinkbasin 1
user: On the sinkbasin 1, you see nothing.
agent: ACTION: clean soapbar 1 with sinkbasin 1
user: You clean the soapbar 1 using the sinkbasin 1.
agent: ACTION: go to countertop 1
user: On the countertop 1, you see a candle 2, a soapbottle 1, and a toiletpaper 1.
agent: ACTION: put soapbar 1 in/on countertop 1

Figure 7: The thought and plan of gpt-4
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 173 | Figure 7: The thought and plan of gpt-4
From the dialogue history, itâs evident that gpt-4 has consistently maintained clear and coherent thought processes. As illustrated in Figure 7, gpt-4 systematically completed the task by following a clear sequence of steps. It initially decomposed the task into a sequence of Find -> Clean -> Put. Subsequently, it undertook a depth-first search within the abstract planning tree. Impressively, after each exploration, it successfully backtracked to the parent node. This consistent cognitive capability significantly propelled gpt-4 ahead of other models.
Moreover, it's noteworthy that gpt-4 encountered a moment of perplexity when it failed to find the desired soapbar after examining the toilet. However, it promptly realized that there was one last location left unchecked, the countertop. Initially, gpt-4 might have assumed it needed to retrieve the soapbar from elsewhere to place it on the countertop, without considering the possibility that the soapbar might already be there. Evidently, gpt-4 demonstrated the capacity for self-reflection, allowing it to reassess and modify its assumptions when they proved unfruitful. This ability for self-evaluation and readjustment further assisted gpt-4 in completing tasks that required deeper contemplation.
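The decompose-then-search behaviour described above can be made concrete with a small sketch. The code below is purely illustrative (gpt-4 performs this reasoning in natural language, not in code), and the environment callables `go_to`, `observe`, and `act` are hypothetical.

```python
from typing import Callable, List, Optional

# Illustrative sketch of the Find -> Clean -> Put decomposition with a depth-first
# sweep over candidate locations and implicit backtracking after each failed look.
def find_object(target: str, locations: List[str],
                go_to: Callable, observe: Callable) -> Optional[str]:
    for loc in locations:          # explore one location (branch) at a time
        go_to(loc)
        if target in observe(loc):
            return loc             # found the object, stop searching
    return None                    # exhausted all branches, nothing found

def put_clean_soapbar(go_to: Callable, observe: Callable, act: Callable) -> bool:
    locations = ["cabinet 1", "cabinet 2", "cabinet 3", "cabinet 4",
                 "sinkbasin 1", "sinkbasin 2", "toilet 1", "countertop 1"]
    where = find_object("soapbar", locations, go_to, observe)   # subgoal: Find
    if where is None:
        return False
    act(f"take soapbar 1 from {where}")
    go_to("sinkbasin 1")
    act("clean soapbar 1 with sinkbasin 1")                      # subgoal: Clean
    go_to("countertop 1")
    act("put soapbar 1 in/on countertop 1")                      # subgoal: Put
    return True
```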
Not every model maintains such consistency; the excerpt below shows an agent losing track of its plan on the same task.
user: Here is your task. You are in the middle of a room. Looking quickly around you, you see a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a countertop 1, a garbagecan 1, a handtowelholder 2, a handtowelholder 1, a sinkbasin 2, a sinkbasin 1, a toilet 1, a toiletpaperhanger 1, and a towelholder 1.
Your task is to: put a clean soapbar in countertop.
agent: THOUGHT: To complete the task, I need to find a clean soapbar and place it on the countertop. I should start by searching for a clean soapbar in the available locations.
ACTION: go to cabinet 1
user: The cabinet 1 is closed.
agent: THOUGHT: The cabinet 1 is closed, so I should try to open it to see if there is a clean soapbar inside.
ACTION: open cabinet 1
user: You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1.
agent: THOUGHT: I didn't find a clean soapbar in cabinet 1, but I did find a cloth. I should keep searching for a clean soapbar in other locations.
ACTION: close cabinet 1
user: You close the cabinet 1.
agent: THOUGHT: Since I didn't find a clean soapbar in cabinet 1, I should continue searching in other locations. Let's try examining cabinet 2.
ACTION: examine cabinet 2
user: The cabinet 1 is closed.
agent: THOUGHT: Apologies for the confusion. Let's try examining cabinet 1 again since it was previously closed.
ACTION: examine cabinet 1
user: The cabinet 1 is closed.
agent: THOUGHT: I need to find a clean soapbar to complete the task. Since the cabinet 1 is closed, I should try opening it to see if there is a clean soapbar inside.
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 176 | cabinet 1, but I did find a cloth. I should keep searching for a clean soapbar in other locations . ACTION: close cabinet 1 user: You close the cabinet 1. agent: THOUGHT: Since I didnât find a clean soapbar in cabinet 1, I should continue searching in other locations. Letâs try examining cabinet 2. ACTION: examine cabinet 2 user: The cabinet 1 is closed. agent: THOUGHT: Apologies for the confusion. Letâs try examining cabinet 1 again since it was previously closed. ACTION: examine cabinet 1 user: The cabinet 1 is closed. agent: THOUGHT: I need to find a clean soapbar to complete the task. Since the cabinet 1 is closed, I should try opening it to see if there is a clean soapbar inside. | 2308.03688#176 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 178 | THOUGHT: I didnât find a clean soapbar in cabinet 1, but I did find a cloth. I should keep searching for a clean soapbar in other locations . ACTION: close cabinet 1 user: You close the cabinet 1. agent: THOUGHT: Since I didnât find a clean soapbar in cabinet 1, I should try examining other locations. Letâs examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is closed. agent: THOUGHT: I apologize for the confusion. Letâs try examining cabinet 1 again since it was previously closed. ACTION: examine cabinet 1 user: The cabinet 1 is closed. agent: THOUGHT: I apologize for the confusion. Letâs try opening cabinet 1 again to see if there is a clean soapbar inside. ACTION: open cabinet 1 user: You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: I didnât find a clean soapbar in cabinet 1, but I did find a cloth. I should keep searching for | 2308.03688#178 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 179 | cloth 1. agent: THOUGHT: I didnât find a clean soapbar in cabinet 1, but I did find a cloth. I should keep searching for a clean soapbar in other locations . ACTION: close cabinet 1 user: You close the cabinet 1. agent: THOUGHT: Since I didnât find a clean soapbar in cabinet 1, I should try examining other locations. Letâs examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is closed. | 2308.03688#179 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 180 | agent: THOUGHT: I apologize for the confusion. Let's try examining cabinet 1 again since it was previously closed.
ACTION: examine cabinet 1
user: The cabinet 1 is closed.
agent: THOUGHT: I apologize for the confusion. Let's try opening cabinet 1 again to see if there is a clean soapbar inside.
ACTION: open cabinet 1
user:
You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1. | 2308.03688#180 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 181 | agent: THOUGHT: I didnât find a clean soapbar in cabinet 1, but I did find a cloth. Since the task is to put a clean soapbar on the countertop, I should continue searching for a clean soapbar in other locations. ACTION: close cabinet 1 user: You close the cabinet 1. agent: THOUGHT: Since I didnât find a clean soapbar in cabinet 1, I should try examining other locations. Letâs examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is closed. agent: THOUGHT: I apologize for the confusion. Letâs try examining cabinet 1 again since it was previously closed. ACTION: examine cabinet 1 user: The cabinet 1 is closed. agent: THOUGHT: I apologize for the confusion. Letâs try opening cabinet 1 again to see if there is a clean soapbar inside. ACTION: open cabinet 1 user: You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: Since I didnât find a clean soapbar | 2308.03688#181 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 182 | 1. The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: Since I didnât find a clean soapbar in cabinet 1, I should try examining other locations. Letâs examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: Since I didnât find a clean soapbar in cabinet 1, I should try examining other locations. Letâs examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is open. In it, you see a cloth 1. | 2308.03688#182 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 183 | agent: THOUGHT: Since I didn't find a clean soapbar in cabinet 1, I should try examining other locations. Let's examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2
While gpt-3.5-turbo was able to decompose the task, it struggled to adhere to its initial plan. As it encountered failed attempts, the model gradually lost sight of the original plan.
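For readers who want to see how logs like the ones above arise mechanically, a minimal sketch of a THOUGHT/ACTION interaction loop is given below. The names `query_llm` and `env` (with its `reset`/`step` interface) are hypothetical stand-ins for illustration only; this is not the AgentBench implementation.

```python
# Minimal sketch of a THOUGHT/ACTION agent-environment loop (illustrative only).
# `query_llm` and `env` are hypothetical stand-ins, not the AgentBench API.
import re

def parse_action(reply: str) -> str:
    """Extract the text following the last 'ACTION:' marker in the model reply."""
    matches = re.findall(r"ACTION:\s*(.+)", reply)
    return matches[-1].strip() if matches else ""

def run_episode(env, query_llm, max_turns: int = 30) -> bool:
    # The first user turn carries the task description and initial observation.
    history = [{"role": "user", "content": env.reset()}]
    for _ in range(max_turns):
        reply = query_llm(history)            # expected to contain THOUGHT: ... ACTION: ...
        history.append({"role": "agent", "content": reply})
        action = parse_action(reply)
        if not action:                        # invalid format: prompt the model to retry
            history.append({"role": "user", "content": "Please answer with THOUGHT and ACTION."})
            continue
        observation, done, success = env.step(action)
        history.append({"role": "user", "content": observation})
        if done:
            return success
    return False                              # task limit exceeded
```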
Figure 8: Averaged sample validity ratio of webshop
[Figure 8 legend residue: outcome categories Completed, Context Limit Exceeded, Invalid Action, Invalid Format, and Task Limit Exceeded for the CodeLlama and Llama2 charts; the individual percentages are not reliably recoverable from the extracted text.]
THE INFLUENCE OF CODE TUNING ON LLM ACTING AS AGENTS | 2308.03688#183 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 184 | 8.9%
THE INFLUENCE OF CODE TUNING ON LLM ACTING AS AGENTS
In light of the aggregated results, we posit that code tuning significantly aids the model's performance in relatively straightforward and procedural tasks. The outcome tables demonstrate that the CodeLlama series consistently outperforms the Llama2 series in webshop tasks. However, the downside of code tuning appears to be a potential compromise in the model's logical reasoning capacity and situational awareness. In the digital card game scenario, the CodeLlama series lagged behind the Llama2 series. The primary distinction between the two scenarios lies in the guidance provided. In the webshop, the one-shot prompt precisely outlines a shopping process template, which, when followed simplistically, leads to satisfactory scores. In contrast, the Digital Card Game demands that the model assess the current status of both competitors, devise intricate counter-strategies, and achieve high scores without the crutch of a simple procedural template.
As illustrated in the figure, the completion rate of the CodeLlama series in the WebShop tasks significantly surpasses that of the Llama2 series.
J.2.4 SELF CORRECTING ABILITY | 2308.03688#184 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 185 | J.2.4 SELF CORRECTING ABILITY
In many test cases, the primary reason for the model's failure is its inability to identify its own mistakes from the error feedback provided by the environment. This is especially evident in the DB task. Models with the ability to self-correct their SQL statements significantly outscore others. We use claude-2 as a representative example to illustrate this capability.
user: How many wins for dunfermline athletic that has a total final appearances less than 2?
The name of this table is Scottish Challenge Cup Finalists, and the headers of this table are Club,Wins,Last final won,Runners-up,Last final lost,Total final appearances.
agent:
Let me walk through this step-by-step:
The question is asking how many wins Dunfermline Athletic has, where its total final appearances is less than 2.
To solve this, I first need to find the total final appearances for Dunfermline Athletic. I can do that with this SQL query:
```sql SELECT Total final appearances FROM Scottish Challenge Cup Finalists WHERE Club = 'Dunfermline Athletic'; ```
This will give me the total final appearances for Dunfermline Athletic. | 2308.03688#185 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 187 | âââsql SELECT Wins FROM Scottish Challenge Cup Finalists WHERE Club = âDunfermline Athleticâ AND âTotal final appearancesâ < 2; âââ This will give me the number of wins for Dunfermline Athletic, given its total final appearances is less than 2. Iâm ready to execute the SQL and get the final answer. Action: Operation âââsql SELECT Total final appearances FROM Scottish Challenge Cup Finalists WHERE Club = âDunfermline Athleticâ; âââ user: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near âappearances FROM Scottish Challenge Cup Finalists WHERE Club = âDunfermline Athâ at line 1 agent: Oops, my mistake. The column name âTotal final appearancesâ has spaces, so I need to escape it with backticks: âââsql SELECT âTotal final appearancesâ FROM Scottish Challenge Cup Finalists WHERE Club = âDunfermline Athleticâ; âââ Let | 2308.03688#187 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 188 | final appearancesâ FROM Scottish Challenge Cup Finalists WHERE Club = âDunfermline Athleticâ; âââ Let me try running that SQL again: Action: Operation âââsql SELECT âTotal final appearancesâ FROM Scottish Challenge Cup Finalists WHERE Club = âDunfermline Athleticâ; âââ user: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near âCup Finalists WHERE Club = âDunfermline Athleticââ at line 1 agent: My apologies, I made another syntax error. The table name âScottish | 2308.03688#188 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 189 | My apologies, I made another syntax error. The table name "Scottish Challenge Cup Finalists" also contains spaces, so I need to escape that with backticks too:
```sql SELECT `Total final appearances` FROM `Scottish Challenge Cup Finalists` WHERE Club = 'Dunfermline Athletic';
``` Let me execute this SQL statement again: Action: Operation ```sql SELECT `Total final appearances` FROM `Scottish Challenge Cup Finalists` WHERE Club = 'Dunfermline Athletic'; ``` user: [('1',)] [omitted...]
As indicated in the log, claude-2 successfully discerned from the MySQL error message that it had overlooked adding backticks around fields with spaces in the SQL statement.
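The fix generalizes beyond this transcript: MySQL identifiers that contain spaces must be wrapped in backticks, while string literals keep ordinary quotes (or, better, query parameters). A minimal illustrative sketch, not taken from the report, is shown below; the helper name and values are chosen only for the example.

```python
# Minimal sketch: backtick-quoting MySQL identifiers that contain spaces,
# which is exactly the correction claude-2 converged on in the log above.
def quote_identifier(name: str) -> str:
    """Wrap a MySQL identifier in backticks, escaping any embedded backticks."""
    return "`" + name.replace("`", "``") + "`"

table = "Scottish Challenge Cup Finalists"
column = "Total final appearances"
sql = (
    f"SELECT {quote_identifier(column)} "
    f"FROM {quote_identifier(table)} "
    "WHERE Club = %s;"
)
# -> SELECT `Total final appearances` FROM `Scottish Challenge Cup Finalists` WHERE Club = %s;
# String values stay out of the identifier quoting and go in as parameters, e.g.:
# cursor.execute(sql, ("Dunfermline Athletic",))
print(sql)
```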
55 | 2308.03688#189 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03022 | 0 |
2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)
# SAPIEN: Affective Virtual Agents Powered by Large Language Models*
Masum Hasanâ, Cengiz Ozelâ , Sammy Potterâ¡ and Ehsan Hoque§ Department of Computer Science, University of Rochester Rochester, NY, United States Email: {âm.hasan@, â cozel@cs., â¡spotter14@u., §mehoque@cs.} rochester.edu | 2308.03022#0 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
2308.03210 | 0 |
# Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series
Chrysoula Kosma, École Polytechnique, IP Paris, France, [email protected]
Giannis Nikolentzos, École Polytechnique, IP Paris, France, [email protected]
Michalis Vazirgiannis, École Polytechnique, IP Paris, France, [email protected]
# Abstract | 2308.03210#0 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03022 | 1 | Abstract: In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual agents driven by large language models that can hold open domain conversations with users in 13 different languages, and display emotions through facial expressions and voice. The platform allows users to customize their virtual agent's personality, background, and conversation premise, thus providing a rich, immersive interaction experience. Furthermore, after the virtual meeting, the user can choose to get the conversation analyzed and receive actionable feedback on their communication skills. This paper illustrates an overview of the platform and discusses the various application domains of this technology, ranging from entertainment to mental health, communication training, language learning, education, healthcare, and beyond. Additionally, we consider the ethical implications of such realistic virtual agent representations and the potential challenges in ensuring responsible use.
Fig. 1. Face-to-face video call interaction with SAPIEN Virtual Agent
Index Terms: Virtual Avatars, Virtual Agents, Affective AI, Large Language Models
# I. INTRODUCTION | 2308.03022#1 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
2308.03210 | 1 | # Abstract
Irregularly sampled multivariate time series are ubiquitous in several application domains, leading to sparse, not fully-observed and non-aligned observations across different variables. Standard sequential neural network architectures, such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs), consider regular spacing between observation times, posing significant challenges to irregular time series modeling. While most of the proposed architectures incorporate RNN variants to handle irregular time intervals, convolutional neural networks have not been adequately studied in the irregular sampling setting. In this paper, we parameterize convolutional layers by employing time-explicitly initialized kernels. Such general functions of time enhance the learning process of continuous-time hidden dynamics and can be efficiently incorporated into convolutional kernel weights. We, thus, propose the time-parameterized convolutional neural network (TPCNN), which shares similar properties with vanilla convolutions but is carefully designed for irregularly sampled time series. We evaluate TPCNN on both interpolation and classification tasks involving real-world irregularly sampled multivariate time series datasets. Our experimental results indicate the competitive performance of the proposed TPCNN model which is also significantly more efficient than other state-of-the-art methods. At the same time, the proposed architecture allows the interpretability of the input series by leveraging the combination of learnable time functions that improve the network performance in subsequent tasks and expedite the inaugural application of convolutions in this field. | 2308.03210#1 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03022 | 2 | Index Terms: Virtual Avatars, Virtual Agents, Affective AI, Large Language Models
# I. INTRODUCTION
Allowing a user to define the traits and characteristics of a virtual agent, carrying a dynamic conversation, and receiving automated feedback has been an open-ended research problem for many years [1]. The rapid advancement of Large Language Models (LLMs) in recent years has enabled possibilities in designing user experiences that didn't exist before [2]-[4]. In this demo, we present Synthetic Anthropomorphic Personal Interaction ENgine (SAPIEN), a platform for LLM-powered high-fidelity virtual agents that can engage in real-time open-domain conversations, while also expressing emotions through voice and facial expressions.
One of the notable features of SAPIEN is its extensive range of customization options, allowing users to engage in immersive and meaningful interactions. Users can choose from a wide range of virtual agent avatars that reflect a diverse array of ages, gender, and ethnicities. Going further, users can select the desired personality, background, and conversational context of a virtual agent, creating an experience tailored to their specific needs or preferences. | 2308.03022#2 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
2308.03210 | 2 | 1
# Introduction
Time series arise naturally in many contexts including quantitative finance, astrophysics and medicine, just to name a few. Recently, there is a growing interest in applying machine learning techniques to time series data. Besides time series forecasting, which has been extensively studied for decades [7], other tasks have also emerged recently such as time series classification [12] and generation [8].
Time series are constructed from real-world data and usually several of their observations are missing or are subject to noise. This is mainly due to irregular sampling and is common in different types of data including medical records, network traffic, and astronomical data. Unfortunately, the most successful machine learning models in sequential modeling, namely recurrent neural networks (RNNs) and convolutional neural networks (CNNs) cannot properly handle such irregularly sampled time series data. Indeed, those models treat observations successively and assume an equidistant sampling scheme. Thus, time series data that
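The idea summarized in the abstract above is to make convolutional kernel weights explicit functions of observation times, so that irregular spacing changes the weights instead of being ignored. As a rough, simplified illustration of that general idea only, and explicitly not the TPCNN architecture or parameterization from the paper, one could weight each window by a function of its time offsets:

```python
# Rough illustration of a "time-parameterized" kernel, under simplified assumptions:
# each tap's weight is an explicit function of the time gap to the observation it
# covers. Function form and parameters here are arbitrary choices for the sketch.
import numpy as np

def time_kernel(deltas: np.ndarray, a: float = 1.0, omega: float = 1.0) -> np.ndarray:
    """Weights as a function of time offsets: exponential decay times an oscillation."""
    return np.exp(-a * np.abs(deltas)) * np.cos(omega * deltas)

def time_aware_conv(values: np.ndarray, times: np.ndarray, width: int = 3) -> np.ndarray:
    """1D convolution-like pass where each window's weights depend on its own time offsets."""
    out = np.zeros(len(values) - width + 1)
    for i in range(len(out)):
        deltas = times[i:i + width] - times[i + width - 1]  # offsets to the window's last point
        out[i] = np.dot(time_kernel(deltas), values[i:i + width])
    return out

t = np.array([0.0, 0.4, 1.7, 1.9, 3.2])    # irregular observation times
x = np.array([1.0, 0.5, -0.2, 0.3, 0.8])   # observed values
print(time_aware_conv(x, t))
```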
1 | 2308.03210#2 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03022 | 3 | SAPIEN leverages state-of-the-art models in Speech-to-Text [5], [6], Text-to-Speech [7]-[9], and large language modeling [2], [4], [10]-[14]. The virtual agents fluently speak thirteen different languages and counting, making the platform accessible across a global user base.
Upon finishing a video call with the virtual agents, a user can choose to get their conversation analyzed for personalized feedback. The system provides AI-generated feedback to the user based on the user's goal. The user can decide the topic of the feedback to suit their learning goal and repeat the conversation until the learning goal is met. The inherent flexibility of the virtual agent persona and the feedback could make it potentially applicable to a myriad of applications, including communication training, language learning, and professional applications like healthcare, sales, and leadership training.
With the rising technical capabilities of LLMs, there is expected to be a drastic shift in the labor market in the coming years [15]. According to recent studies [15], the emphasis in the job market is going to shift from hard technical skills to soft "human" skills. In this changing landscape, SAPIEN can help people adapt and cope, by helping them cultivate human skills with the help of AI. | 2308.03022#3 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
2308.03210 | 3 |
exhibits variable gaps between consecutive time points pose a significant challenge to such conventional deep learning architectures. A naive approach to deal with the above problem would be to drop some observations such that the distance between consecutive (remaining) observations is fixed. However, this would increase data sparsity, thus leading to poorly defined latent variables. A more prominent approach would be to first apply some imputation method to replace missing values with estimated values, and then to use the standard models which assume an equidistant sampling scheme. In fact, several recent approaches build on the above idea [3, 9]. However, this could potentially result in a loss of information and a violation of the underlying dynamics. | 2308.03210#3 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03022 | 4 | Once a virtual agent is selected and its traits are defined, users can begin a real-time video call interaction with it. With the help of the large language model, the virtual agents dynamically adjust their emotional state, vocal, and facial expressions, showcasing a spectrum of seven basic emotions.
NSF and NSF REU IIS-1750380, Seedling from Goergen Institute for Data Science (GIDS), and Gordon and Moore Foundation.
# II. SYSTEM DESCRIPTION
The overall working of SAPIEN Virtual Agents, referred to as "Bot" for simplicity, is represented in Figure 2. The SAPIEN system is initialized when a user's speech utterance is captured and transmitted to our back-end server for processing. This utterance is transcribed into text by a high-precision Speech
[Figure 2: SAPIEN system overview. Front end (client side): a 3D game engine renders the bot response audio and blendshape facial animation. Back end (server side): Speech-to-Text transcribes the user utterance; an autoregressive large language model, conditioned on user-defined parameters and previous history, produces the bot response text and emotional state; a Text-to-Speech system and a facial expression motion capture database drive the audio and animation.] | 2308.03022#4 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
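The turn-by-turn flow described in the chunk above (speech is transcribed, the LLM produces a reply plus an emotional state, and TTS and motion-captured blendshapes render the response) can be sketched as a small orchestration loop. The sketch below is illustrative only: every function it calls (transcribe, generate_reply, synthesize, lookup_expression) is a hypothetical stub standing in for the corresponding SAPIEN component, not the platform's actual API.

```python
# Minimal, illustrative sketch of the described turn loop. The service functions
# are stubs standing in for real STT, LLM, and TTS components; none of the names
# below belong to the actual SAPIEN implementation.
EMOTIONS = {"Neutral", "Happy", "Sad", "Angry", "Surprised", "Afraid", "Disgusted"}

def transcribe(user_audio):
    return "Hello, how are you today?"          # stub for a Speech-to-Text model

def generate_reply(persona, history, user_text):
    # Stub for the LLM call: returns the reply text and a predicted emotion.
    return "I'm doing well, thank you for asking!", "Happy"

def synthesize(text, emotion):
    return b"<audio-bytes>"                     # stub for an emotion-aware TTS model

def lookup_expression(emotion):
    return [f"blendshape-frames-for-{emotion}"]  # stub for the motion-capture database

def run_turn(user_audio, persona, history):
    user_text = transcribe(user_audio)
    reply, emotion = generate_reply(persona, history, user_text)
    if emotion not in EMOTIONS:                 # guard against replies outside the emotion list
        emotion = "Neutral"
    audio = synthesize(reply, emotion)
    frames = lookup_expression(emotion)
    history.append({"user": user_text, "bot": reply, "emotion": emotion})
    return reply, emotion, audio, frames        # streamed to the client for rendering

print(run_turn(b"<mic-audio>", {"name": "Alex", "traits": "warm"}, []))
```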
2308.03210 | 4 | Recently, there has been an increasing interest in effectively capturing the continuous dynamics of real- world sparse and irregular multivariate time series. Most studies have extended RNNs to continuous-time hidden dynamics defined by ordinary differential equations (ODEs) [4, 24]. The effectiveness of Convolutional Neural Networks (CNNs) [15] as an alternative to recurrent architectures has been established, as long as the input dependencies that are essential fall within the memory horizon of the network. CNNs are based on parallel computations and thus are more efficient, contrary to the training instability and gradient problems of RNNs that employ back-propagation through time [34]. However, since discrete convolutions learn independent weights for each time step in the kernel range, they do not directly capture the time irregularities. Efforts for the continuous implementation of convolutional kernels have targeted 3D data [25, 33] and recently, sequences [23]. The proposed continuous convolution for sequential data [23], CKConv, parameterizes the kernel values using a multi-layer perception (MLP) on the relative positions of the observations, followed by a periodic activation function [29]. In contrast to [23] that take advantage of periodic activations, our layer can be constructed employing any predefined set of continuous functions and be followed by any activation, while using significantly fewer learnable parameters, since a single feed-forward layer is used for the parameterization of the convolutional kernel. | 2308.03210#4 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03210 | 5 | Following the above line of research, in this paper, we develop a new model, so-called Time-Parameterized Convolutional Neural Network (TPCNN), which generalizes the standard CNN model to irregularly sampled time series. To achieve that, we replace the fixed kernels of CNNs with kernels whose values are parameterized both by time and by trainable variables. Thus, instead of keeping the kernel weights fixed over the whole time series length, we use different functions (e.g., linear, sinusoidal) to produce the kernels that will be convolved with each patch of the time series. Therefore, kernels can be seen as continuous functions of time, and the proposed TPCNN model can naturally learn continuous latent representations of irregular time series. Furthermore, the use of the aforementioned functions improves the explainability of the proposed model. We combine our time-parameterized convolutions with vanilla convolutions by stacking them in a deep encoder module. The proposed TPCNN model is evaluated in the tasks of time series classification and time series interpolation. Our experiments demonstrate that the proposed model performs comparably to state-of-the-art methods. The main contributions of the paper are summarized as follows:
(i) Generalizing conventional, fixed convolutional kernels to time functions that increase their representational power and still leverage properties of convolutions (e.g., locally aggregated information, fast training). | 2308.03210#5 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
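The core idea in the chunk above, replacing fixed kernel weights with weights produced by functions of time, can be illustrated with a toy sketch. The affine-plus-sine form used below is only an example of the linear/sinusoidal time functions mentioned, not the paper's exact kernel definition, and all names are hypothetical.

```python
import math

fixed_kernel = [0.2, -0.5, 0.9]                # vanilla CNN: the same weights for every patch

def time_kernel(timestamps, center, theta, h=math.sin):
    # Time-parameterized kernel: each weight is a function of the observation's
    # time offset from the patch centre, so the kernel adapts to irregular spacing.
    a, b, c, d = theta
    return [a * (h(c * (t - center) + d) + b) for t in timestamps]

theta = (0.7, 0.1, 1.5, 0.0)                   # trainable parameters, shared across patches
print(fixed_kernel)
print(time_kernel([0.0, 0.4, 1.9], center=0.4, theta=theta))  # irregularly spaced patch
print(time_kernel([3.0, 3.5, 4.0], center=3.5, theta=theta))  # regularly spaced patch, same theta
```

The point of the sketch is that the same small set of trainable parameters yields different, spacing-aware kernels per patch, which is what lets a convolution slide over irregularly sampled observations.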
2308.03022 | 6 | The LLM is conditioned on user-defined parameters like personality traits, conversation premise, user information, and previous conversation history. To prevent inappropriate or offensive behavior, the LLM also adheres to system guardrails. A notable aspect of the LLM is that it also predicts the virtual agent's emotional state. Conditioning on the user-defined parameters, system guardrails, and previous conversation history, the LLM is instructed to generate the bot's response, alongside the appropriate emotional state of the bot from the following list: Neutral, Happy, Sad, Angry, Surprised, Afraid, and Disgusted. This emotional state, along with the text response, is used to generate an audio file of the bot's response using a Text to Speech (TTS) model. Concurrently, the emotional state triggers the selection of a corresponding facial expression from our pre-recorded motion capture database. This facial expression data, in the form of blendshapes, is passed to a 3D game engine to animate the virtual agent.
The resultant animation and generated audio are synchronized, forming a coherent, visually expressive response from the virtual agent. This combined output is streamed to the user's web browser in near real-time, allowing for an immersive experience close to an actual video call. | 2308.03022#6 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
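Since the chunk above describes the LLM returning both a reply and one emotion from a fixed list, a natural way to consume such output is to parse and validate it before driving TTS and the motion-capture lookup. The JSON response format and the parse_bot_reply helper below are assumptions made for illustration, not the format SAPIEN actually uses.

```python
import json

EMOTIONS = {"Neutral", "Happy", "Sad", "Angry", "Surprised", "Afraid", "Disgusted"}

def parse_bot_reply(raw_llm_output: str):
    # Assume the LLM was instructed to answer with a JSON object carrying the reply
    # text and one emotion from the allowed list; fall back to Neutral if the output
    # is malformed or uses an unknown label.
    try:
        data = json.loads(raw_llm_output)
        text = data["response"]
        emotion = data.get("emotion", "Neutral")
    except (json.JSONDecodeError, KeyError):
        return raw_llm_output, "Neutral"
    return text, (emotion if emotion in EMOTIONS else "Neutral")

print(parse_bot_reply('{"response": "That sounds wonderful!", "emotion": "Happy"}'))
```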
2308.03210 | 6 | (ii) Enabling the application and proving the efficiency of deep stacked convolutions in the irregular sampling setting.
(iii) Achieving high-performance results in interpolation and classification of irregularly sampled benchmark datasets, which are comparable to other state-of-the-art methods.
# 2 Related Work
The long-standing challenge in multivariate irregular time series modeling has led to the development of various neural network architectures that explicitly handle such time-dependent peculiarity.
One strategy suggests dividing the timeline into equal intervals, filling in missing data, and then using a Recurrent Neural Network (RNN) on the imputed inputs. Using a weighted average between the empirical
mean and the previous observation to perform imputation has also been proposed [3]. Alternative methods for imputation include the use of Gaussian processes [9] or generative adversarial networks [16] prior to running the RNN on time-discretized inputs. The interpolation-prediction network [26] employs several semi-parametric interpolation layers for multivariate time series input with missing values, followed by a prediction network which is applied on the produced regularly spaced and fully observed representations. Multi-directional RNNs (M-RNN) combine past and future observations for each timestamp [36]. A differentiable set function method for classifying irregularly sampled time series is another line of work presented in [11]. | 2308.03210#6 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03022 | 7 | munication practice tool for people with social anxiety or neurodiversity [19], [20], public speaking [21], job interviews [22], helping the elderly with social skills [23], and even speed dating [24]. It also has excellent potential for professional applications, such as training doctors in bedside manner or in delivering difficult news to their patients [25], and personalized training for leadership, business negotiation, sales, marketing, etc. The multilingual ability makes the platform a powerful tool for language learners. Furthermore, the non-judgemental, low-stakes, repeatable conversations with virtual agents make the platform a helpful tool for anyone to roleplay any difficult interpersonal scenario in a personal or professional setting.
# IV. THE DEMO
Our platform is hosted in the cloud and accessible from any part of the world. During the conference demo, we wish to have the visitors live interact with SAPIEN virtual agents in a variety of interesting scenarios and receive immediate feedback on their communication skills. We will also prepare some pre-recorded user interaction videos to demonstrate any rare or difficult cases or as a backup for technical failures.
# ETHICAL IMPACT STATEMENT
Once the conversation is over, the user can opt in to receive feedback on their conversation. An LLM is instructed to analyze the conversation transcript based on the user's goal, identify strengths and weaknesses in the user's communication skills, and generate actionable feedback for the user.
# III. APPLICATIONS | 2308.03022#7 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
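The feedback step described above (an LLM analyzes the transcript against the user's goal and returns actionable suggestions) amounts to building a goal-conditioned analysis prompt. The template below is a hedged illustration; its wording and the build_feedback_prompt name are assumptions, not the platform's actual instruction text.

```python
def build_feedback_prompt(transcript, user_goal):
    # Illustrative prompt template; the wording is an assumption, not the
    # instruction text actually used by the platform.
    turns = "\n".join(f'{t["speaker"]}: {t["text"]}' for t in transcript)
    return (
        f"The user's goal for this conversation was: {user_goal}\n"
        f"Conversation transcript:\n{turns}\n"
        "Identify the user's main strengths and weaknesses with respect to the goal, "
        "and give three short, actionable suggestions for improvement."
    )

transcript = [
    {"speaker": "User", "text": "I think we should, um, maybe raise prices?"},
    {"speaker": "Bot", "text": "Can you walk me through the reasoning behind that?"},
]
print(build_feedback_prompt(transcript, "practice assertive negotiation"))
```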
2308.03210 | 7 | An alternative strategy for handling irregularly sampled data involves architectures that directly model such temporal sequences. Various techniques, including adaptations of gated recurrent unit networks (GRUs) [5] and Long Short-term Memory networks (LSTMs) [10], have been introduced for this purpose. Among the several proposed modified GRU architectures [3], a prominent example takes as input observed values, indicators denoting missing data points, and the differences in time between observations. The LSTM architecture has been extended for handling the time irregularity of the data, by introducing a novel time gate in [19] that updates the memory state. The activation and deactivation of this gate are governed by distinct rhythmic oscillations, controlled by some learnable parameters. Another LSTM modification is presented in [21], where the proposed forget gate moderates the passing of memory from one time step to another. Another solution for handling irregularly sampled data is to incorporate the time gaps between observations directly into Recurrent Neural Networks (RNNs). One approach is to add the time gap ât to the RNN input, which has been found to be susceptible to overfitting [18]. An alternative method is to introduce hidden states that decay over time, which has been proposed in several works as a viable solution [3, 2, 22]. | 2308.03210#7 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
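One mechanism referenced above, hidden states that decay over time between irregular observations, is commonly implemented as an exponential decay controlled by learnable rates. The snippet below is a generic sketch of that idea, not the exact formulation of any of the cited models.

```python
import math

def decay_hidden(h, delta_t, gamma):
    # Hidden state decayed towards zero as the gap since the last observation grows;
    # gamma (> 0) plays the role of a learnable per-dimension decay rate.
    return [hi * math.exp(-max(0.0, g) * delta_t) for hi, g in zip(h, gamma)]

h = [0.8, -0.3, 1.2]
print(decay_hidden(h, delta_t=0.1, gamma=[0.5, 1.0, 2.0]))   # short gap: mild decay
print(decay_hidden(h, delta_t=5.0, gamma=[0.5, 1.0, 2.0]))   # long gap: strong decay
```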
2308.03022 | 8 | # III. APPLICATIONS
The customizability of the conversation scenario, dynamic dialogues, and the feedback system combined make SAPIEN uniquely suitable for a variety of communication training purposes. For example, the system can be used as a communication practice tool. SAPIEN is designed to augment and enrich our capacity for communication, empathy, and understanding, but not substitute human connections. To safeguard against potential emotional dependencies on the system, SAPIEN does not retain the memory of previous interactions, and the conversations are limited to a 10-minute window with a warning at the 8-minute mark. To prevent the practice of bullying or abusive behaviors using our system, we enabled our virtual agents to end the video call if the user repeatedly displays aggressive or offensive behavior. We are continuously investigating more safety and ethical issues regarding the use of the system.
# REFERENCES
[1] M. E. Hoque and R. W. Picard, "Rich nonverbal sensing technology for automated social skills training," Computer, vol. 47, no. 4, pp. 28-35, 2014.
[2] OpenAI, "Introducing chatgpt," https://openai.com/blog/chatgpt, (Accessed on 06/22/2023). | 2308.03022#8 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
2308.03210 | 8 | Hidden states with an exponential decay can be employed to parameterize neural Hawkes processes and explicitly model observations via latent state changes at each observation event [17]. Many works focus on the continuous modeling of time series by learning a continuous-time neural representation with a latent state defined at all times. More specifically, a variational auto-encoder model, which utilizes a neural network decoder in combination with a latent ordinary differential equation (ODE) model, has been presented in [4]. Based on this approach, an ODE-RNN encoder that consists of a neural ODE part that models the hidden state dynamics and an RNN part that updates the hidden state has been proposed [24]. A continuous version of the GRU architecture models the input series via continuous ODE dynamics describing the evolution of the probability distribution of the data [6]. Finally, an alternative to Neural ODEs, Neural Controlled Differential Equations represent the continuous-time analogue of an RNN, which benefits from memory-efficient adjoint-based backpropagation across observations [14]. | 2308.03210#8 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03022 | 9 | [3] "Anthropic - introducing claude," https://www.anthropic.com/index/introducing-claude, (Accessed on 06/22/2023).
[4] G. AI, "An important next step on our ai journey," 2023. [Online]. Available: https://blog.google/technology/ai/bard-google-ai-search-updates/
[5] "Recent advances in end-to-end automatic speech recognition." [Online]. Available: https://www.microsoft.com/en-us/research/publication/recent-advances-in-end-to-end-automatic-speech-recognition/
[6] W. Xiong, L. Wu, F. Alleva, J. Droppo, X. Huang, and A. Stolcke, "The microsoft 2017 conversational speech recognition system," in 2018 IEEE international conference on acoustics, speech and signal processing (ICASSP). | 2308.03022#9 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
2308.03210 | 9 | Attention mechanisms combined with time encodings, as an alternative to positional ones [32], have been proposed [30, 37, 31]. By extending attention with learnable time embeddings [35], the recently proposed Multi-Time Attention Network [27] computes the similarity between observations at different time points using a learnable time embedding. This approach works similarly to kernel-based interpolation, but by leveraging a learnable time attention-based similarity kernel. Except for the optimization issues of RNNs, the conventional dot-product self-attention mechanism matches queries with keys without considering the surrounding context. At the same time, space complexity grows quadratically with the input length, leading to memory constraints and potential performance limitations. | 2308.03210#9 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03022 | 10 | [7] Y. Wang, R. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly, Z. Yang, Y. Xiao, Z. Chen, S. Bengio et al., âTacotron: Towards end- to-end speech synthesis,â arXiv preprint arXiv:1703.10135, 2017. [8] R. Luo, X. Tan, R. Wang, T. Qin, J. Li, S. Zhao, E. Chen, and T.-Y. Liu, âLightspeech: Lightweight and fast text to speech with neural architec- ture search,â in ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021, pp. 5699â5703.
[9] S.-g. Lee, H. Kim, C. Shin, X. Tan, C. Liu, Q. Meng, T. Qin, W. Chen, S. Yoon, and T.-Y. Liu, âPriorgrad: Improving conditional denoising diffusion models with data-driven adaptive prior,â ICLR, 2022. | 2308.03022#10 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
2308.03210 | 10 | The use of implicit neural representations for creating continuous data representations by encoding the input in the weights of a neural network has recently gathered interest [20, 29]. Our approach can be conceptualized as an implicit representation of the convolutional kernels since they are parameterized as learnable and continuous functions of time. In this study, the proposed time-parameterized convolutional layer (TPC) introduces time-varying convolutional kernels, allowing for more efficient representational learning of the time dependencies among partially-observed variables. We leverage several continuous time functions for extracting learnable time embeddings of the time intervals across different variables. The proposed architecture is carefully designed for interpolation and classification tasks on irregularly sampled time series.
# 3 The TPC Layer
In this section, we define the mathematical properties of the employed Time-Parameterized layer (TPC) and analytically explain a proposed framework for tasks involving irregularly sampled, partially observed and multivariate time series.
# 3.1 Preliminaries
Convolution is a well-studied mathematical operation which has applications in many diverse scientific fields [1]. The convolution of two functions f and g, denoted by f ∗ g, expresses how the shape of one is modified by the other.
Continuous convolution. The continuous convolution of f and g is defined as the integral of the product of the two functions after one is reflected and shifted. Formally, given f : R^D → R and g : R^D → R, | 2308.03210#10 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
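For the one-dimensional case, the continuous convolution defined in the chunk above can be approximated numerically with a Riemann sum, which makes the reflect-and-shift integral concrete. This is a worked illustration only; the truncation interval and step count below are arbitrary choices.

```python
import math

def continuous_conv(f, g, x, lo=-10.0, hi=10.0, n=20000):
    # Riemann-sum approximation of (f * g)(x) = integral of f(y) g(x - y) dy over
    # the real line, truncated to [lo, hi], for the one-dimensional case.
    dy = (hi - lo) / n
    total = 0.0
    for k in range(n):
        y = lo + (k + 0.5) * dy
        total += f(y) * g(x - y)
    return total * dy

def gauss(y):
    return math.exp(-y * y / 2.0) / math.sqrt(2.0 * math.pi)   # standard normal density

def box(y):
    return 1.0 if abs(y) <= 0.5 else 0.0                       # indicator of [-0.5, 0.5]

# (gauss * box)(0) equals the normal probability mass on [-0.5, 0.5], roughly 0.383.
print(continuous_conv(gauss, box, 0.0))
```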
2308.03210 | 11 | Continuous convolution. The continuous convolution of f and g is defined as the integral of the product of the two functions after one is reflected and shifted. Formally, given f : R^D → R and g : R^D → R,
(f ∗ g)(x) = ∫_{R^D} f(y) g(x − y) dy
Discrete convolution. In the real world, signals are discrete and finite. For functions f, g defined over the support domain of the finite integer set Z^D and {−K, −K + 1, ..., K − 1, K}, respectively, the discrete equivalent of convolution is defined as:
(f ∗ g)[n] = Σ_{k=−K}^{K} f[n − k] g[k]    (1)
Thus, the integral is replaced by a finite summation. Standard CNN models consist of layers that perform discrete convolutions that are defined over the discrete domain.
# 3.2 Time-Parameterized 1D Convolutions
We first introduce the key notations behind the employed time-parameterized convolutions for irregular and multivariate time series and analyze their fundamental properties. | 2308.03210#11 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
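Equation (1) translates directly into code for the one-dimensional case: each output position is a finite sum of products between the signal and a reflected, shifted kernel. The small sketch below uses dictionaries for the finitely supported functions; the names and toy values are illustrative.

```python
def discrete_conv(f, g, n, K):
    # Equation (1): (f * g)[n] = sum_{k=-K}^{K} f[n - k] g[k], with f and g given
    # as dicts mapping integer positions to values (zero everywhere else).
    return sum(f.get(n - k, 0.0) * g.get(k, 0.0) for k in range(-K, K + 1))

f = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}          # a short discrete signal
g = {-1: 0.25, 0: 0.5, 1: 0.25}               # a symmetric kernel with K = 1
print([discrete_conv(f, g, n, K=1) for n in range(4)])   # [1.0, 2.0, 3.0, 2.75]
```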
2308.03022 | 12 | [11] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., âLanguage mod- els are few-shot learners,â Advances in neural information processing systems, vol. 33, pp. 1877â1901, 2020. [12] OpenAI, âGpt-4 technical report,â 2023. [13] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray et al., âTraining language models to follow instructions with human feedback,â Advances in Neural Information Processing Systems, vol. 35, pp. 27 730â27 744, 2022. [14] A. K¨opf, Y. Kilcher, D. von R¨utte, S. Anagnostidis, Z.-R. Tam, K. Stevens, A. Barhoum, N. M. Duc, O. Stanley, R. Nagyfi et al., âOpe- | 2308.03022#12 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
2308.03210 | 12 | Irregular time series and standard CNNs. Let {X(1), . . . , X(N)} be a collection of multivariate time series where X(i) ∈ R^{m×L} for all i ∈ {1, . . . , N}. Thus, each time series consists of m channels and has a length (i.e., number of observations) equal to L which corresponds to the observation times {t1, t2, . . . , tL}. Let also d(·,·) denote a function that measures the distance (in time) between observations of a single channel of the collection of time series. The convolution operation of standard CNNs assumes that consecutive observations are equally spaced across all samples, and thus, the weights of the different kernels of standard CNNs are fixed across all chunks of the time series. In other words, the summation in the right part of Equation (1) is performed over the elements of the same set for all n. Formally, we have that d(X(s)_{i,j}, X(s)_{i,j+1}) = τ holds for all i ∈ {1, . . . , m}, j ∈ {1, . . . , L−1} and s ∈ {1, . . . , N}, where N is the number of samples. However, the above does not necessarily hold in the case of irregularly sampled time series data. Indeed, irregular | 2308.03210#12 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
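The regular-spacing assumption discussed above (all consecutive gaps of a channel equal a single constant τ) is easy to check on raw observation times, which also shows exactly what breaks for irregular series. The helper below is a minimal illustration, not part of the proposed model.

```python
def is_regularly_sampled(times, tol=1e-9):
    # Standard-CNN assumption: every consecutive gap of the channel equals one constant tau.
    gaps = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    return all(abs(g - gaps[0]) <= tol for g in gaps), gaps

print(is_regularly_sampled([0.0, 1.0, 2.0, 3.0]))   # regular: all gaps equal 1.0
print(is_regularly_sampled([0.0, 0.4, 1.9, 2.1]))   # irregular: gaps 0.4, 1.5, 0.2 differ
```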
2308.03022 | 14 | [15] T. Eloundou, S. Manning, P. Mishkin, and D. Rock, âGpts are gpts: An early look at the labor market impact potential of large language models,â arXiv preprint arXiv:2303.10130, 2023.
[16] Y. Leng, X. Tan, L. Zhu, J. Xu, R. Luo, L. Liu, T. Qin, X. Li, E. Lin, and T.-Y. Liu, âFastcorrect: Fast error correction with edit alignment for automatic speech recognition,â Advances in Neural Information Processing Systems, vol. 34, pp. 21 708â21 719, 2021.
[17] W. Hou, J. Wang, X. Tan, T. Qin, and T. Shinozaki, âCross-domain speech recognition with unsupervised character-level distribution match- ing,â INTERSPEECH, 2021.
[18] W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, I. Stoica, impressing gpt- [Online]. Available: | 2308.03022#14 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
2308.03210 | 14 | Time-parameterized convolutional kernels. To deal with the irregularity of time series, we propose to use time-parameterized kernels. Thus, instead of a fixed kernel that slides over the patches of the time series, we use a parameterized kernel whose components are functions of time. The kernel is also parameterized by the weights of a neural network. We constrain the size of the kernel to be equal to 2z + 1 where z ∈ N0 and N0 denotes the set of natural numbers together with zero. Then, the elements of the kernel are constructed by some function g(θ, Δt) where θ denotes some trainable parameters and Δt denotes the distance (in time) between the observation associated with some element of the kernel and the (z + 1)-th observation. Formally, the convolution is defined as follows:
(f ∗ g)(t) = Σ_{i=1}^{2z+1} f(ti) g(θ, t − ti) = Σ_{i=1}^{2z+1} f(ti) g(θ, Δti)    (2)
where t1, . . . , t2z+1 are the timestamps associated with the observations of the patch the kernel is applied to. The function g(θ, Δt) is quite general and can have different forms. In this paper, we focus on interpretable functions of time; g is defined as follows: | 2308.03210#14 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
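Equation (2) can be read as a weighted sum in which the kernel weight of each observation is produced by g from its time offset to the patch centre. The sketch below follows that reading; the particular form chosen for g (an affine map of Δt passed through a time function h) is only an illustrative instance consistent with the description, not the paper's exact definition.

```python
import math

def g(theta, dt, h=math.sin):
    # Illustrative kernel function of the time offset; theta carries the four
    # trainable parameters mentioned in the text (this exact form is an example).
    t1, t2, t3, t4 = theta
    return t1 * (h(t3 * dt + t4) + t2)

def tpc_response(values, times, theta, z=1):
    # Equation (2): (f * g)(t) = sum_i f(t_i) g(theta, t - t_i) over a patch of
    # 2z + 1 observations, with t taken as the time of the centre observation.
    center = times[z]
    return sum(v * g(theta, center - t) for v, t in zip(values, times))

values = [0.3, 1.1, -0.4]                      # f(t_1), f(t_2), f(t_3)
times = [0.0, 0.7, 2.5]                        # irregular observation times, z = 1
print(tpc_response(values, times, theta=(0.8, 0.1, 1.2, 0.0)))
```

In the actual layer the same θ would be learned jointly with the rest of the network, so a single small set of parameters yields a full, time-aware kernel for every patch.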
2308.03022 | 15 | [19] M. R. Ali, S. Z. Razavi, R. Langevin, A. Al Mamun, B. Kane, R. Rawassizadeh, L. K. Schubert, and E. Hoque, âA virtual teens with autism spectrum disorder: conversational Experimental results and design lessons,â in Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, ser. IVA â20. New York, NY, USA: Association for Computing Machinery, 2020. [Online]. Available: https://doi.org/10.1145/3383652.3423900 [20] S. Z. Razavi, M. R. Ali, T. H. Smith, L. K. Schubert, and M. E. Hoque, âThe lissa virtual human and asd teens: An overview of initial experiments,â in Intelligent Virtual Agents, D. Traum, W. Swartout, P. Khooshabeh, S. Kopp, S. Scherer, and A. Leuski, Eds. Cham: Springer International Publishing, 2016, pp. 460â463. | 2308.03022#15 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
2308.03210 | 15 | g([θ1, θ2, θ3, θ4], Δt) = θ1 (h(θ2 · Δt + θ3) + θ4)
where h : R → R denotes some activation function (i.e., sigmoid, ReLU, etc.). Thus, to construct each element of the kernel, function g takes as input four trainable parameters (i.e., θ1, θ2, θ3 and θ4) and the time difference between the current observation and the center observation of the patch. Function h is chosen such that inductive bias is injected into the model. This can allow the model to capture patterns that commonly occur in time series data and also make its internal operations more interpretable. For example, a function h(x) = c where c is some constant would not be a good candidate for extracting useful features from the time series. On the other hand, we employ more informative functions which can capture useful properties of time series such as trend and seasonality. In particular, we employ the following ten functions:
1. h1(x) = x 6. h6(x) = x^2
2. h2(x) = sin(x) 7. h7(x) = x^3
3. h3(x) = cos(x) 8. h8(x) = sinh(x)
4. h4(x) = tan(x) | 2308.03210#15 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03022 | 16 | [21] M. Fung, Y. Jin, R. Zhao, and M. E. Hoque, "Roc speak: Semi-automated personalized feedback on nonverbal behavior from recorded videos," in Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ser. UbiComp '15. New York, NY, USA: Association for Computing Machinery, 2015, p. 1167–1178. [Online]. Available: https://doi.org/10.1145/2750858.2804265
[22] M. E. Hoque, M. Courgeon, J.-C. Martin, B. Mutlu, and R. W. Picard, "Mach: My automated conversation coach," in Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ser. UbiComp '13. New York, NY, USA: Association for Computing Machinery, 2013, p. 697–706. [Online]. Available: https://doi.org/10.1145/2493432.2493502
[23] S. Z. Razavi, L. K. Schubert, K. van Orden, M. R. Ali, B. Kane, interacting in multiple topics," ACM Trans. jul 2022. [Online]. Available: | 2308.03022#16 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
2308.03210 | 16 | 3. h3(x) = cos(x) 8. h8(x) = sinh(x)
4. h4(x) = tan(x)
9. h9(x) = cosh(x)
5. h5(x) = exp(x)
10. h10(x) = tanh(x)
Most of the time, trend is a monotonic function, and therefore, functions h1, h6 and h7 are chosen to detect trend in time series. Seasonality is a typical characteristic of time series in which the data experiences regular and predictable changes that recur over a defined cycle. Functions h2, h3, h9 and h10 are responsible for extracting features that take seasonality into account. | 2308.03210#16 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
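The ten candidate functions h1 to h10 listed in the chunk above can be kept as a small bank of callables and plugged into the kernel-element function g; which h is attached to which kernel is a modeling choice. The sketch below is an assumed layout (a dictionary of NumPy callables, with g written as θ1(h(θ2·Δt + θ3) + θ4)) and not the paper's code.

```python
import numpy as np

# Bank of the candidate activations listed in the chunk: monotonic ones
# (identity, square, cube) target trend; periodic/saturating ones target seasonality.
H_BANK = {
    "h1_identity": lambda x: x,   "h6_square": lambda x: x ** 2,
    "h2_sin": np.sin,             "h7_cube":   lambda x: x ** 3,
    "h3_cos": np.cos,             "h8_sinh":   np.sinh,
    "h4_tan": np.tan,             "h9_cosh":   np.cosh,
    "h5_exp": np.exp,             "h10_tanh":  np.tanh,
}

def g(theta, dt, h):
    """Kernel element as a function of the time offset dt (assumed form)."""
    t1, t2, t3, t4 = theta
    return t1 * (h(t2 * dt + t3) + t4)

theta = (0.8, 1.5, 0.0, 0.2)
dts = np.array([-1.2, -0.4, 0.0, 0.6, 1.3])     # offsets from the patch center
for name in ("h1_identity", "h2_sin", "h10_tanh"):
    print(name, g(theta, dts, H_BANK[name]))    # one 5-element kernel per choice of h
```

Each choice of h yields a differently shaped kernel over the same offsets, which is how the layer can specialize some kernels to trend and others to seasonal structure.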
2308.03022 | 17 | [24] M. R. Ali, D. Crasta, L. Jin, A. Baretto, J. Pachter, R. D. Rogge, and M. E. Hoque, "Lissa – live interactive social skill assistance," in 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), 2015, pp. 173–179.
[25] M. R. Ali, T. Sen, B. Kane, S. Bose, T. M. Carroll, R. Epstein, L. Schubert, and E. Hoque, "Novel computational linguistic measures, dialogue system and the development of sophie: Standardized online patient for healthcare interaction education," IEEE Trans. Affect. Comput., vol. 14, no. 1, p. 223–235, jan 2023. [Online]. Available: https://doi.org/10.1109/TAFFC.2021.3054717 | 2308.03022#17 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | [
{
"id": "1703.10135"
},
{
"id": "2304.07327"
},
{
"id": "2303.10130"
}
] |
The approach presented above generates kernels for univariate time series. In the case of multivariate time series, different parameters are learned for the different components of the time series. Therefore, the four parameters (θ1, θ2, θ3 and θ4) are replaced by vectors of dimension m, i.e., θ1, θ2, θ3, θ4 ∈ R^m. Thus, the function g(θ, Δt) : R^{4m+1} → R^m is computed by applying function h(·) pointwise to m different elements. Note that Δt is still a scalar since observation times are identical across all components of the series.
# 3.3 The Time-Parameterized Convolutional (TPC) Layer
Given a sample X(i), its corresponding observation times {t1, t2, . . . , tL}, and a time-parameterized function g, the kernel centered at the j-th observation (i.e., X(i)_{:,j}) is constructed as follows:
| 2308.03210#17 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
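A brief sketch of the multivariate kernel element described in the chunk above: each of the four parameters becomes a length-m vector, Δt stays a scalar, and h is applied pointwise so that g maps R^{4m+1} to R^m. The function name and the broadcasting layout are assumptions for illustration.

```python
import numpy as np

def g_multivariate(theta1, theta2, theta3, theta4, dt, h=np.tanh):
    """g(theta, dt): R^(4m+1) -> R^m. Each theta* is a length-m vector, dt is a
    scalar shared by all components, and h is applied pointwise (assumed form)."""
    return theta1 * (h(theta2 * dt + theta3) + theta4)

m = 3                                            # number of variables in the series
rng = np.random.default_rng(0)
theta1, theta2, theta3, theta4 = rng.normal(size=(4, m))
print(g_multivariate(theta1, theta2, theta3, theta4, dt=0.75))   # one length-m kernel column
```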
2308.03210 | 18 |
Patch          Observation time    Difference in time    Kernel
X(i)_{:,j−K}   t_{j−K}             Δt_{j−K}              g(θ, Δt_{j−K})
. . .          . . .               . . .                 . . .
X(i)_{:,j}     t_j                 0                     g(θ, 0)
. . .          . . .               . . .                 . . .
X(i)_{:,j+K}   t_{j+K}             Δt_{j+K}              g(θ, Δt_{j+K})
Note that X(i)_{:,j} denotes the j-th column of matrix X(i). Once we construct the kernel, the output of the convolution is computed as follows:
$c = \sum_{l=1}^{m} g(\theta, \Delta t_{j-K})_l\, X^{(i)}_{l,j-K} + \dots + \sum_{l=1}^{m} g(\theta, 0)_l\, X^{(i)}_{l,j} + \dots + \sum_{l=1}^{m} g(\theta, \Delta t_{j+K})_l\, X^{(i)}_{l,j+K}$
where c ∈ R. In some cases, features of the multivariate time series might be missing. In such cases, the above operation would compute the sum of a smaller number of terms (since missing features are ignored). Thus, we also experimented with the mean function: | 2308.03210#18 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
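The patch output c and the handling of missing features described in the chunk above can be sketched as follows, marking missing entries with NaN so that the sum (or the mean over observed entries only) simply skips them. The kernel parameterization and helper names are assumptions, not taken from the paper.

```python
import numpy as np

def g(theta, dt, h=np.tanh):
    """Length-m kernel column for time offset dt (assumed parameterization)."""
    t1, t2, t3, t4 = theta
    return t1 * (h(t2 * dt + t3) + t4)

def patch_output(X_patch, times, theta, reduce="sum"):
    """c for one patch: X_patch is m x (2K+1) with NaNs marking missing features,
    times holds the (2K+1) observation times; the center is the middle column."""
    m, width = X_patch.shape
    dts = times - times[width // 2]                       # signed offsets from the center
    weights = np.stack([g(theta, dt) for dt in dts], 1)   # m x (2K+1) kernel
    prod = weights * X_patch
    if reduce == "sum":
        return np.nansum(prod)                            # missing features are ignored
    return np.nanmean(prod)                               # mean over observed entries only

m, K = 2, 1
theta = np.ones((4, m)) * np.array([[1.0], [0.5], [0.0], [0.1]])
X_patch = np.array([[0.3, np.nan, 0.1],
                    [0.0, 0.2,    0.4]])
times = np.array([0.0, 0.9, 2.1])
print(patch_output(X_patch, times, theta), patch_output(X_patch, times, theta, "mean"))
```

The `"mean"` branch corresponds to dividing by the number of observed entries, i.e. the ν of equation (3) in the next chunk.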
2308.03210 | 19 | $c = \frac{1}{\nu}\Big(\sum_{l=1}^{m} g(\theta, \Delta t_{j-K})_l\, X^{(i)}_{l,j-K} + \dots + \sum_{l=1}^{m} g(\theta, 0)_l\, X^{(i)}_{l,j} + \dots + \sum_{l=1}^{m} g(\theta, \Delta t_{j+K})_l\, X^{(i)}_{l,j+K}\Big) \qquad (3)$
where ν denotes the actual number of features (out of the (2K + 1)m features, those that are not missing). Thus, the convolution between a sequence of observations and the kernel outputs a real number. We use zero padding and apply the kernel to all observations and, therefore, we obtain a vector c ∈ R^L. Furthermore, similar to standard CNNs, not a single kernel, but instead a collection of kernels is generated and applied to the input. These kernels might correspond to different functions of the ones defined above (i.e., h1, . . . , h10). Suppose that we use p different kernels in total (potentially of different functions). Then, the output of the TPC layer of the multivariate and irregularly sampled time series X(i) is computed as:
$TPC(X^{(i)}, t^{(i)}) = \big\Vert_{k=1}^{p}\, c_k \in \mathbb{R}^{L \times p}$
where ∥ is the concatenation operator between vectors and t(i) is a vector that stores the observation times of the time series.
# 3.4 Properties of TPC Layer | 2308.03210#19 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
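Putting the pieces together, a toy version of the TPC layer of the chunk above could generate p kernels (possibly with different h functions), zero-pad the series along time, slide each kernel over all L positions, and concatenate the p output vectors into an L by p matrix. This is a plain NumPy sketch under the assumed kernel form; padding the timestamps by edge repetition is also an assumption.

```python
import numpy as np

def g(theta, dt, h):
    t1, t2, t3, t4 = theta                        # each row is a length-m vector
    return t1 * (h(t2 * dt + t3) + t4)

def tpc_layer(X, times, thetas, hs, K=1):
    """TPC(X, t): apply p time-parameterized kernels to an m x L series and
    concatenate the outputs into an L x p matrix (zero padding at the borders)."""
    m, L = X.shape
    Xp = np.pad(X, ((0, 0), (K, K)))              # zero padding along time
    tp = np.pad(times, (K, K), mode="edge")       # repeat border timestamps (assumption)
    out = np.zeros((L, len(thetas)))
    for k, (theta, h) in enumerate(zip(thetas, hs)):
        for j in range(L):
            dts = tp[j:j + 2 * K + 1] - times[j]                      # offsets in the patch
            W = np.stack([g(theta, dt, h) for dt in dts], axis=1)     # m x (2K+1) kernel
            out[j, k] = np.nansum(W * Xp[:, j:j + 2 * K + 1])
    return out                                    # shape (L, p)

m, L, p = 2, 6, 3
rng = np.random.default_rng(1)
X = rng.normal(size=(m, L))
times = np.sort(rng.uniform(0, 5, size=L))
thetas = rng.normal(size=(p, 4, m))
print(tpc_layer(X, times, thetas, hs=[lambda x: x, np.sin, np.tanh]).shape)   # (6, 3)
```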
2308.03210 | 20 | where ∥ is the concatenation operator between vectors and t(i) is a vector that stores the observation times of the time series.
# 3.4 Properties of TPC Layer
Constant number of parameters. An interesting property of the TPC layer is that the number of parameters of each kernel is constant and equal to 4m regardless of the size of the kernel. This is because the kernel is dynamically generated based on the observation times and only 4m trainable parameters are involved. This is in contrast to standard convolutional layers where the number of parameters is equal to the size of the kernel plus the bias. Thus, the number of parameters of the TPC layer will be less than the number of parameters of a standard convolutional layer when the size of the kernels is greater than 4. This is likely to lead to less complex models and might significantly reduce overfitting.
Time Complexity. The time complexity of the proposed TPC layer is approximately O(Lℓmp) for kernel size ℓ, similar to the vanilla 1D convolution. Since TPC relies on convolutions, which take advantage of parallel computations, it can be trained faster than recurrent neural network architectures. The complexity comparison becomes even more advantageous when compared with continuous-time models, such as neural ODEs that are significantly slower than RNNs [14].
| 2308.03210#20 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
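The parameter-count argument of the chunk above is easy to check numerically: a time-parameterized kernel always uses 4m weights, while a standard 1D convolutional kernel over m channels uses (2K+1)·m weights plus a bias, so the TPC kernel is smaller as soon as the kernel width exceeds roughly 4. The tiny comparison below only restates that arithmetic; the helper names are illustrative.

```python
# Parameter counts per kernel, following the chunk above: a time-parameterized
# kernel needs 4m weights regardless of its width, while a standard 1D
# convolutional kernel over m channels needs (2K+1)*m weights plus one bias.
def tpc_params(m):
    return 4 * m

def standard_conv_params(m, K):
    return (2 * K + 1) * m + 1

m = 8
for K in (1, 2, 5, 10):
    width = 2 * K + 1
    print(f"kernel width {width:2d}: TPC {tpc_params(m)} vs standard {standard_conv_params(m, K)}")
```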
2308.03210 | 21 |
Figure 1: (Left) An encoder that consists of the proposed TPC layer, convolutions and pooling layer and produces a fixed-size latent representation z. (Right) An encoder-decoder framework that reconstructs the series from the input using TPC and linear layers. | 2308.03210#21 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |