SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
The displayed grid is purely illustrative; no visual inputs are used. our knowledge, is the first of its kind (§ 4). (2) We incorporate feasibility and cost-effectiveness into the generated plans using a joint scoring scheme named SayCanPay. As shown in Figure 1, it guides planning through three key steps: (i) Say: given a goal and an initial observation, the LLM generates likely candidate actions at each step; (ii) Can: an affordance model scores these actions' feasibility, mirroring the evaluation of preconditions; (iii) Pay: another model scores the actions according to their estimated payoff, akin to heuristic estimators (§ 5). The Can and Pay models undergo domain-specific training to align the plans with the current environment (§ 6). (3) Using this combined score as a heuristic, we search for the most feasible and cost-effective plan (§ 5.2). We demonstrate how our proposed joint scoring and heuristic search improve over current LLM planning frameworks (§ 7.3).

# 2 Related Work on Planning with LLMs

| Model | I/O | Planner | Search | Planning |
| --- | --- | --- | --- | --- |
| HSP (Bonet and Geffner 2001) | Symbolic | Symbolic | Heuristic | Offline |
| LLM+P (Liu et al. 2023) | Hybrid | Symbolic | Heuristic | Offline |
| Planning LM (Huang et al. 2022a) | NL | LLM | Greedy† | Offline |
| SayCan (Ahn et al. 2022) | NL | LLM | Greedy† | Online |
| Grounded Decoding (Huang et al. 2023) | NL | LLM | Greedy† | Online |
| Text2Motion (Lin et al. 2023) | NL | LLM | Greedy† | Online |
| ProgPrompt (Singh et al. 2023) | Symbolic | LLM | Greedy† | Offline |
| Plansformer (Pallagani et al. 2022) | Symbolic | LLM | Greedy† | Offline |
| SayCanPay (Beam-Action) | NL | LLM | Heuristic | Offline |
Table 1: Contrasts SayCanPay with existing works. I/O: input (goal/task, observation/state) / output (actions); NL: natural language. Here, Greedy† indicates that the algorithm greedily selects actions while (possibly) searching over tokens.

Table 1 categorizes LLM planning works into two broad categories based on whether the inputs (goals, states) and output actions (I/O) are natural language (NL) or symbolic (PDDL, scripting language). The approaches in the first category (Huang et al. 2022a; Valmeekam et al. 2022) often fail to model action affordances and the state of the world, leading to the generation of infeasible plans (Valmeekam et al. 2022). To improve groundedness, recent works have explored planning guided by learnable domain-specific models that score the actions' feasibility, akin to preconditions (Huang et al. 2023; Lin et al. 2023). Notably, SayCan (Ahn et al. 2022) uses pretrained low-level skills to ground the LM-generated actions. Others have used online planning with environmental and human feedback (Huang et al. 2022b). A limitation of such models, however, is their short-sighted nature, as they focus greedily on the next feasible action without considering its long-term relevance to the goal. Moreover, the plans are generated in an online fashion, interleaving action generation and execution, thus simplifying state tracking. In contrast, SayCanPay performs offline planning (i.e. complete plan generation while maintaining an internal world state) with both precondition and heuristic estimators, improving plan feasibility and cost-efficiency.
Figure 2 (panels: (a) Greedy-Token, (b) Beam-Token, (c) Greedy-Action, (d) Beam-Action): The figure outlines the decoding strategies Greedy-Token, Greedy-Action, and Beam-Action. Greedy-Token greedily selects the next best token by its probability. Greedy-Action (which is a beam search over tokens) greedily selects the next best action based on a specific decoding score. Beam-Action uses a beam search over actions, maintaining k beams and selecting the best sequence as the plan. Nodes represent either tokens w_t or actions a_t. The best plan is shown in red, the second-best node in orange, and discarded nodes in black. Here, for Beam-Action, m = 3 and k = 2.
Another line of work employs LLMs to create offline symbolic plans, leveraging LLMs' training on open-source codebases, where actions appear as function calls (Singh et al. 2023; Liang et al. 2023). The feasibility of plans is ensured through assertion checks (assert ⟨preconditions⟩) that may trigger recovery actions. However, this relies solely on the LLM's domain knowledge, which is limited to its training data and may not be aligned with the agent's current environment (e.g. espresso machine operations vary widely). Conversely, SayCanPay uses additional models trained with domain-specific knowledge collected from the current environment. There are also efforts to fine-tune LLMs like Code-T5 (Wang et al. 2021) to generate plans in PDDL (Pallagani et al. 2022). This requires a significant amount of training data (given LLMs' minimal PDDL exposure), which is not entirely justified by their performance. Yet another exciting line of work explores hybrid I/O systems like LLM+P (Liu et al. 2023) wherein, given a PDDL domain file (with a predefined action model), the LLM maps the NL inputs (task description, input observation) to a PDDL problem file. A symbolic planner then generates the plan. However, its effectiveness is limited by the closed-world constraint of the domain file, the necessity for fully observable states, and the LLM's restricted capability in translating NL to PDDL (Xie et al. 2023).

# 3 Preliminaries

Planning Framework. We formulate our planning problem, based on approximate planning (Golowich, Moitra, and Rohatgi 2022), as a finite-horizon Partially Observable Markov Decision Process (POMDP) given by the tuple
⟨S, S_G, b_0, A, O, R, T⟩. Here, S is the state space, S_G ⊆ S is a set of goal states, b_0 is the initial belief state, A is the set of actions, O is a set of observations retrieved from states via an observation function O, R : O → ℝ is a known reward function, T : S × A → Δ_S is a known stochastic transition function, and Δ_S is a distribution over states. Belief states represent the agent's knowledge of the environment at any point, given as b ∈ Δ_S. Additionally, let H_t := (A × O)^{t-1} denote the set of histories at step t, namely the set of action/observation sequences (o_0, a_1, o_1, . . . , a_{t-1}, o_{t-1}) or (a_{1:t-1}, o_{0:t-1}) the agent has access to before selecting action a_t. It is assumed that the goal states are fully observable. Unlike in MDPs, the optimal policy in a POMDP typically takes actions depending not just on the most recent observation but on the entire history. The objective of the planning algorithm is to find the optimal sequence of actions a_{1:T} (i.e. an optimal plan) from an initial belief state b_0 to a given goal state g ∈ S_G, where T is the length of the horizon.
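To make the formulation concrete, the following is a minimal sketch of how the tuple and the histories H_t could be represented in code; the class and field names are illustrative choices, not notation from the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

Action = str        # an NL action, e.g. "pick up yellow key"
Observation = str   # an NL observation produced by the observation function O

@dataclass
class POMDP:
    """Finite-horizon POMDP <S, S_G, b0, A, O, R, T> with NL observations."""
    states: List[str]
    goal_states: List[str]
    initial_belief: Dict[str, float]                        # b0: distribution over states
    actions: List[Action]
    observe: Callable[[str], Observation]                   # O : S -> O
    reward: Callable[[Observation], float]                  # R : O -> R
    transition: Callable[[str, Action], Dict[str, float]]   # T : S x A -> Delta_S

@dataclass
class History:
    """h_t = (o_0, a_1, o_1, ..., a_{t-1}, o_{t-1}): what the agent saw before choosing a_t."""
    initial_observation: Observation
    steps: List[Tuple[Action, Observation]] = field(default_factory=list)

    def extend(self, action: Action, observation: Observation) -> "History":
        # Histories are extended functionally: each step yields a new history h_{t+1}.
        return History(self.initial_observation, self.steps + [(action, observation)])
```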
Heuristic Search Planning. In real-world scenarios, where the state space can be exponentially large and exhaustive exploration infeasible, heuristic search planning (HSP) becomes useful (Bonet and Geffner 2001). Essentially, it uses heuristic functions f_heur : H_t × S_G → ℝ to guide the search process in the planning problem by computing a cost estimate from a given history of actions and observations. An example is the family of Best-First Search algorithms, which select the most promising (next) action(s) using a linear combination of the previously accumulated cost f_acc for history h_{t-1} and the estimated cost f_heur from the updated history h_t = (h_{t-1}, a_t) and goal g:

f(h_t) = z_1 · f_acc(h_{t-1}) + z_2 · f_heur(h_t, g)   (1)

Here z_1, z_2 ∈ {0, 1}. The next action a_t = arg min_{h_t} f(h_t). Special cases are the A* algorithm (z_1 = 1 and z_2 = 1) and Greedy Best-First Search (z_1 = 0 and z_2 = 1).

# 4 Language Model Planning Framework

We keep the same POMDP formulation while updating our interpretations of the tuple. Previous works have shown that language models (LMs) trained on extensive data internalize rich world knowledge that can be queried for downstream tasks like planning (Hao et al. 2023). This is akin to an internal transition function T_int. Similarly, LMs also maintain and update an internal belief state b^int over tokens (or actions). An observation function maps states to NL observations, O : S → O. The updated POMDP is now given as ⟨S, S_G, b^int_0, A, O, R, T_int⟩. In our offline planning experiments, we assume the following: (i) O = {o_0}, inducing belief state b^int_0 = 1_{s_0}, while o_t = ∅ ∀ t > 0, due to lack of environmental feedback; (ii) sparse rewards = 1 for plan success, else 0. While our LM does not utilize the reward function, one could use it for alignment (Ziegler et al. 2020).
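For reference, here is a minimal sketch of the best-first loop behind Eq. 1. The helper functions `successors`, `f_acc`, `f_heur`, and `is_goal` are placeholders the caller must supply; they are not defined in the paper.

```python
import heapq
from typing import Callable, List, Tuple

def best_first_search(
    h0,                          # initial history
    successors: Callable,        # h -> iterable of successor histories (h, a_t)
    f_acc: Callable,             # accumulated cost of a history
    f_heur: Callable,            # estimated cost-to-go given (history, goal)
    is_goal: Callable,           # history -> bool
    goal,
    z1: int = 1,
    z2: int = 1,                 # z1=z2=1 gives A*-style search; z1=0, z2=1 gives greedy best-first
):
    """Repeatedly expand the history with the lowest f(h) = z1*f_acc(h_prev) + z2*f_heur(h, goal)."""
    frontier: List[Tuple[float, int, object]] = []
    counter = 0                  # tie-breaker so heapq never has to compare history objects
    heapq.heappush(frontier, (0.0, counter, h0))
    while frontier:
        _, _, h = heapq.heappop(frontier)
        if is_goal(h):
            return h
        for h_next in successors(h):
            f = z1 * f_acc(h) + z2 * f_heur(h_next, goal)
            counter += 1
            heapq.heappush(frontier, (f, counter, h_next))
    return None                  # no plan found in the explored space
```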
Problem Statement: Given an NL goal g, a history h_0 = (o_0), and an LM generating actions a_t with probability p(a_t | h_{t-1}, g), generate the most likely plan (a_{1:T}) to go from b^int_0 to the goal g. We aim to maximize the plan's probability, reframing LM planning as a classical search problem, where we repeatedly expand the current plan a_{1:t-1} by adding action a_t. Rewriting the probability P(a_{1:T} | h_0, g) recursively:

P(a_{1:T} | h_0, g) = P(a_{1:t-1}, a_t, a_{t+1:T} | h_0, g) = p(a_{1:t-1} | h_0, g) · p(a_t | h_0, a_{1:t-1}, g) · p(a_{t+1:T} | h_0, a_{1:t}, g) = p(a_{1:t-1} | h_0, g) · p(a_t | h_{t-1}, g) · p(a_{t+1:T} | h_t, g)

To align with Eq. 1 of the planning problem, we take the log on both sides and maximize rather than minimize. We get the accumulated value f_acc(h_{t-1}) = log p(a_{1:t-1} | h_0, g), the heuristic payoff f_heur(h_t, g) = p(a_{t+1:T} | h_t, g), and f(h_t) = log P(a_{1:T} | h_0, g). Rewriting the above equation:

f(h_t) = f_acc(h_{t-1}) + log( p(a_t | h_{t-1}, g) · f_heur(h_t, g) )   (2)

The additional p(a_t | h_{t-1}, g) reflects that, unlike classical planning, which evaluates only feasible actions based on preconditions, LMs assign probabilities to each action. Here, the next action a_t = arg max_{h_t} f(h_t). Technically, the LM generates actions wherein each action is a sequence of tokens ending with the end-of-sequence token (EOS). For each action step a = (w_1, . . . , w_n) composed of tokens w_i, the LM computes the action probability as p(a) = p(w_1) ∏_{i=2}^{n} p(w_i | w_{1:i-1}).
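Both quantities in Eq. 2 can be computed directly from token-level log-probabilities; the sketch below assumes only that the LM exposes log p(w_i | w_{1:i-1}) for each generated token, and the numeric values are made up for illustration.

```python
import math
from typing import List

def action_log_prob(token_logprobs: List[float]) -> float:
    """log p(a_t | h_{t-1}, g) for an action a_t = (w_1, ..., w_n): the sum of token log-probs."""
    return sum(token_logprobs)

def f_score(f_acc_prev: float, token_logprobs: List[float], f_heur: float) -> float:
    """Eq. 2: f(h_t) = f_acc(h_{t-1}) + log( p(a_t | h_{t-1}, g) * f_heur(h_t, g) )."""
    say_logprob = action_log_prob(token_logprobs)
    return f_acc_prev + say_logprob + math.log(f_heur)

# Example: scoring two candidate expansions of the same partial plan.
f_acc = -1.2                                    # log p(a_{1:t-1} | h_0, g) accumulated so far
cand_a = f_score(f_acc, [-0.1, -0.3, -0.2], f_heur=0.9)
cand_b = f_score(f_acc, [-0.05, -0.6, -0.4], f_heur=0.5)
best = max(cand_a, cand_b)                      # a_t = argmax over candidates of f(h_t)
```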
Planning LM proposed a greedy decoding strategy wherein the LM greedily picks the next token, henceforth referred to as the Greedy-Token baseline (Figure 2, left). The generated action is then appended to the history, h_t = (h_{t-1}, a_t), and the generation process repeats until a 'done task' action is generated. Subsequent works (Lin et al. 2023) have investigated beam search over tokens. However, we are mainly interested in searching at the level of actions, not tokens.

# 5 SayCanPay Inference

The core concept of SayCanPay is to guide LMs in generating feasible and cost-effective plans. The process unfolds in three key steps: (1) Say: at each step t, the LM generates the top-m candidate actions with associated probabilities {p(a^i_t | h_{t-1}, g)}^m_{i=1}. This generation employs a beam search over tokens. (2) Can: next, a trained domain-specific model weighs these candidate actions on their feasibility, mirroring precondition evaluation. (3) Pay: finally, a trained domain-specific estimator weighs the candidate actions according to their estimated payoff. The probabilities from these three components are then combined to select the next action.
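As an illustration of the Say step, the sketch below uses the Hugging Face transformers API to propose the top-m candidate actions by beam search over tokens and return each candidate's beam score. The checkpoint name, prompt format, and the values of m and max_new_tokens are assumptions for illustration, not prescriptions from the paper.

```python
# Assumes a recent transformers version; a decoder-only LM is used here for simplicity
# (an encoder-decoder model such as Flan-T5 would use AutoModelForSeq2SeqLM instead).
from transformers import AutoModelForCausalLM, AutoTokenizer

def say_candidates(prompt: str, model, tokenizer, m: int = 6, max_new_tokens: int = 10):
    """Return m candidate actions with their (length-penalized) cumulative log-probabilities."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs,
        num_beams=m,
        num_return_sequences=m,
        max_new_tokens=max_new_tokens,
        do_sample=False,
        return_dict_in_generate=True,
        output_scores=True,
    )
    new_tokens = out.sequences[:, inputs["input_ids"].shape[1]:]   # strip the prompt tokens
    actions = tokenizer.batch_decode(new_tokens, skip_special_tokens=True)
    return list(zip(actions, out.sequences_scores.tolist()))

# Hypothetical usage with an assumed checkpoint:
# model = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-13b-v1.3", device_map="auto")
# tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.3")
# candidates = say_candidates("<Goal> pick up the ball <Step 1>", model, tokenizer)
```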
An overview of SayCanPay is provided in Figure 1. In what follows, we instantiate the LM planning problem with two decoding strategies (i.e. search algorithms that select the next action(s)): Greedy-Action (§ 5.1) and Beam-Action (§ 5.2). Each strategy is explored using three distinct decoding scores (i.e. the score used by the search algorithm to select the next action): Say, SayCan, and SayCanPay. We then elaborate on the training of the Can and Pay models (§ 6).

# 5.1 Greedy-Action

In this decoding strategy, we maintain a single action sequence and, at each step, greedily choose the next best action based on a specific decoding score. This is akin to performing Greedy Best-First Search with z_1 = 0 and z_2 = 1. The decoding score for each candidate action a^i_t is given as

f(h^i_t) = log( p(a^i_t | h_{t-1}, g) · f_heur(h^i_t, g) )

where h^i_t denotes the current history with the i-th candidate action. As shown in Figure 2, this approach can be viewed as being 'greedy' with respect to actions while using 'beams' over the tokens. Now, we explore three variations of the strategy based on how the decoding score is computed.
• Say: In this decoding score, we set the estimated payoff f_heur(h^i_t, g) = 1 ∀ i ∈ {1, . . . , m}. Hence, the action is selected solely based on the LM generation probability, without considering feasibility or payoff.

f(h^i_t) = log( p(a^i_t | h_{t-1}, g) )   (3)

• SayCan: Here, the action feasibility is also considered. Let σ_t = (a_t, pre(a_t)), where pre(a_t) denotes the preconditions of a_t. The 'can' probability² is denoted by p(pre(a_t) | h_{t-1}, g). Again, f_heur(h^i_t, g) = 1 ∀ i.

f(h^i_t) = log( p(σ^i_t | h_{t-1}, g) ) = log( p(a^i_t | h_{t-1}, g) · p(pre(a^i_t) | h_{t-1}, g) )   (4)

• SayCanPay: This decoding score accounts for the estimated payoff in addition to the scores above. Hence, the best action is selected based on a combined score of the Say, Can, and Pay terms.
f(h^i_t) = log( p(a^i_t | h_{t-1}, g) · p(pre(a^i_t) | h_{t-1}, g) · f_heur(h^i_t, g) )   (5)

# 5.2 Beam-Action

In heuristic planning, multiple potential plans (i.e. action sequences) are simultaneously maintained and iteratively expanded until the goal is achieved. To simulate this behavior, we propose to manage k action sequences. It works as follows: each sequence is expanded with m candidate actions (where m ≥ k) from the LM, resulting in a total of k × m sequences. Then the top-k sequences are retained using a specific decoding score accumulated over the sequence, as shown below. Once all k beams have terminated, we select the sequence with the highest (length-normalized)³ accumulated score. To avoid repetition, we only show the SayCanPay version; the rest can be formulated similarly.

top-k_{i,j} [ f_acc(h^i_{t-1}) + log( p(σ^j_t | h^i_{t-1}, g) · f_heur(h^{ij}_t, g) ) ]

Here, i ∈ {1, . . . , k}, j ∈ {1, . . . , m}, and k ≤ m. The updated history h^{ij}_t = (h^i_{t-1}, a^j_t) is obtained by adding the action a^j_t to the i-th beam history h^i_{t-1}. The outcome becomes the value of f_acc(h_t) for the next iteration. Note that setting k = 1 recovers Greedy-Action decoding.
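Putting § 5.1 and § 5.2 together, the following sketch maintains k beams over actions and scores every expansion with the combined Say, Can, and Pay terms. The callables `say_candidates`, `can_score`, `pay_score`, and `is_done` stand in for the LM and the two trained models; they are assumptions for illustration, not the paper's implementation.

```python
import math
from typing import Callable, List, Tuple

def beam_action_saycanpay(
    h0: str,
    goal: str,
    say_candidates: Callable[[str, str, int], List[Tuple[str, float]]],  # (h, g, m) -> [(action, log p_say)]
    can_score: Callable[[str, str, str], float],   # p(pre(a) | h, g)
    pay_score: Callable[[str, str, str], float],   # f_heur estimate in (0, 1]
    is_done: Callable[[str], bool],                # True once a "done task" action is generated
    k: int = 2,
    m: int = 3,
    max_steps: int = 20,
):
    beams = [(0.0, h0, [])]                        # (accumulated score f_acc, history, plan so far)
    finished = []
    for _ in range(max_steps):
        expansions = []
        for f_acc, h, plan in beams:
            for action, say_logp in say_candidates(h, goal, m):
                score = (f_acc + say_logp
                         + math.log(max(can_score(h, goal, action), 1e-9))
                         + math.log(max(pay_score(h, goal, action), 1e-9)))
                expansions.append((score, h + " " + action, plan + [action]))
        expansions.sort(key=lambda x: x[0], reverse=True)
        beams = expansions[:k]                     # retain the top-k action sequences
        finished += [b for b in beams if is_done(b[2][-1])]
        beams = [b for b in beams if not is_done(b[2][-1])]
        if not beams:
            break
    # Pick the best finished beam by length-normalized score; k = 1 recovers Greedy-Action.
    best = max(finished, key=lambda b: b[0] / len(b[2])) if finished else None
    return best[2] if best else None
```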
Our proposed decoding has similarities with Tree-of-Thoughts inference (Yao et al. 2023), which also maintains multiple reasoning paths to decide the next step. However, our method is specifically tailored to planning problems. It uses search and evaluation techniques akin to planning methods, making it more suited to such challenges. Now, we discuss the training details of the Can and Pay models.

# 6 Learning the Can and Pay Models

To train our domain-specific Can and Pay models, we collect N expert trajectories E = {τ_i}^N_{i=1} for each environment using an oracle planner, where τ_i = (o_0, g, a_1, a_2, . . . , a_T, r). Note that r = 1 for all expert trajectories.

# 6.1 Can Model

We model it as a classification problem, where the positive action (i.e., the action whose preconditions are satisfied) is assigned the highest probability from a set of one positive and a few negative actions. Specifically, we sample a batch of actions [h_{t-1}, g, a_t, a_{t̄≠t}, â]_{1:B} from the expert trajectories E. We then train a model M^can with the aim of minimizing the InfoNCE loss (van den Oord, Li, and Vinyals 2019):
-(1/B) Σ_{i=1}^{B} log [ M^can(h^i_{t-1}, g^i, a^i_t) / Σ_{a ∈ {a^i_t, a^i_{t̄}, â^i}} M^can(h^i_{t-1}, g^i, a) ]

²The goal g is used to evaluate the preconditions of 'done task'. ³Since different beams can have different sequence lengths.

| Environment | Example Goal | Example Initial Observation | Plan Length | \|A\| |
| --- | --- | --- | --- | --- |
| Ravens (Tower of Hanoi seq) | Move the gray disk in rod 2 | Blue disk on top of gray disk. Gray disk on top of green disk. Green disk in rod 1. The disks can be moved in rod 1, rod 2, rod 3. | 3.3 | 7.5 |
| Ravens (Put Blocks in Bowls) | Put the yellow blocks in gray bowls | There is a gray bowl 1, gray bowl 2, gray bowl 3, yellow block 1, yellow block 2, yellow block 3, blue bowl 1, red block 1, green bowl 1, orange block 1. | 6.1 | 25 |
| BabyAI (Pickup) | Pick up the ball | Room 1 has purple ball. Room 2 has yellow key, agent. Room 3 has red key. The door connecting Room 1 and Room 2 is locked. The door connecting Room 2 and Room 3 is locked. | 6.7 | 7.7 |
| VirtualHome | Read book | - | 5.9 | 150 |

Table 2: Tasks from each environment, with average plan length and average action space size |A|. For VirtualHome, we do not specify an initial observation since it is hard to describe a room environment. Here, the action space varies with episodes, depending for instance on the number of objects.

Here, B is the batch size, a_t is the positive action from trajectory τ_i executed in the context of history h_{t-1} with goal g, a_{t̄≠t} is a negative action sampled from the same trajectory τ_i but at a different time-step t̄, and â is a negative action sampled from a different trajectory τ_j (j ≠ i) with a different initial observation o_0 and goal g.
| Decoding | Ravens-Hanoi (Vicuna) | Ravens-Hanoi (Flan-T5) | Ravens-Blocks (Vicuna) | Ravens-Blocks (Flan-T5) | BabyAI (Vicuna) | BabyAI (Flan-T5) | VirtualHome (Vicuna) | VirtualHome (Flan-T5) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Greedy-Token | 45 | 30 | 30 | 96 | 59 | 0 | 0 | 0 |
| Greedy-Action Say | 48 | 30 | 51 | 96 | 62 | 0 | 32 | 0 |
| Greedy-Action SayCan | 48 | 39 | 52 | 96 | 81 | 30 | 49 | 30 |
| Greedy-Action SayCanPay | 50 | 42 | 54 | 96 | 88 | 36 | 52 | 48 |
| Beam-Action Say | 54 | 38 | 52 | 98 | 72 | 1 | 48 | 30 |
| Beam-Action SayCan | 68 | 50 | 52 | 98 | 94 | 36 | 52 | 41 |
| Beam-Action SayCanPay | 70 | 50 | 56 | 98 | 94 | 30 | 53 | 50 |

Table 3: Planning success (i.e. the number of plans out of 100 that reached the goal within limited steps) on the test split across different environments using the Vicuna and Flan-T5 models. It can be observed that the best decoding strategy is Beam-Action and the best decoding score is SayCanPay.

M^can consists of an uncased BERT model (Devlin et al. 2019) with a probe layer and is trained end-to-end to correctly identify the positive action. The input to M^can has the format '⟨Goal⟩{g} ⟨History⟩{h_{t-1}} ⟨NXT⟩{a_t}', where '⟨·⟩' serve as special tokens. The output is the Can probability p^can_{a_t} := M^can(h_{t-1}, g, a_t). The model is trained across multiple batches until F1-score convergence on the validation set.
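A compact sketch of the Can-model training just described, reading the InfoNCE objective as a softmax cross-entropy in which the positive action must out-score the sampled negatives. The encoder wiring and tensor shapes are illustrative assumptions, not the exact training code.

```python
import torch
import torch.nn.functional as F

class CanScorer(torch.nn.Module):
    """Stand-in for M^can: encode "<Goal>{g} <History>{h} <NXT>{a}" and emit one logit."""
    def __init__(self, encoder, hidden_size: int):
        super().__init__()
        self.encoder = encoder                   # e.g. an uncased BERT encoder
        self.probe = torch.nn.Linear(hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]        # [CLS] representation
        return self.probe(cls).squeeze(-1)       # one feasibility logit per example

def infonce_loss(pos_logits, neg_logits):
    """pos_logits: (B,), neg_logits: (B, num_neg). The positive action should score highest."""
    logits = torch.cat([pos_logits.unsqueeze(1), neg_logits], dim=1)   # (B, 1 + num_neg)
    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)  # index 0 = positive
    return F.cross_entropy(logits, targets)
```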
Our approach is different from SayCan (Ahn et al. 2022), which trains multiple affordance functions (corresponding to different skills) through temporal-difference-based reinforcement learning to predict the likelihood of a particular skill succeeding (i.e., executing) in the current state. Here, we show two training I/O examples, one with a positive action and another with a negative action.

Input: ⟨Goal⟩ pick up the purple box. ⟨Initial State⟩ Room 1 has yellow key, agent. Room 2 has purple box. The door connecting Room 1 and Room 2 is locked. ⟨Step 1⟩ pick up yellow key. ⟨NXT⟩ toggle yellow door. Output: 1.0

Input: ⟨Goal⟩ pick up the purple box. ⟨Initial State⟩ Room 1 has yellow key, agent. Room 2 has purple box. The door connecting Room 1 and Room 2 is locked. ⟨Step 1⟩ pick up yellow key. ⟨NXT⟩ pick up purple box. Output: 0.0

# 6.2 Pay Model

We model it as a regression problem to estimate action payoffs. Using the expert trajectories E, we create a dataset with each batch as [g, h_{t-1}, a_t, r]_{1:B}. Given sparse rewards (i.e. r_T = 1), we use temporal discounting δ ∈ (0, 1) to assign discounted rewards to the preceding actions in the trajectory⁴. This ensures that actions closer to the end receive higher rewards and vice versa. Specifically, r_{T-1} = δ, r_{T-2} = δ², and so on.
We also sample negative actions from other paths (akin to the Can model) with a reward of 0. The model is trained to align the discounted reward of the action and the predicted reward from M^pay by minimizing the mean squared error (MSE) loss (1/B) Σ_{i=1}^{B} (r^i_t - M^pay(g^i, h^i_{t-1}, a^i_t))². The model uses an uncased BERT encoder plus a regression layer whose output is bounded in [0, 1] via a sigmoid activation.

⁴The δ used for Pay model training is unrelated to the POMDP.

| Decoding | Ravens-Hanoi (Vicuna) | Ravens-Hanoi (Flan-T5) | Ravens-Blocks (Vicuna) | Ravens-Blocks (Flan-T5) | BabyAI (Vicuna) | BabyAI (Flan-T5) | VirtualHome (Vicuna) | VirtualHome (Flan-T5) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Greedy-Token | 12 | 34 | 16 | 63 | 48 | 0 | 0 | 0 |
| Greedy-Action Say | 24 | 34 | 36 | 65 | 50 | 0 | 14 | 0 |
| Greedy-Action SayCan | 55 | 46 | 40 | 71 | 53 | 26 | 23 | 6 |
| Greedy-Action SayCanPay | 58 | 47 | 48 | 74 | 54 | 28 | 29 | 15 |
| Beam-Action Say | 20 | 38 | 38 | 67 | 56 | 1 | 20 | 4 |
| Beam-Action SayCan | 47 | 54 | 42 | 74 | 56 | 30 | 26 | 19 |
| Beam-Action SayCanPay | 52 | 56 | 56 | 74 | 62 | 34 | 30 | 26 |

Table 4: Cost-effectiveness (i.e. the number of plans out of 100 that reached the goal within limited steps and also had the same plan length as the expert plan) on the test split across different environments using the Vicuna and Flan-T5 models. It can be observed that the best decoding strategy is Beam-Action and the best decoding score is SayCanPay.
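Returning to the Pay model, here is a small sketch of how the discounted regression targets and the MSE loss described above could be assembled; the discount value δ = 0.8 and the batch layout are illustrative assumptions.

```python
import torch

def discounted_targets(plan_length: int, delta: float = 0.8):
    """r_T = 1 for the final expert action, r_{T-1} = delta, r_{T-2} = delta**2, ..."""
    return [delta ** (plan_length - 1 - t) for t in range(plan_length)]

def pay_loss(predicted: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """MSE between M^pay predictions (sigmoid-bounded in [0, 1]) and discounted rewards."""
    return torch.mean((predicted - targets) ** 2)

# Example: a 4-step expert plan plus one sampled negative action (target 0).
targets = torch.tensor(discounted_targets(4) + [0.0])      # approx. [0.512, 0.64, 0.8, 1.0, 0.0]
predicted = torch.sigmoid(torch.randn(5))                   # stand-in for M^pay outputs
loss = pay_loss(predicted, targets)
```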
| Decoding | Ravens-Hanoi (Vicuna) | Ravens-Hanoi (Flan-T5) | Ravens-Blocks (Vicuna) | Ravens-Blocks (Flan-T5) | BabyAI (Vicuna) | BabyAI (Flan-T5) | VirtualHome (Vicuna) | VirtualHome (Flan-T5) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Greedy-Token | 32 | 24 | 8 | 94 | 0 | 0 | 0/20 | 0/20 |
| Greedy-Action Say | 30 | 22 | 30 | 94 | 1 | 1 | 2/20 | 0/20 |
| Greedy-Action SayCan | 18 | 18 | 10 | 26 | 4 | 28 | 3/20 | 0/20 |
| Greedy-Action SayCanPay | 18 | 16 | 6 | 18 | 12 | 28 | 3/20 | 3/20 |
| Beam-Action Say | 27 | 26 | 30 | 96 | 9 | 1 | 5/20 | 1/20 |
| Beam-Action SayCan | 34 | 26 | 10 | 22 | 12 | 15 | 5/20 | 3/20 |
| Beam-Action SayCanPay | 34 | 26 | 6 | 24 | 10 | 28 | 5/20 | 5/20 |

Table 5: Generalization results (i.e. the number of plans out of 100 that reached the goal) on the test-generalize split across different environments using the Vicuna and Flan-T5 models. It can be observed that Beam-Action outperforms the other decoding strategies.

The input format is the same as for the Can model. The output is the estimated payoff, f_heur(h_t, g) = M^pay(g, h_{t-1}, a_t).
Input: ⟨Goal⟩ pick up the purple box. ⟨Initial State⟩ Room 1 has yellow key, agent. Room 2 has purple box. The door connecting Room 1 and Room 2 is locked. ⟨Step 1⟩ pick up yellow key. ⟨Step 2⟩ toggle yellow door. ⟨Step 3⟩ drop key in void. ⟨Step 4⟩ pick up blue box. ⟨NXT⟩ done picking up. Output: 1.0 (end of plan)

Input: ⟨Goal⟩ pick up the purple box. ⟨Initial State⟩ Room 1 has yellow key, agent. Room 2 has purple box. The door connecting Room 1 and Room 2 is locked. ⟨Step 1⟩ pick up yellow key. ⟨Step 2⟩ toggle yellow door. ⟨Step 3⟩ drop key in void. ⟨NXT⟩ pick up blue box. Output: 0.6

Input: ⟨Goal⟩ pick up the purple box. ⟨Initial State⟩ Room 1 has yellow key, agent. Room 2 has purple box. The door connecting Room 1 and Room 2 is locked. ⟨Step 1⟩ pick up yellow key. ⟨Step 2⟩ toggle yellow door. ⟨Step 3⟩ drop key in void. ⟨NXT⟩ pick up green box. Output: 0

# 7 Experimental Setup

# 7.1 Say Model

The Say model does not undergo any fine-tuning and is used only for inference. We experimented with two types of transformer architectures: (i) decoder-only: the 13B-parameter Vicuna model (Chiang et al. 2023), trained by fine-tuning LLaMA (Touvron et al. 2023); (ii) encoder-decoder: Flan-T5-11B (Chung et al. 2022), the instruction-fine-tuned version of the T5 transformer (Raffel et al. 2020). Existing works have demonstrated the planning abilities of both the decoder type (Pallagani et al. 2022) and the encoder-decoder type architectures (Valmeekam et al. 2023, 2022).
Figure 3: [Best viewed in color] From left to right: planning success, cost-effectiveness, and generalization for different beam sizes (k = 1, 2, 3) across Ravens-Hanoi, Ravens-Blocks, BabyAI, and VirtualHome. Note that generalization on the test-generalize split for VirtualHome is reported as a percentage.

Since the generated plan is in free-form language and may contain unrecognizable (to the environment) words or incorrect syntax, it cannot be directly translated into actionable steps in the environment. Following Huang et al. (2022a), we use an exhaustive list of admissible actions (feasible and otherwise) and, at the end of each action step, map the generated action to the closest admissible action using minimum edit distance. Interleaving action generation and mapping ensures that all subsequent steps are conditioned on admissible actions, thus mitigating compounding errors. We provide 3 examples (input goal and observation, output plan) to the model via few-shot prompting.
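A minimal sketch of that mapping step: each free-form generated action is snapped to the admissible action with the smallest Levenshtein (edit) distance. The admissible-action list below is a made-up BabyAI-style example.

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming with a rolling row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def map_to_admissible(generated: str, admissible: list[str]) -> str:
    """Snap a free-form LM action to the closest admissible action."""
    return min(admissible, key=lambda a: edit_distance(generated.lower(), a.lower()))

# Hypothetical admissible actions for a BabyAI-style episode.
admissible = ["pick up yellow key", "toggle yellow door", "drop key in void", "done picking up"]
print(map_to_admissible("pickup the yellow key", admissible))   # -> "pick up yellow key"
```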
# 7.2 Environments

We tested in three environments, detailed in Table 2.

• Ravens (Zeng et al. 2021) is a PyBullet-simulated task set focusing on 'pick and place'. It includes 10 tabletop tasks, of which we use two: (i) Tower of Hanoi (sequence), a variation of the classic puzzle focusing on specific intermediate goals, like moving a particular disk to a designated rod while keeping the traditional constraints. This creates more goal diversity. (ii) Put Blocks in Bowls requires placing blocks into bowls based on rules like 'put yellow blocks in green bowls'. We adapt the environment for language tasks, observations, and actions.
• BabyAI (Chevalier-Boisvert et al. 2019) is a 2D grid-world environment where a bot is given a language task sampled from a predefined grammar. We focus on pickup tasks, where the agent navigates to collect an object, often unlocking doors or moving obstacles. Task difficulty varies with the number of rooms, obstacles, and distractor objects. The agent's actions include high-level commands like pickup and drop, which are composed of the atomic actions 'left', 'right', 'forward', 'pick', and 'drop' (see Figure 1).

• VirtualHome (Puig et al. 2018) is an interactive platform for simulating complex household activities via interactions with the environment, such as picking up objects or switching appliances on and off. We utilize the VirtualHome-Env dataset (Liao et al. 2019), comprising daily household activities from 7 scenes gathered via crowdsourcing. We only use the goal as input (see Table 2).

Data Splits and Evaluation. We aim to assess the success, cost-effectiveness, and out-of-distribution (OOD) generalization of the generated plans. We created three data splits for each environment using expert trajectories: (i) the train split, used for Can and Pay model training and for few-shot prompting of the Say model; (ii) the test split, which assesses the LM planners' ability to generate successful plans (i.e. reach the goal within limited steps) and cost-effective plans (i.e. plans that succeed and also have the same plan length as the expert plan⁵); (iii) the test-generalize split, which focuses on generalization capabilities like handling novel initial observations (e.g., unseen colors of blocks and bowls, distractors in BabyAI), longer sequence lengths (e.g., more blocks or disks in Ravens, more rooms in BabyAI), and unseen tasks in VirtualHome. All test splits have 100 total episodes unless specified otherwise. Moreover, all splits are disjoint (i.e. no overlap).

Baselines. At the action level, we evaluate our decoding scores (Say, SayCan, SayCanPay) using the different decoding strategies (Greedy-Action and Beam-Action). Therefore, our baselines employ a mix of these strategies and scores. For tokens, we use the Greedy-Token decoding strategy as a reference.
Notably, Greedy-Action SayCan is the offline planning version of the original SayCan paper (Ahn et al. 2022).

Training and Inference Details. We use 800 expert training trajectories for each Ravens task and 400 for BabyAI. For VirtualHome, we retained ≈800 trajectories compatible with the current simulator. An additional 100 expert trajectories were collected for each test split (20 for the VirtualHome test-generalize split). The Can and Pay models were trained on 7 NVIDIA DGX V100 GPUs using the Distributed Data Parallel framework for 20 epochs. Training parameters included a 1e-4 learning rate, the AdamW optimizer with 1e-5 weight decay, a batch size of 50, and an 80-20 train-validation split. For inference, the Say model was loaded using Model Parallel on the same GPUs. Inference hyperparameters are listed in Table 6. Parameters like beam groups and diversity penalty encourage diversity among the beams, thus avoiding multiple similar sequences. We used 8-bit precision for GPU-efficient model loading (Dettmers et al. 2022).

⁵We split test into two parts of 100 samples each to evaluate success and cost-effectiveness. For VirtualHome, we use the annotated plans from its dataset.

Figure 4: [Best viewed in color] The error plot shows the relative length (with variance over the Vicuna and Flan-T5 models) for Greedy-Token, Greedy-Action (Say, SayCan, SayCanPay), and Beam-Action (Say, SayCan, SayCanPay) on Ravens (tower of hanoi), Ravens (put blocks in bowls), BabyAI, and VirtualHome. Due to the open-ended nature of VirtualHome, the crowdsourced trajectories are not optimal, which explains why certain cases have a relative length > 1.0. Note that Greedy-Token decoding in VirtualHome has a relative length of 0 since no generated plans were executed successfully for either Vicuna or Flan-T5.

# 7.3 Results

We analyze the results along the following axes: decoding strategies, decoding scores, and transformer architectures. We assessed planning success and generalization by executing the generated plans in simulators such as Ravens and BabyAI, which have built-in validation checks to determine goal achievement.
For the more open-ended VirtualHome environment, we manually reviewed fully executed plans to ensure they met the intended task objectives. For cost-effectiveness, we acquired expert trajectories for each test sample using an oracle planner.

Comparing decoding scores. From Tables 3 and 4, the performance across the various decoding scores can be summarized as Say < SayCan ≤ SayCanPay. (i) Planning success: the SayCanPay and SayCan scores lead to comparable performances, often outperforming Say. The Pay model's minor performance edge could be due to its focus on selecting actions based on long-term relevance, potentially avoiding irreversible states (breaking an egg) or even absorbing states (discharged cellphone) from which it is impossible to reach the goal (i.e. planning is non-ergodic). (ii) Cost-effectiveness: SayCanPay leads to a significant improvement over both Say (≈11-97% for Beam-Action) and SayCan (≈0-33% for Beam-Action and ≈1-150% for Greedy-Action). (iii) Generalization: from Table 5, while the overall performance of SayCan and SayCanPay improves over Say, a noticeable drop in performance was observed for Ravens. This led to the hypothesis that the learned domain models (Can, Pay) do not generalize to OOD data in certain environments (see § 7.5 for potential solutions).

Comparing decoding strategies. From Tables 3, 4, and 5, the overall performance across decoding strategies follows the pattern Greedy-Token < Greedy-Action < Beam-Action across all splits. The Beam-Action Say, SayCan, and SayCanPay versions improve over their corresponding Greedy-Action counterparts. (i) Planning success: Beam-Action SayCanPay beats Greedy-Action SayCanPay by
≈1-40%. Similar gains are also observed with the other decoding scores. (ii) Cost-effectiveness: Beam-Action SayCanPay improves over Greedy-Action SayCanPay by ≈0-73%. (iii) Generalization: Beam-Action SayCanPay beats Greedy-Action SayCanPay by ≈0-89%.

Comparing Transformer Architectures. We did not observe a consistent performance gain for any particular architecture, suggesting that either is apt for planning. We lack a definitive explanation, and further research is required to understand how each LM component impacts reasoning.
# 7.4 Ablation Details

• Effect of beam size k: As seen in Figure 3, both plan success and cost-effectiveness generally increase with beam size, for k = 1 (Greedy-Action), 2, 3 (Beam-Action). All experiments used the SayCanPay decoding score. However, no clear patterns were observed for the generalization results.

• Impact of Say Model: Planning failures may arise because the Say model fails to propose a right action amongst the candidates, making Can and Pay ineffective. We studied the Say model's impact on overall performance using a Perfect Say that always recommends the correct action along with random distractors. From Table 7, we observed 16-84% improvements in SayCan and SayCanPay performance across the various environments, indicating the potential of an improved Say model. Thus, using a larger model trained on more data could potentially enhance performance.

• Plan length comparison: We compute a relative length = oracle plan length / generated plan length, which compares the generated and oracle plan lengths. A value of 1 indicates equal lengths and a value of 0 that the plan length is infinite (i.e. an unsuccessful plan). As shown in Figure 4, Beam-Action slightly improves over Greedy-Action. Furthermore, SayCanPay scoring achieves the best relative length (≈1) for both the Greedy-Action and Beam-Action strategies, signifying the cost-efficiency of the generated plans.

• Impact of problem size on planning time. Effect of action space: planning time remains unaffected since the Say model generates rather than discriminates between actions. Effect of plan length: Greedy-Token run time increases by ~2s for each action step. Effect of decoding strategy: ~9s for Greedy-Token, ~17s for Greedy-Action, ~35s for Beam-Action. Effect of decoding score: planning time is unaffected since the Can and Pay models are small LMs with negligible overhead. Quantization techniques and advanced hardware can further reduce run time and are an active research area (Dettmers et al. 2023; Frantar et al. 2023).
• Qualitative Analysis: The Can model effectively selects feasible actions (Figure 1). The Pay model prioritizes actions that lead to quicker goal achievement. While Pay gives a high probability to the 'done task' action, linking it to plan completion, the Can score negates it due to unsatisfied 'done task' preconditions.

| Parameter | Value |
| --- | --- |
| max new tokens | 10 |
| beam groups | 3 |
| diversity penalty | 2.0 |
| candidates (m) | 6 |
| beam-size (k) | 3 |

Table 6: Inference hyperparameters. Here the candidates (m) and beam-size (k) parameters are over actions; the rest of the beam search parameters are over tokens. Exceptions to the listed values: 11 for Vicuna (Ravens-Blocks), 3 for VirtualHome, 4 for Flan-T5 (BabyAI), and 8 for Flan-T5 (BabyAI).

# 7.5 Limitations and Future Work

The main limitations are (i) the need for expert trajectories to train the domain models, and (ii) the domain models' limited adaptability to OOD data. These challenges are inherent to deep learning models. However, recent advances in LLMs offer promising solutions. For example, newer studies have leveraged LLMs for reward design due to their ability to infer intentions from minimal prompts (Kwon et al. 2023). Notably, LLMs outperform smaller counterparts like BERT in generalization. Since both the Can and Pay scores resemble rewards, future studies could use LLMs to mitigate training and improve generalization. Another potential direction could be to experiment with symbolic methods and non-parameterized heuristics, like comparing the currently generated plan with successful/expert trajectories in a buffer.

| Score | Ravens-Hanoi | Ravens-Blocks | BabyAI | VirtualHome |
| --- | --- | --- | --- | --- |
| LM SayCan | 48 | 52 | 81 | 49 |
| LM SayCanPay | 50 | 54 | 88 | 52 |
| Perfect SayCan | 88 | 70 | 90 | 60 |
| Perfect SayCanPay | 92 | 75 | 92 | 64 |

Table 7: The table depicts the impact of the Say model on planning success performance.
In this context, both 'LM' and 'Perfect' represent Say models: 'LM' corresponds to the Vicuna model, while 'Perfect Say' is an oracle Say model that consistently proposes the correct action along with two other distractor actions as the next candidates. For all experiments, we used the Greedy-Action decoding strategy.

# 8 Conclusion

We proposed to combine the world knowledge and generative capabilities of LLMs with the systematicity of classical planning by formulating a heuristic search-based planning framework for LLMs. We demonstrated how to generate plans that are both feasible and cost-effective. While LLMs still cannot generate long-horizon plans on par with classical planners, our method overcomes issues inherent to LLM-based planning and extends traditional planning with the advantages of language models, marking significant progress for planning research with LLMs.
# Acknowledgement

This work was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and is also part of the EU H2020 ICT48 project 'TAILOR' under contract 952215, and the KU Leuven Research Fund (C14/18/062).

References

Ahn, M.; Brohan, A.; Brown, N.; Chebotar, Y.; Cortes, O.; David, B.; Finn, C.; Fu, C.; Gopalakrishnan, K.; Hausman, K.; Herzog, A.; Ho, D.; Hsu, J.; Ibarz, J.; Ichter, B.; Irpan, A.; Jang, E.; Ruano, R. J.; Jeffrey, K.; Jesmonth, S.; Joshi, N. J.; Julian, R.; Kalashnikov, D.; Kuang, Y.; Lee, K.-H.; Levine, S.; Lu, Y.; Luu, L.; Parada, C.; Pastor, P.; Quiambao, J.; Rao, K.; Rettinghouse, J.; Reyes, D.; Sermanet, P.; Sievers, N.; Tan, C.; Toshev, A.; Vanhoucke, V.; Xia, F.; Xiao, T.; Xu, P.; Xu, S.; Yan, M.; and Zeng, A. 2022.
Do As I Can, Not As I Say: Grounding Language in Robotic Affordances. arXiv:2204.01691.
Bonet, B.; and Geffner, H. 2001. Planning as heuristic search. Artificial Intelligence, 129(1-2): 5–33.
Brohan, A.; Brown, N.; Carbajal, J.; Chebotar, Y.; Chen, X.; Choromanski, K.; Ding, T.; Driess, D.; Dubey, A.; Finn, C.; Florence, P.; Fu, C.; Arenas, M. G.; Gopalakrishnan, K.; Han, K.; Hausman, K.; Herzog, A.; Hsu, J.; Ichter, B.; Irpan, A.; Joshi, N.; Julian, R.; Kalashnikov, D.; Kuang, Y.; Leal, I.; Lee, L.; Lee, T.-W. E.; Levine, S.; Lu, Y.; Michalewski, H.; Mordatch, I.; Pertsch, K.; Rao, K.; Reymann, K.; Ryoo, M.; Salazar, G.; Sanketi, P.; Sermanet, P.; Singh, J.; Singh, A.; Soricut, R.; Tran, H.; Vanhoucke, V.; Vuong, Q.; Wahid, A.; Welker, S.; Wohlhart, P.; Wu, J.; Xia, F.; Xiao, T.; Xu, P.; Xu, S.; Yu, T.; and Zitkovich, B. 2023. RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control. arXiv:2307.15818.
Chevalier-Boisvert, M.; Bahdanau, D.; Lahlou, S.; Willems, L.; Saharia, C.; Nguyen, T. H.; and Bengio, Y. 2019. BabyAI: First Steps Towards Grounded Language Learning With a Human In the Loop. In International Conference on Learning Representations, volume 105.
Chiang, W.-L.; Li, Z.; Lin, Z.; Sheng, Y.; Wu, Z.; Zhang, H.; Zheng, L.; Zhuang, S.; Zhuang, Y.; Gonzalez, J.
E.; Stoica, I.; and Xing, E. P. 2023. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality.
Chung, H. W.; Hou, L.; Longpre, S.; Zoph, B.; Tay, Y.; Fedus, W.; Li, Y.; Wang, X.; Dehghani, M.; Brahma, S.; Webson, A.; Gu, S. S.; Dai, Z.; Suzgun, M.; Chen, X.; Chowdhery, A.; Castro-Ros, A.; Pellat, M.; Robinson, K.; Valter, D.; Narang, S.; Mishra, G.; Yu, A.; Zhao, V.; Huang, Y.; Dai, A.; Yu, H.; Petrov, S.; Chi, E. H.; Dean, J.; Devlin, J.; Roberts, A.; Zhou, D.; Le, Q. V.; and Wei, J. 2022. Scaling Instruction-Finetuned Language Models. arXiv:2210.11416.
Dettmers, T.; Lewis, M.; Belkada, Y.; and Zettlemoyer, L. 2022. LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale. arXiv:2208.07339.
Dettmers, T.; Pagnoni, A.; Holtzman, A.; and Zettlemoyer, L. 2023. QLoRA: Efficient Finetuning of Quantized LLMs. arXiv:2305.14314.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171–4186. Minneapolis, Minnesota: Association for Computational Linguistics.
Ding, Y.; Zhang, X.; Amiri, S.; Cao, N.; Yang, H.; Kaminski, A.; Esselink, C.; and Zhang, S. 2023. Integrating action knowledge and LLMs for task planning and situation handling in open worlds. Autonomous Robots, 47(8): 981–997.
Du, Y.; Liu, Z.; Li, J.; and Zhao, W. X. 2022. A Survey of Vision-Language Pre-Trained Models. arXiv:2202.10936.
Frantar, E.; Ashkboos, S.; Hoefler, T.; and Alistarh, D. 2023. GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers. arXiv:2210.17323.
Golowich, N.; Moitra, A.; and Rohatgi, D. 2022. Planning in Observable POMDPs in Quasipolynomial Time. arXiv:2201.04735.
Hao, S.; Gu, Y.; Ma, H.; Hong, J. J.; Wang, Z.; Wang, D. Z.; and Hu, Z. 2023. Reasoning with Language Model is Planning with World Model. arXiv:2305.14992.
Helmert, M. 2006. The fast downward planning system. Journal of Artificial Intelligence Research, 26: 191–246.
Huang, W.; Abbeel, P.; Pathak, D.; and Mordatch, I. 2022a. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, 9118–9147. PMLR.
Huang, W.; Xia, F.; Shah, D.; Driess, D.; Zeng, A.; Lu, Y.; Florence, P.; Mordatch, I.; Levine, S.; Hausman, K.; and Ichter, B. 2023. Grounded Decoding: Guiding Text Generation with Grounded Models for Embodied Agents. arXiv:2303.00855.
Huang, W.; Xia, F.; Xiao, T.; Chan, H.; Liang, J.; Florence, P.; Zeng, A.; Tompson, J.; Mordatch, I.; Chebotar, Y.; Sermanet, P.; Brown, N.; Jackson, T.; Luu, L.; Levine, S.; Hausman, K.; and Ichter, B. 2022b.
Inner Monologue: Embodied Reasoning through Planning with Language Models. arXiv:2207.05608.
Kaelbling, L. P.; Littman, M. L.; and Cassandra, A. R. 1998. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1-2): 99–134.
Kwon, M.; Xie, S. M.; Bullard, K.; and Sadigh, D. 2023. Reward Design with Language Models. In The Eleventh International Conference on Learning Representations.
Lakhotia, K.; Kharitonov, E.; Hsu, W.-N.; Adi, Y.; Polyak, A.; Bolte, B.; Nguyen, T.-A.; Copet, J.; Baevski, A.; Mohamed, A.; and Dupoux, E. 2021. On Generative Spoken Language Modeling from Raw Audio. Transactions of the Association for Computational Linguistics, 9: 1336–1354.
Liang, J.; Huang, W.; Xia, F.; Xu, P.; Hausman, K.; Ichter, B.; Florence, P.; and Zeng, A. 2023. Code as Policies: Language Model Programs for Embodied Control. arXiv:2209.07753.
Liao, Y.-H.; Puig, X.; Boben, M.; Torralba, A.; and Fidler, S. 2019. Synthesizing Environment-Aware Activities via Activity Sketches. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 6284–6292.
Lin, K.; Agia, C.; Migimatsu, T.; Pavone, M.; and Bohg, J. 2023. Text2Motion: from natural language instructions to feasible plans. Autonomous Robots, 47(8): 1345–1365.
Liu, B.; Jiang, Y.; Zhang, X.; Liu, Q.; Zhang, S.; Biswas, J.; and Stone, P. 2023. LLM+P: Empowering Large Language Models with Optimal Planning Proficiency. arXiv:2304.11477.
Pallagani, V.; Muppasani, B.; Murugesan, K.; Rossi, F.; Horesh, L.; Srivastava, B.; Fabiano, F.; and Loreggia, A. 2022. Plansformer: Generating Symbolic Plans using Transformers. arXiv:2212.08681.
Puig, X.; Ra, K.; Boben, M.; Li, J.; Wang, T.; Fidler, S.; and Torralba, A. 2018. VirtualHome: Simulating household activities via programs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 8494–8502.
Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; and Liu, P. J. 2020.
Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1): 5485–5551.
Silver, T.; Hariprasad, V.; Shuttleworth, R. S.; Kumar, N.; Lozano-Pérez, T.; and Kaelbling, L. P. 2022. PDDL Planning with Pretrained Large Language Models. In NeurIPS 2022 Foundation Models for Decision Making Workshop.
Singh, I.; Blukis, V.; Mousavian, A.; Goyal, A.; Xu, D.; Tremblay, J.; Fox, D.; Thomason, J.; and Garg, A. 2023.
ProgPrompt: Generating Situated Robot Task Plans using Large Language Models. In International Conference on Robotics and Automation (ICRA).
Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; Rodriguez, A.; Joulin, A.; Grave, E.; and Lample, G. 2023. LLaMA: Open and Efficient Foundation Language Models. arXiv:2302.13971.
Valmeekam, K.; Olmo, A.; Sreedharan, S.; and Kambhampati, S. 2022. Large Language Models Still Can't Plan (A Benchmark for LLMs on Planning and Reasoning about Change). In NeurIPS 2022 Foundation Models for Decision Making Workshop.
Valmeekam, K.; Sreedharan, S.; Marquez, M.; Olmo, A.; and Kambhampati, S. 2023. On the Planning Abilities of Large Language Models (A Critical Investigation with a Proposed Benchmark). arXiv:2302.06706.
van den Oord, A.; Li, Y.; and Vinyals, O. 2019. Representation Learning with Contrastive Predictive Coding. arXiv:1807.03748.
Wang, Y.; Wang, W.; Joty, S.; and Hoi, S. C. 2021. CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation. In Moens, M.-F.; Huang, X.; Specia, L.; and Yih, S. W.-t., eds., Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 8696–8708.
Online and Punta Cana, Dominican Republic: Association for Computational Linguistics.
Xie, Y.; Yu, C.; Zhu, T.; Bai, J.; Gong, Z.; and Soh, H. 2023. Translating Natural Language to Planning Goals with Large-Language Models. arXiv:2302.05128.
Yao, S.; Yu, D.; Zhao, J.; Shafran, I.; Griffiths, T. L.; Cao, Y.; and Narasimhan, K. 2023.
Tree of Thoughts: Deliberate Problem Solving with Large Language Models. arXiv:2305.10601.
Zeng, A.; Florence, P.; Tompson, J.; Welker, S.; Chien, J.; Attarian, M.; Armstrong, T.; Krasin, I.; Duong, D.; Sindhwani, V.; and Lee, J. 2021. Transporter Networks: Rearranging the Visual World for Robotic Manipulation. In Proceedings of the 2020 Conference on Robot Learning, volume 155 of Proceedings of Machine Learning Research, 726–747. PMLR.
Ziegler, D. M.; Stiennon, N.; Wu, J.; Brown, T. B.; Radford, A.; Amodei, D.; Christiano, P.; and Irving, G. 2020. Fine-Tuning Language Models from Human Preferences. arXiv:1909.08593.
Rational Decision-Making Agent with Internalized Utility Judgment
# RATIONAL DECISION-MAKING AGENT WITH INTERNALIZED UTILITY JUDGMENT

Yining Ye1*, Xin Cong1*†, Shizuo Tian1, Yujia Qin1, Chong Liu1, Yankai Lin2, Zhiyuan Liu1†, Maosong Sun1
1Tsinghua University, 2Renmin University of China
[email protected], [email protected]
*Indicates equal contribution. †Corresponding author.

# ABSTRACT
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon manually designed external performance metrics to guide the decision-making process. However, reliance on external performance metrics as a prior is problematic in real-world scenarios, where such a prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RADAGENT (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RADAGENT's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.

# 1 INTRODUCTION

The agent (Searle, 1969; Wooldridge & Jennings, 1995; Maes, 1994; Hendler, 1999), as a long-standing pursuit of artificial intelligence (AI), is expected to possess the ability to plan, make decisions, and take actions to accomplish complex tasks autonomously. As large language models (LLMs) have undergone rapid development, showcasing remarkable capabilities (OpenAI, 2022; 2023), many efforts have been devoted to developing LLM-based agents (Richards, 2023; Nakajima, 2023; age, 2023) to accomplish intricate multi-step decision-making tasks (Yao et al., 2022; Hao et al., 2023a; Yao et al., 2023; Qin et al., 2023c) beyond traditional natural language processing (NLP) applications.
Even with these strides, existing LLM-based agents require manually designed external performance measures to guide the decision-making process. For instance, in the Game of 24, which uses four numbers and basic arithmetic operations to obtain 24, a value prompt (Yao et al., 2023) is heuristically designed to assess the potential of each decision to reach 24, and correct decisions are then chosen accordingly. The reliance on external performance metrics as a prior restricts adaptability in real-world scenarios, as such a prior may be unavailable, flawed, or even erroneous. When making decisions, humans draw not only upon external measures but also on the individual rationality formed in practice from posterior experience. This rationality is modeled as an internal utility judgment ability with two principal properties (Kahneman & Tversky,
2000; Arrow, 1959; Plott, 1973): (1) Completeness: given any two choices A and B, an individual must strictly prefer one of them (A ⪰ B or B ⪰ A). (2) Transitivity: if an individual prefers A to B (A ⪰ B) and prefers B to C (B ⪰ C), then the individual must prefer A to C (A ⪰ B ⪰ C).
Based on these two properties of utility judgment, given a set of choices, humans can judge their utilities and choose the one with the highest utility to achieve the best outcome. To this end, we propose RADAGENT (Rational Decision-Making Agent), which internalizes the utility judgment ability to achieve rationality for the agent. In RADAGENT, the internalized utility judgment is constructed through an iterative framework: (1) Experience Exploration: Because real-world tasks are complex, the solution space may be infinite, and it is challenging to find the optimal solution efficiently. The agent should explore potential decisions to find as many better solutions as possible for utility learning. (2) Utility Learning: Given a series of solutions, the agent should compare them to assess their utilities. To learn a quantitative utility, we further design Elo-based Utility Construction, which assigns each decision an Elo score that represents its utility, estimated through a series of pairwise comparisons between solutions. After multiple comparisons, each Elo score converges to a value that reflects the decision's actual utility for the task. Through this iterative utility judgment construction, RADAGENT can identify the solution with the best outcome. To validate the effectiveness of our proposed approach, we implement RADAGENT with ChatGPT (OpenAI, 2022) and conduct extensive experiments on the ToolBench dataset (Qin et al., 2023c), which contains intricate multi-step decision tasks involving diverse scenarios. Experimental results demonstrate the superiority of our approach against several baselines, achieving over 10% improvement in Pass Rate on complex tasks. Moreover, extensive analyses show that our approach not only delivers solutions of higher quality but also achieves greater efficiency by reducing the number of ChatGPT API calls. Our contributions are threefold:
2308.12519#3
2308.12519#5
2308.12519
[ "2305.14318" ]
2308.12519#5
Rational Decision-Making Agent with Internalized Utility Judgment
• We propose RADAGENT, a rational decision-making agent that constructs its internal rationality to accomplish diverse real-world tasks without relying on an external performance measure. • We devise Elo-based Utility Construction, which internalizes utility judgment for the agent by learning Elo scores for each decision, leading to the optimal solution. • Extensive experiments on the ToolBench dataset demonstrate the effectiveness and efficiency of our proposed method against representative methods, marking a significant step toward unleashing the autonomous decision-making capability of LLMs.

# 2 PRELIMINARY

Elo Rating System The Elo rating system (Elo, 1967), commonly used in competitive contexts, offers a numerical estimation of the skill levels of players. It represents each player's skill level by an Elo score and assesses these scores through a series of one-to-one competitions. It assumes that each player's performance follows a Gaussian distribution (x ∼ N(μ, σ)) and that each comparison of two players is in fact a comparison between two samples from their Gaussian distributions. Through multiple comparisons, we can approximate their true skill levels by estimating their Elo scores. Given two players x and y, their Elo scores are denoted as v_x and v_y, respectively. The expected superiority of x over y is calculated as:

E_{x>y} = 1 / (1 + e^{-(v_x - v_y)/r})   (1)

where r is the Elo coefficient. Next, we run a competition between them to find the actual winner. We denote the competition result as R_{x>y}:

R_{x>y} = 1 if x wins, 0 if y wins, 0.5 otherwise   (2)
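To make the preliminary concrete, here is a minimal Python sketch of the quantities in Equations 1 and 2. The function names are illustrative, not code from the paper; the default value of r is the one reported later in the implementation details.

```python
import math

def expected_superiority(v_x: float, v_y: float, r: float = 173.72) -> float:
    """Equation 1: expected probability that player x beats player y."""
    return 1.0 / (1.0 + math.exp(-(v_x - v_y) / r))

def competition_result(x_wins: bool, y_wins: bool) -> float:
    """Equation 2: encode the observed outcome (1 = x wins, 0 = y wins, 0.5 = tie)."""
    if x_wins and not y_wins:
        return 1.0
    if y_wins and not x_wins:
        return 0.0
    return 0.5
```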
2308.12519#4
2308.12519#6
2308.12519
[ "2305.14318" ]
2308.12519#6
Rational Decision-Making Agent with Internalized Utility Judgment
We then update their Elo scores accordingly:

v_x = v_x + K · (R_{x>y} − E_{x>y}),  v_y = v_y + K · (R_{y>x} − E_{y>x})   (3)

where K > 0 is a hyper-parameter that controls the update step.

# 3 TASK FORMULATION

We formulate the decision-making process within LLMs as a Markov decision process (MDP). Given a human instruction Q, LLMs are tasked with generating a decision sequence t = {s_0, a_1, s_1, · · · , s_N} to accomplish Q. Here, {s_i}_{i=0}^{N} represents the decision states, s_0 is the initial state, s_N is the final state in which the LLM has obtained enough information to give the final response to the human, and {a_i}_{i=1}^{N} denotes the actions taken by LLMs during the decision-making process. At each step in the MDP framework, the LLM decides to take action a_i ∼ P(a_i | s_i) based on the current state and subsequently arrives at the next state s_{i+1} ∼ P(s_{i+1} | a_i, s_i). Thus, we denote a decision step as d_{i+1} = (s_i, a_i, s_{i+1}). To make sequential decisions toward accomplishing Q autonomously, LLMs need to identify the utility of each decision step and select the most valuable ones to explore further. In this procedure, judgment plays an important role in quantitatively assessing the value v_{i+1} = V(d_{i+1}) of each decision step d_{i+1}. Equipped with this value judgment, LLMs can select the decision steps with higher values that promise the most favorable outcomes, ultimately leading to the derivation of a final decision sequence that fulfills the requirements of Q.

# 4 METHODOLOGY

Our RADAGENT aims to find the decision sequence with the highest utility to accomplish complex instructions autonomously. It contains two principal phases to internalize the utility judgment:
2308.12519#5
2308.12519#7
2308.12519
[ "2305.14318" ]
2308.12519#7
Rational Decision-Making Agent with Internalized Utility Judgment
• Experience Exploration: The agent takes actions sequentially to form a decision sequence toward a feasible solution. • Utility Learning: The agent makes judgments among decision sequences to assess the utility (i.e., Elo scores) of existing decision steps. These two phases work in an iterative fashion, reinforcing one another's outcomes (see Figure 1). In the experience exploration phase, the agent explores more potential decision sequences, which helps it judge the utility of each decision step. In the utility learning phase, the Elo score of each decision step serves as a dynamic guide, steering subsequent experience exploration toward more promising and superior solutions. By iteratively cycling through these intertwined phases, the agent progressively evolves toward an optimal decision sequence with the highest utility to address instructions.

4.1 EXPERIENCE EXPLORATION

In RADAGENT, each experience exploration benefits from the previous exploration history through Elo-based Utility Construction (§ 4.2). When exploring a new decision sequence, the LLM selects a decision step with a higher Elo score to explore further. Specifically, in RADAGENT, each decision step is explicitly assigned an Elo score. A decision step with a higher Elo score is more likely to accomplish the instruction, so Elo scores are used to guide the decision exploration process. Given an intermediate decision step d, its subsequent decision steps are denoted as {d_1, d_2, · · · , d_n}. Given their learned Elo scores {v_i}_{i=1}^{n}, the probability of choosing which one to explore is defined as:

P(d_i) = exp(v_i/τ) / Σ_j exp(v_j/τ),  d_i ∈ {d_1, d_2, · · · , d_n}   (4)

where τ refers to the temperature.
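Below is a small, self-contained Python sketch of the Elo update from Equation 3 and the temperature-scaled softmax selection from Equation 4. It is an illustrative reading of the formulas, not the authors' released code; the default temperature is an assumption, while K = 50 and r = 173.72 are the values reported in the implementation details.

```python
import math
import random

def update_elo(v_x: float, v_y: float, result_x: float,
               k: float = 50.0, r: float = 173.72) -> tuple[float, float]:
    """Equation 3: shift both Elo scores toward the observed result."""
    e_x = 1.0 / (1.0 + math.exp(-(v_x - v_y) / r))  # expected superiority of x (Eq. 1)
    e_y = 1.0 - e_x
    result_y = 1.0 - result_x
    return v_x + k * (result_x - e_x), v_y + k * (result_y - e_y)

def sample_next_step(elo_scores: list[float], tau: float = 1.0) -> int:
    """Equation 4: softmax over child Elo scores, sampled with temperature tau."""
    m = max(elo_scores)
    weights = [math.exp((v - m) / tau) for v in elo_scores]  # max-shift for stability
    return random.choices(range(len(elo_scores)), weights=weights, k=1)[0]
```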
2308.12519#6
2308.12519#8
2308.12519
[ "2305.14318" ]
2308.12519#8
Rational Decision-Making Agent with Internalized Utility Judgment
Note that exploring only the known decisions may lead to a locally optimal solution. Therefore, we define a rejection decision step d̂ with an initial Elo score v̂ to represent that "the agent decides to explore a new decision."

Figure 1: Illustration of the iterative Experience Exploration and Utility Learning phases used to derive the final optimal solution.

We add this rejection decision step to the set of subsequent decision steps, {d_1, d_2, · · · , d_n, d̂}, when selecting:

P(d_i) = exp(v_i/τ) / Σ_j exp(v_j/τ),  d_i ∈ {d_1, d_2, · · · , d_n, d̂}   (5)

The complete experience exploration process begins from the initial state s_0 and chooses subsequent decision steps iteratively based on Equation 5 in a top-down manner. When the rejection decision step d̂ is chosen, the agent generates a new decision sequence starting from the current intermediate step d. In this iterative experience exploration process, the potential decision steps are explored thoroughly until the optimal solution is found.

4.2 UTILITY LEARNING

As external performance measures may be unavailable, flawed, or even erroneous, the agent should resort to its internalized utility judgment ability to solve diverse tasks. To this end, we design Elo-based Utility Construction, equipping the agent with the Elo rating system to provide a numerical utility for each decision step to guide the decision-making process. The utility learning process (i.e., the Elo score estimation process) is conducted in a bottom-up manner. It first adjusts the Elo scores of the final decision steps of each decision sequence via pairwise comparison and then gradually updates the Elo scores of the intermediate decision steps. Once a new decision sequence is generated in the experience exploration phase, the agent self-judges the Elo scores of existing decision steps via pairwise comparison. Given the newly generated decision sequence t_n, we first assign all decision steps of t_n an initial Elo score. Then, we randomly select a decision sequence t_i from the existing decision sequences T = {t_1, t_2, · · · , t_{n−1}} and use the LLM to compare t_n with t_i to judge which one performs better.
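The sketch below illustrates the selection rule of Equation 5 above, where the rejection step d̂ competes with the known children and drawing it signals that a fresh branch should be expanded from the current step. The class layout and the fixed rejection score are illustrative assumptions (the paper initializes Elo scores, including v̂, to 0.0).

```python
import math
import random
from dataclasses import dataclass, field

@dataclass
class DecisionStep:
    name: str
    elo: float = 0.0                                  # initial Elo score
    children: list["DecisionStep"] = field(default_factory=list)

REJECT_ELO = 0.0                                      # Elo score of the rejection step d-hat

def choose_child_or_reject(step: DecisionStep, tau: float = 1.0):
    """Equation 5: sample among known children plus the rejection option d-hat.
    Returns a child to descend into, or None to expand a new branch here."""
    candidates = step.children + [None]               # None stands for d-hat
    scores = [c.elo if c is not None else REJECT_ELO for c in candidates]
    m = max(scores)
    weights = [math.exp((s - m) / tau) for s in scores]
    return random.choices(candidates, weights=weights, k=1)[0]
```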
2308.12519#7
2308.12519#9
2308.12519
[ "2305.14318" ]
2308.12519#9
Rational Decision-Making Agent with Internalized Utility Judgment
Since the LLM-based comparison is sensitive to candidate order (Qin et al., 2023d; Chiang & Lee, 2023; Wang et al., 2023), we conduct the comparison twice with the candidates in different orders:

R_{t_n > t_i} = 1 if t_n wins twice, 0 if t_i wins twice, 0.5 otherwise   (6)

Given the comparison result, we update the Elo scores of the final decision steps of t_n and t_i based on Equation 3. Next, we calculate the Elo scores of intermediate decision steps based on their
2308.12519#8
2308.12519#10
2308.12519
[ "2305.14318" ]
2308.12519#10
Rational Decision-Making Agent with Internalized Utility Judgment
subsequent decision steps. Specifically, given an intermediate decision step d_i, we calculate its Elo score as follows:

v_i = Σ_{d_j ∈ Child(d_i)} α_j · v_j   (7)

where Child(d_i) refers to the set of subsequent decision steps of d_i, α_j = exp(v_j/τ) / Σ_k exp(v_k/τ) is the normalized weight, and τ is the temperature from Equation 5. By repeating the comparison while randomly sampling decision sequences, the Elo score of each decision step converges to its expected value. When guiding the experience exploration process, the Elo score of a decision step that has received only a few Elo updates may not represent its real value accurately. Such a decision step cannot be fully trusted for exhaustive exploration.
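Building on the DecisionStep sketch above, the two helpers below mirror Equation 6 (order-swapped pairwise judging) and Equation 7 (bottom-up propagation of Elo scores to intermediate steps). The `judge` callable is a hypothetical stand-in for the LLM comparison prompt and is not an API from the paper.

```python
import math

def order_robust_compare(judge, t_new, t_old) -> float:
    """Equation 6: ask the LLM judge twice with swapped candidate order.
    `judge(a, b)` is a hypothetical callable returning True iff `a` is preferred."""
    new_first = judge(t_new, t_old)
    old_first = judge(t_old, t_new)
    if new_first and not old_first:
        return 1.0      # t_new wins under both orderings
    if old_first and not new_first:
        return 0.0      # t_old wins under both orderings
    return 0.5          # disagreement between orderings counts as a tie

def propagate_elo(step, tau: float = 1.0) -> float:
    """Equation 7: an intermediate step's Elo is the softmax-weighted mean of its children."""
    if not step.children:
        return step.elo                              # leaves keep their comparison-based Elo
    child_elos = [propagate_elo(c, tau) for c in step.children]
    m = max(child_elos)
    weights = [math.exp((v - m) / tau) for v in child_elos]
    total = sum(weights)
    step.elo = sum(w / total * v for w, v in zip(weights, child_elos))
    return step.elo
```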
2308.12519#9
2308.12519#11
2308.12519
[ "2305.14318" ]
2308.12519#11
Rational Decision-Making Agent with Internalized Utility Judgment
Hence, we adjust the temperature τ in Equation 5 based on the number of Elo updates. Let M_d be the number of Elo updates of decision step d. The temperature of d is annealed as follows:

τ_d = τ_0 · 1 / (1 + √(ln(M_d + 1)))   (8)

where τ_0 is the default temperature. As the number of Elo updates grows, the approximated Elo score converges to its real value, and at that point we tend to explore the most promising decision.

4.3 DISCUSSION

After conducting an adequate experience exploration and utility learning process, the agent will have constructed its internalized utility judgment. Since every decision step has its utility estimated as an Elo score, any two of them can be compared, satisfying the Completeness property. Given three decision steps A, B, and C, if v_A > v_B and v_B > v_C, then the Elo score of A must be larger than that of C (v_A > v_B > v_C), satisfying the Transitivity property. Thus, rationality is internalized in the agent, so it can rationally assess all decision sequences and select the best-performing one as the final solution. To derive the best outcome, given all existing decision sequences T = {t_1, t_2, · · · , t_n}, the sequence whose final decision has the largest utility is selected as the final solution:

t = arg max_{t ∈ T} V(d_N)   (9)

where d_N refers to the final decision step of sequence t.

# 5 EXPERIMENT

As the key contribution of this work is to develop a rational decision-making agent with internalized utility judgment, we aim to answer the following research questions through a series of experiments.
RQ1 Can RADAGENT make decisions rationally to accomplish a diverse set of tasks?
RQ2 Beyond finding feasible solutions, can RADAGENT find better solutions?
RQ3 How efficient is RADAGENT in decision making?
RQ4 Is Elo-based Utility Construction effective in providing reliable utility assessments?
RQ5 What are the key differentiating factors of RADAGENT against other methods?
Next, we describe the experimental settings and then report results by answering the aforementioned research questions.

5.1 EXPERIMENTAL SETTINGS
2308.12519#10
2308.12519#12
2308.12519
[ "2305.14318" ]
2308.12519#12
Rational Decision-Making Agent with Internalized Utility Judgment
Datasets We conduct extensive experiments on the ToolBench dataset (Qin et al., 2023c), comprising a diverse and intricate collection of human instructions that require agents to make multi-step decisions for successful task completion. In our experiments, we focused on the intra-category multi-tool instruction scenario. This subset of ToolBench has been thoughtfully curated to reflect the complexity of real-world tasks, encompassing the utilization of various tools and necessitating multi-step decision-making processes. It provides a rigorous evaluation to demonstrate the robustness and generalizability of decision making across diverse tasks.
2308.12519#11
2308.12519#13
2308.12519
[ "2305.14318" ]
2308.12519#13
Rational Decision-Making Agent with Internalized Utility Judgment
Given the resource-intensive nature of API calls, we conducted our experiments on a random selection of 500 samples from the total pool of 25K human instructions available in ToolBench. This sampling strategy allows us to achieve a representative evaluation while managing computational costs effectively. Baselines We compare RADAGENT with the following decision-making methods: • CoT (Wei et al., 2023; Yao et al., 2022) decomposes reasoning into explicit intermediate steps. We adapt ReACT (Yao et al., 2022) to decompose a decision step in the format "Thought: ..., API Name: ..., Parameters: ..."
2308.12519#12
2308.12519#14
2308.12519
[ "2305.14318" ]
2308.12519#14
Rational Decision-Making Agent with Internalized Utility Judgment
• CoT@3 extends the CoT approach by running the decision-making process three times independently for an instruction, finally generating a total of three decision sequences. • Reflexion (Shinn et al., 2023) builds upon CoT@3 and allows LLMs to engage in self-reflection on their previous decision sequences. The reflection summary is concatenated into the prompt before proceeding to the next decision. • BFS (Yao et al., 2023) constructs a decision tree in a top-down manner to search for a feasible solution. Unlike the original version, we do not introduce any task-specific knowledge into the tree search process. Since the number of API calls increases exponentially with the depth of the decision tree, we limit the search breadth of each state to 2, and each level keeps only the 3 decision states with the highest performance based on ToolEval comparison (see § 5.1). Finally, BFS provides 3 decision sequences per instruction. • DFS (Yao et al., 2023) constructs a decision tree by going as deep as possible along each branch and exploring the most recently visited states. As with BFS, no task-specific knowledge is introduced into the tree search process. The search process is terminated after deriving 3 decision sequences. • DFSDT (Qin et al., 2023c) is an improved version of DFS that allows LLMs to dynamically assess different decision states and choose either to proceed along a promising path or to abandon an existing state and expand another one. As with DFS, the decision search process of DFSDT ends after generating 3 decision sequences. Evaluation Metrics To ensure a rigorous and accurate evaluation of the performance of our proposed decision-making approach, we adopt two evaluation metrics prescribed by ToolBench:
2308.12519#13
2308.12519#15
2308.12519
[ "2305.14318" ]
2308.12519#15
Rational Decision-Making Agent with Internalized Utility Judgment
• Pass Rate (Qin et al., 2023c) assesses the ability of LLMs to successfully accomplish complex real-world tasks. It calculates the proportion of instructions that an LLM can complete within a pre-defined number of decision steps. • Preference Rank measures the quality of the decision sequences generated by the LLMs. This evaluation compares the decision sequences produced by different methods for a given instruction using the ToolEval tool (Qin et al., 2023c) to enable a fair comparison. Subsequently, we utilize PRP (Qin et al., 2023d) to rank all decision sequences. To ensure robustness, we perform the ranking process 10 times with different random seeds and report the average rank for each method. Since CoT@3, Reflexion, BFS, DFS, and DFSDT each produce three decision sequences, we consider a user instruction accomplished successfully if any of the three decision sequences leads to the "Finish" call with a final answer.
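As a concrete reading of the two metrics, the snippet below computes Pass Rate as a simple proportion and averages Preference Ranks over repeated rankings. The data structures are hypothetical, and the actual ToolEval/PRP ranking logic is abstracted behind a `rank_once` callable.

```python
def pass_rate(completed_flags: list[bool]) -> float:
    """Fraction of instructions finished within the step budget."""
    return sum(completed_flags) / len(completed_flags)

def average_preference_rank(rank_once, methods: list[str], seeds: range) -> dict[str, float]:
    """Run the pairwise ranking once per seed and average each method's rank."""
    totals = {m: 0.0 for m in methods}
    for seed in seeds:
        ranking = rank_once(seed)                    # e.g. ["RADAGENT", "DFSDT", ...], best first
        for position, method in enumerate(ranking, start=1):
            totals[method] += position
    return {m: totals[m] / len(seeds) for m in totals}
```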
2308.12519#14
2308.12519#16
2308.12519
[ "2305.14318" ]
2308.12519#16
Rational Decision-Making Agent with Internalized Utility Judgment
For the Preference Rank metric, we report the average rank of the best decision sequence generated by each method. Implementation Details We build upon ChatGPT (gpt-3.5-turbo-0613-16k), a prominent large language model, to implement our approach. Our approach conducts the decision-exploration process 20 times and finally selects the decision sequence with the highest Elo score as the final decision. For Elo-based Utility Construction, the initial Elo score of a decision step is set to 0.0 and the Elo coefficient r is set to 173.72, following the vanilla Elo rating system (Elo, 1967). The Elo score of d̂ in Equation 5 is set to 0.0, and K in Equation 3 is set to 50. To manage the computational cost of ChatGPT API calls, we set a limit of 100 ChatGPT API calls per decision-searching process. Furthermore, we impose a maximum of 12 steps per decision sequence due to the cost of ChatGPT API calls.

Table 1: Main experimental results on the ToolBench dataset. Bold marks the best performance.
Model: Pass Rate (%)
CoT: 16.60
CoT@3: 31.20
Reflexion: 26.60
BFS: 38.00
DFS: 45.58
DFSDT: 50.20
RADAGENT: 61.92

Table 2: Solution ranking experimental results on the ToolBench dataset. Bold marks the top rank.
Model: Pref. Rank
CoT@3: 3.45
Reflexion: 3.48
BFS: 3.25
DFSDT: 2.91
RADAGENT -Rand. Select: 3.24
RADAGENT -Elo Select: 2.19

5.2 OVERALL RESULTS (RQ1)

To validate the effectiveness of our proposed RADAGENT approach, we first study whether it can accomplish more complex tasks. The results are shown in Table 1, from which we observe that: (1) CoT solves only 16.60% of instructions when facing complex tasks. This is because CoT explores only one decision sequence, leading to inadequate exploration of the whole solution space. In particular, a failed API call may affect the following decisions, causing the model to be trapped in a faulty loop.
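Returning to the implementation details above, the reported hyperparameters and the temperature annealing rule of Equation 8 can be gathered into a small configuration sketch. This is only an illustration of the reported settings, not released code; the default temperature τ_0 is an assumption.

```python
import math
from dataclasses import dataclass

@dataclass
class RadAgentConfig:
    n_explorations: int = 20         # decision-exploration passes per instruction
    init_elo: float = 0.0            # initial Elo score of every decision step
    reject_elo: float = 0.0          # Elo score of the rejection step d-hat
    elo_coefficient: float = 173.72  # r in Equation 1
    k_factor: float = 50.0           # K in Equation 3
    max_api_calls: int = 100         # ChatGPT-call budget per search
    max_steps: int = 12              # maximum length of a decision sequence
    tau_0: float = 1.0               # default temperature (assumed value)

    def annealed_tau(self, n_updates: int) -> float:
        """Equation 8: shrink the temperature as a step receives more Elo updates."""
        return self.tau_0 / (1.0 + math.sqrt(math.log(n_updates + 1)))
```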
2308.12519#15
2308.12519#17
2308.12519
[ "2305.14318" ]
2308.12519#17
Rational Decision-Making Agent with Internalized Utility Judgment
CoT@3 exhibits a 14.6% gain over CoT, indicating that an increased number of decision explorations makes reaching a feasible solution more likely. (2) Compared with CoT@3, Reflexion, despite introducing self-reflection on decision making, does not yield any improvement and even results in inferior performance. This outcome suggests that, when faced with complex instructions, mere self-reflection may not be sufficient to provide informative guidance for LLMs to search for a feasible solution. (3) All tree-based methods (BFS, DFS, and DFSDT) yield a lower Pass Rate than RADAGENT, which indicates that without task-specific expert knowledge, tree-based methods cannot work effectively across diverse tasks. (4) RADAGENT achieves superior performance against all baselines. Compared with the best baseline method, DFSDT, RADAGENT exhibits a substantial 10% improvement in Pass Rate. Such a significant improvement is attributed to RADAGENT's capability to autonomously make decisions to accomplish complex instructions via self-judgment.

5.3 SOLUTION RANKING (RQ2)

In addition to validating that our approach reaches feasible solutions, we investigate whether RADAGENT can further provide solutions of higher quality. We first develop a variant of our model named RADAGENT -Rand. Select, which selects the final decision sequence randomly, while RADAGENT -Elo Select selects it based on the highest Elo score. We then select representative baselines (CoT@3, Reflexion, BFS, DFS, DFSDT) and conduct a comprehensive comparison of the decision sequences produced by each method. To assess the quality of the decisions, we employ the Preference Rank metric based on the ToolEval algorithm (Qin et al., 2023c), which offers a reliable measure of the superiority of decision sequences. The experimental results are summarized in Table 2; they reveal that RADAGENT consistently achieves the top average rank among all comparable baselines. In particular, RADAGENT -Elo Select clearly outperforms RADAGENT -Rand. Select, confirming the capability of our Elo-based Utility Construction to assess each decision sequence and select superior solutions, resulting in high-quality decision making.

5.4 EFFICIENCY ANALYSIS (RQ3)

We further conducted analyses to evaluate the efficiency of our proposed RADAGENT.
2308.12519#16
2308.12519#18
2308.12519
[ "2305.14318" ]
2308.12519#18
Rational Decision-Making Agent with Internalized Utility Judgment
Since all methods rely on ChatGPT API calls, an inefficient decision-making method involves more API calls and incurs higher costs. We thus conducted experiments with varying ChatGPT API call limits, ranging from 30 to 300, and measured the Pass Rate of each method under these limits. The experimental results are shown in Figure 2. They show that the tree-based baselines (BFS, DFS, DFSDT) rely heavily on a large number of ChatGPT API calls to achieve a high Pass Rate. Once the number of API calls is limited, their performance cannot even
2308.12519#17
2308.12519#19
2308.12519
[ "2305.14318" ]
2308.12519#19
Rational Decision-Making Agent with Internalized Utility Judgment
surpass CoT. In contrast, our approach achieves the highest Pass Rate under all limit settings, especially in low-resource settings. We attribute this to our method's ability to use Elo scores to dynamically select promising decision steps to explore while avoiding unpromising ones. Thus, our method shows superior efficiency over the baselines and demonstrates the practical advantages of our approach in real-world scenarios.

Figure 2: Efficiency experimental results under various API call limits (Pass Rate vs. limit on API calls). Figure 3: Performance on data splits with varied Elo scores (Pass Rate vs. normalized Elo score).

5.5 RELIABLE UTILITY ASSESSMENT OF ELO SCORE (RQ4)

To verify the effectiveness of our Elo-based Utility Construction in providing reliable utility assessments, we conducted a comprehensive analysis on the ToolBench dataset. As the Elo score serves as a metric representing the utility of each decision, we seek to determine whether it is a reliable indicator of decision quality. To this end, we partitioned the ToolBench dataset into several subsets based on the Elo scores assigned to the decision sequences generated by RADAGENT. We first collect the Elo scores for all ToolBench data and normalize them to the range 0 to 1. Next, we sort the normalized Elo scores and divide them into 10 intervals, obtaining 10 subsets of ToolBench data. Subsequently, we calculate the Pass Rate of each method on these 10 subsets. Figure 3 illustrates the experimental results. A discernible trend is observed across all methods: the Pass Rate consistently increases with higher Elo scores. This clear positive correlation between the Elo score and the Pass Rate demonstrates the efficacy of Elo-based Utility Construction in providing reliable assessments of decision quality.
2308.12519#18
2308.12519#20
2308.12519
[ "2305.14318" ]
2308.12519#20
Rational Decision-Making Agent with Internalized Utility Judgment
A higher Elo score indicates that the decision sequence is more likely to represent an accomplished solution to the instruction, whereas a lower Elo score suggests that the instruction may be more challenging and that the corresponding decision sequence may not effectively solve it.

5.6 ERROR ANALYSIS (RQ5)

In this section, we present a comprehensive case analysis to elucidate which tasks RADAGENT effectively addresses. By dissecting the nature of RADAGENT's successes and failures, we shed light on its autonomous decision-making capabilities and limitations. Through this analysis, we provide deeper insights into the distinctive attributes of our proposed approach. We begin by categorizing the common reasons for failure encountered by the various methods, employing an autonomous filtering technique. These reasons encompass: (1) Unavailable Tool: occurrences where a subset of the designated tools is inaccessible, e.g., an HTTP 404 or 500 error. (2) Tool Call Error: instances of tool call errors, including parameter format mismatches and missing mandatory parameter fields. (3) Hallucinated Tool: instances where the model employs tools that were not provided, i.e., invoking a non-existent tool. (4) Decision Failure: instances where the model fails to accomplish the task although none of the aforementioned problems occur. We report the incidence ratio of these categories together with the fix ratio, i.e., how often models successfully fix the occurring errors and still accomplish the instructions. Note that these failure categories may coexist in a single instruction.

Table 3: Incidence ratio and fix ratio of common failure reasons in the decision-making process.
Method: Hallucinated Tool (Ratio / Fix Ratio); Tool Call Error (Ratio / Fix Ratio); Unavailable Tool; Decision Failure
CoT@3: 14.2 / 25.4; 41.2 / 14.8; 2.0; 52.5
BFS: 18.8 / 25.5; 50.8 / 31.1; 2.6; 48.6
DFSDT: 31.5 / 38.9; 62.5 / 41.0; 3.0; 26.4
RADAGENT: 42.1 / 53.3; 62.3 / 54.0; 3.0; 14.8

From Table 3, several noteworthy observations arise: (1) RADAGENT has the lowest incidence ratio of decision failure, highlighting its adeptness in decision making. (2) DFSDT and RADAGENT exhibit relatively higher incidence ratios of hallucinated tools, while RADAGENT surpasses
2308.12519#19
2308.12519#21
2308.12519
[ "2305.14318" ]
2308.12519#21
Rational Decision-Making Agent with Internalized Utility Judgment
others in terms of the fix ratio, indicating its proficiency in rectifying this failure. (3) RADAGENT significantly outperforms the other methods in fixing tool call errors, demonstrating the robustness of its self-judgment ability. (4) All methods have a similar incidence ratio of Tool Call Error, which shows that some inoperative APIs still exist in ToolBench and influence the decision-making process. (5) Lastly, we examine cases in which all methods fail. While certain cases remain unsolvable due to the ambiguity of user-provided values (e.g., user ID, email address) or restrictions imposed by limited tool-chain lengths, a subset of the challenges underscores the need for advanced decision-making proficiencies. Taking a step further, we synthesize the case analysis results to elucidate the multifaceted competencies that a decision-making method requires.
2308.12519#20
2308.12519#22
2308.12519
[ "2305.14318" ]
2308.12519#22
Rational Decision-Making Agent with Internalized Utility Judgment
• Exception Handling. During the decision-making process, exceptions may occur (e.g., an unavailable tool or tool call errors), so a decision step may not meet expectations. Under these circumstances, decision-making methods should be able to handle the exceptions and navigate to a new decision. CoT is susceptible to these scenarios, which lead the model into a loop of repeated erroneous decisions. In contrast, tree-based methods excel at mitigating such occurrences because they can explore alternative decisions to avoid exceptions.
2308.12519#21
2308.12519#23
2308.12519
[ "2305.14318" ]
2308.12519#23
Rational Decision-Making Agent with Internalized Utility Judgment
• Diversity Exploration. To accomplish a task, there exist different exploration directions. For example, in tool-use scenarios, some tools have analogous functionalities, and one of them is the most suitable for accomplishing the task. DFS and DFSDT, constrained by their relatively narrow search width, might miss the optimal solution. Although BFS can make several decisions in one step, it fails to explore promising decisions because it cannot judge the value of each decision well. In contrast, RADAGENT assigns lower scores to less promising decision steps, displaying a tendency to explore novel avenues. This exemplifies a scenario demanding diversity in exploration.
2308.12519#22
2308.12519#24
2308.12519
[ "2305.14318" ]
2308.12519#24
Rational Decision-Making Agent with Internalized Utility Judgment
• Decision Reflection. Complex tasks should be divided into sequential decisions, and the model should accomplish them progressively to finish the task. This requires models to verify the completeness of each decision step and to reflect so as to make better decisions toward successful directions. DFSDT cannot evaluate intermediate decisions, so it cannot learn a good reflection from previous decisions to select an effective one. RADAGENT, benefiting from its self-judgment mechanism, assigns higher scores to decision steps aligned with comprehensive solution strategies. This ability to recognize the completeness of previous decisions and guide the next decision accordingly is a hallmark of an effective decision-making method.

# 6 RELATED WORK

Decision Making Methods for LLM-based Agents Efficient and effective decision-making ability is fundamental for LLM-based agents to attain specific objectives (Yao et al., 2022; 2023; Hao et al., 2023a; Besta et al., 2023; Sel et al., 2023). Although LLMs are pre-trained on large-scale corpora, which equips them with substantial common sense and knowledge for solving many problems, LLM-based agents still struggle to make the multi-step decisions required by complex and diverse realistic tasks. Recently, as Chain-of-Thought (Wei et al., 2023) has demonstrated its capability to decompose complex questions into sequential intermediate steps, several LLM-based decision-making methods have been proposed to enhance the decision-making ability of agents. ReACT (Yao et al., 2022) develops a variant of CoT to leverage the reasoning ability of LLMs in decision-making scenarios. Reflexion (Shinn et al., 2023) further offers a remedial approach that makes LLMs reflect on their failures and summarize the reasons during the decision process, and
2308.12519#23
2308.12519#25
2308.12519
[ "2305.14318" ]
2308.12519#25
Rational Decision-Making Agent with Internalized Utility Judgment
then correct their mistakes in a second attempt. Building on these methods, several tree-based decision-making methods have been proposed to adapt the decision-making ability of LLMs to specific tasks. Tree-of-Thought (Yao et al., 2023) proposes BFS and DFS decision-making algorithms for the Game of 24, Creative Writing, and Mini Crosswords tasks. RAP (Hao et al., 2023a) applies the Monte Carlo Tree Search algorithm to find good solutions in Blocksworld, Math Reasoning, and Logical Reasoning tasks. DFSDT (Qin et al., 2023c), following a similar tree search algorithm, proposes an efficient version of DFS for decision making. However, the aforementioned methods need a task-specialized external performance measure to guide the decision-making process, which limits their scope of application. In this paper, we propose RADAGENT, which internalizes the utility judgment ability with an Elo rating system to achieve rationality for agents and provide optimal solutions. Tool Learning Recent investigations have highlighted the growing proficiency of LLM-based agents in mastering tools and executing decision-making processes in intricate contexts (Qin et al., 2023b; Vemprala et al., 2023; Nakano et al., 2021; Qin et al., 2023a; Shen et al., 2023; Wu et al., 2023; Schick et al., 2023; Hao et al., 2023b; Qian et al., 2023; Song et al., 2023; Qin et al., 2023c). Incorporating external tools into the operational framework of LLM-based agents gives them immediate access to up-to-date factual knowledge (Yang et al., 2023), versatile multimodal capabilities (Gupta & Kembhavi, 2023), and specialized proficiencies tailored to vertical domains (Jin et al., 2023). However, when confronted with real-world tasks that often require multiple tools, LLM agents must engage in multi-step decision-making to select tools and determine their sequencing. Consequently, decision-making ability in tool learning scenarios is imperative for effectively tackling practical applications.

# 7 CONCLUSION
2308.12519#24
2308.12519#26
2308.12519
[ "2305.14318" ]
2308.12519#26
Rational Decision-Making Agent with Internalized Utility Judgment
In this work, we have introduced a novel approach, RADAGENT, that internalizes the utility judgment ability so that agents can achieve rationality across a diverse range of real-world tasks. The introduction of Elo-based Utility Construction enables agents to learn a numeric utility for each decision step and to guide the decision-making process. Extensive experiments on the ToolBench dataset have confirmed the effectiveness of RADAGENT, which outperforms baseline methods by achieving notable Pass Rate improvements and producing higher-quality solutions. Moreover, the reduction in LLM API calls showcases the efficiency gains of our approach. By empowering agents with rationality, our work paves the way for their broader utilization in real-world scenarios, alleviating the reliance on external performance measures.

# REFERENCES

Agentgpt. Python. https://github.com/reworkd/AgentGPT, 2023. K. Arrow. Rational choice functions and orderings. Economica, 26:121, 1959.
2308.12519#25
2308.12519#27
2308.12519
[ "2305.14318" ]
2308.12519#27
Rational Decision-Making Agent with Internalized Utility Judgment
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. arXiv preprint arXiv:2308.09687, 2023. Cheng-Han Chiang and Hung-yi Lee. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937, 2023. AE Elo. The proposed USCF rating system, its development, theory, and applications. Chess Life, XXII(8): 242–247, 1967. Tanmay Gupta and Aniruddha Kembhavi. Visual programming: Compositional visual reasoning without training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14953–
2308.12519#26
2308.12519#28
2308.12519
[ "2305.14318" ]
2308.12519#28
Rational Decision-Making Agent with Internalized Utility Judgment
14962, 2023. Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992, 2023a. Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings. arXiv preprint arXiv:2305.11554, 2023b. J. Hendler. Is there an intelligent agent in your future? Nature, 1999. Qiao Jin, Yifan Yang, Qingyu Chen, and Zhiyong Lu.
2308.12519#27
2308.12519#29
2308.12519
[ "2305.14318" ]
2308.12519#29
Rational Decision-Making Agent with Internalized Utility Judgment
Genegpt: Augmenting large language models with domain tools for improved access to biomedical information. ArXiv, 2023. D. Kahneman and A. Tversky. Choices, values, and frames. 2000. P. Maes. Agents that reduce work and information overload. Commun. ACM, 37:30–40, 1994. Yohei Nakajima. Babyagi. Python. https://github.com/yoheinakajima/babyagi, 2023.
2308.12519#28
2308.12519#30
2308.12519
[ "2305.14318" ]
2308.12519#30
Rational Decision-Making Agent with Internalized Utility Judgment
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. ArXiv preprint, abs/2112.09332, 2021. OpenAI. OpenAI: Introducing ChatGPT, 2022. URL https://openai.com/blog/chatgpt. OpenAI. Gpt-4 technical report, 2023. C. Plott. Path independence, rationality, and social choice. Econometrica, 41:1075–1091, 1973.
2308.12519#29
2308.12519#31
2308.12519
[ "2305.14318" ]
2308.12519#31
Rational Decision-Making Agent with Internalized Utility Judgment
Cheng Qian, Chi Han, Yi R Fung, Yujia Qin, Zhiyuan Liu, and Heng Ji. Creator: Disentangling abstract and concrete reasonings of large language models through tool creation. arXiv preprint arXiv:2305.14318, 2023. Yujia Qin, Zihan Cai, Dian Jin, Lan Yan, Shihao Liang, Kunlun Zhu, Yankai Lin, Xu Han, Ning Ding, Huadong Wang, et al.
2308.12519#30
2308.12519#32
2308.12519
[ "2305.14318" ]
2308.12519#32
Rational Decision-Making Agent with Internalized Utility Judgment
Webcpm: Interactive web search for chinese long-form question answering. arXiv preprint arXiv:2305.06849, 2023a. Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. Tool learning with foundation models. arXiv preprint arXiv:2304.08354, 2023b. Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789, 2023c. Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, et al. Large language models are effective text rankers with pairwise ranking prompting. arXiv preprint arXiv:2306.17563, 2023d. Toran Bruce Richards.
2308.12519#31
2308.12519#33
2308.12519
[ "2305.14318" ]
2308.12519#33
Rational Decision-Making Agent with Internalized Utility Judgment
Auto-gpt: An autonomous gpt-4 experiment, 2023. Timo Schick, Jane Dwivedi-Yu, Roberto Dess`ı, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. ArXiv preprint, abs/2302.04761, 2023. J. Searle. Speech acts: An essay in the philosophy of language. 1969.
2308.12519#32
2308.12519#34
2308.12519
[ "2305.14318" ]
2308.12519#34
Rational Decision-Making Agent with Internalized Utility Judgment
Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar, Lu Wang, Ruoxi Jia, and Ming Jin. Algorithm of thoughts: Enhancing exploration of ideas in large language models. arXiv preprint arXiv:2308.10379, 2023. Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving AI tasks with ChatGPT and its friends in HuggingFace, 2023.
2308.12519#33
2308.12519#35
2308.12519
[ "2305.14318" ]
2308.12519#35
Rational Decision-Making Agent with Internalized Utility Judgment
Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning, 2023. Yifan Song, Weimin Xiong, Dawei Zhu, Cheng Li, Ke Wang, Ye Tian, and Sujian Li. Restgpt: Connecting large language models with real-world applications via restful apis. arXiv preprint arXiv:2306.06624, 2023. Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor. Chatgpt for robotics: Design principles and model abilities. Technical Report MSR-TR-2023-8, Microsoft, February 2023. Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926, 2023. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023. M. Wooldridge and N. Jennings. Intelligent agents: theory and practice. The Knowledge Engineering Review, 10:115–152, 1995. Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan. Visual chatgpt: Talking, drawing and editing with visual foundation models. ArXiv preprint, abs/2303.04671, 2023. Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, and Xindong Wu.
2308.12519#34
2308.12519#36
2308.12519
[ "2305.14318" ]
2308.12519#36
Rational Decision-Making Agent with Internalized Utility Judgment
Chatgpt is not enough: Enhancing large language models with knowledge graphs for fact-aware language modeling. arXiv preprint arXiv:2306.11489, 2023. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. ArXiv preprint, abs/2210.03629, 2022. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023.

# A SELF-JUDGMENT PROMPT

Our self-judgment prompt is designed as follows:

You are value-GPT, an expert in defining which trail is better and closer to solving the task. Here is the task description:
*******************************
{{BEGIN_DESCRIPTION}}
your_task: {task_description}
your_query: {input_description}
{{END_DESCRIPTION}}
*******************************
Here are two candidates A and B. They both try to handle the task with some function calls. Their trails are as follows.
*******************************
{{CANDIDATE_A_START}}
{candidate_A}
{{CANDIDATE_A_END}}
*******************************
{{CANDIDATE_B_START}}
{candidate_B}
{{CANDIDATE_B_END}}
*******************************

Then, ChatGPT should call the following function (see https://openai.com/blog/function-calling-and-other-api-updates) to give the judgment result:

{
  "name": "choose_preference",
  "description": "Choose the preferred answer for the query within all given answers.",
  "parameters": {
    "type": "object",
    "properties": {
      "preference": {
        "type": "number",
        "description": "The index of the preferred answer in all given answers."
      }
    }
  }
}
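For illustration, the snippet below shows one way the self-judgment prompt and the choose_preference schema could be wired together. The `chat_complete` helper is a hypothetical stand-in for whatever ChatGPT client the implementation uses; it is assumed to send the prompt with the function schema attached and to return the raw function-call arguments, and it is not part of the paper.

```python
import json

CHOOSE_PREFERENCE = {
    "name": "choose_preference",
    "description": "Choose the preferred answer for the query within all given answers.",
    "parameters": {
        "type": "object",
        "properties": {
            "preference": {
                "type": "number",
                "description": "The index of the preferred answer in all given answers.",
            }
        },
    },
}

def judge_pair(chat_complete, prompt_template: str, task: str, query: str,
               cand_a: str, cand_b: str) -> int:
    """Fill the self-judgment prompt and return 0 if candidate A is preferred, 1 for B."""
    prompt = prompt_template.format(
        task_description=task, input_description=query,
        candidate_A=cand_a, candidate_B=cand_b,
    )
    arguments = chat_complete(prompt, functions=[CHOOSE_PREFERENCE])
    return int(json.loads(arguments)["preference"])
```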
2308.12519#35
2308.12519
[ "2305.14318" ]
2308.12503#0
CGMI: Configurable General Multi-Agent Interaction Framework
# CGMI: Configurable General Multi-Agent Interaction Framework Jinxin Shi1, Jiabao Zhao1*, Yilei Wang1, Xingjiao Wu2, Jiawen Li1, Liang He1 1School of Computer Science and Technology, East China Normal University, Shanghai, China 2School of Computer Science, Fudan University, Shanghai, China [email protected], [email protected], [email protected], xjwu [email protected], [email protected], [email protected] # Abstract
2308.12503#1
2308.12503
[ "2302.01560" ]
2308.12503#1
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment'
2308.12503#0
2308.12503#2
2308.12503
[ "2302.01560" ]
2308.12503#2
CGMI: Configurable General Multi-Agent Interaction Framework
s realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The ex- periments indicate that aspects such as the teaching method- ology, curriculum, and student performance closely mirror real classroom settings. We will open source our work. Introduction Agent-based social simulation (ABSS) simulates social in- teractions in a virtual environment. By observing agent be- havior, we can explore complex social phenomena and ver- ify the effects of different social strategies in a controlled setting(Davidsson and Paul 2002). However, improving sim- ulation accuracy and designing complex agents remain key challenges(Aher, Arriaga, and Kalai 2023). With the capa- bilities of large language models (LLMs) such as GPT4 (OpenAI 2023), we can construct more complex environ- ment and create more realistic agents to simulate social phe- nomena. However, when using LLMs to complete ABSS tasks, the following issues need to be addressed: (1) How to trigger the capabilities of LLMs to solve complex problems? (2) How to ensure that agents have a stable role and behav- ior output based on LLMs without forgetting? (3) How to design a communication mechanism for LLMs-based agents to truly simulate interactions? Existing LLMs-based agents are mainly divided into ac- tion agents (Yao et al. 2023; Press et al. 2023) and plan-and- execute agents (Wang et al. 2023a). Action agents make de- cisions based on previous outputs and are suitable for small tasks. Plan-and-execute agents formulate and execute action plans, suitable for long-term goal tasks. However, in com- plex scenarios, LLMs-based agents may produce mechani- cal and superficial content or not execute according to the plan. Inspired by the Adaptive Control of Thought (ACT*) model (Anderson and R 1983), we designed a cognitive ar- chitecture equipped with skill library for agents. Specifi- cally, we employ the Chain of Thought (CoT) and Chain of Action (CoA) methods to extract declarative and procedural memories from the agentâ s working memory. During the re- flection and planning processes, content is retrieved from the skill library, ensuring deeper and more specialized insights.
2308.12503#1
2308.12503#3
2308.12503
[ "2302.01560" ]
2308.12503#3
CGMI: Configurable General Multi-Agent Interaction Framework
Assigning each intelligent agent with a unique identity, personality, and capability (Wang et al. 2023c) can offer a more humanized and emotional interactive experience, and also enhance the realism of simulating complex social sce- narios (Argyle et al. 2023). Although LLMs like GPT4 pos- sess strong role-playing capabilities, we found that LLMs tend to forget the original character settings in multi-turn di- alogues and make decisions that are inconsistent with the characterâ
2308.12503#2
2308.12503#4
2308.12503
[ "2302.01560" ]
2308.12503#4
CGMI: Configurable General Multi-Agent Interaction Framework
s design. Additionally, due to the limitations of the context window, itâ s challenging to set roles comprehen- sively and in fine detail. To address these issues, this paper introduces a tree-structured persona model for character as- signment, detection, and maintenance, which is beneficial for agent interaction performance. Influenced by assistant repeats instruction, infinite loop of messages, and conversation termination conditions, it re- mains challenging for chat agents to automatically collabo- rate to accomplish tasks in specific scenarios(Li et al. 2023). Setting scenario-adapted general agents is used to solve scenario-specific tasks for role agents, can help role agents avoid the aforementioned problems and enhance the real- ism of virtual scenes. For this purpose, this paper explores a Configurable General Multi-Agent Interaction Framework (CGMI), that can simulate real-life scenarios by binding general agents with role agents.
2308.12503#3
2308.12503#5
2308.12503
[ "2302.01560" ]
2308.12503#5
CGMI: Configurable General Multi-Agent Interaction Framework
In this work, we take the â classroom teaching scenarioâ as an example, employing the CGMI framework to simulate the teaching process between â teacherâ and â studentsâ , in- cluding teacher agent, student agents, assistant agents and supervisory agents. The experimental results indicate that the interactions in the virtual classroom aligns with actual teaching. It helps to assist in teacher instruction, evaluate teaching competencies, and validate teaching hypotheses. In summary, the major contributions of this paper are threefold: â ¢ The introduction of cognitive structure equipped with skill library, combining human cognition and skill library retrieval, enabling agents to engage in deep reflection and planning. â ¢ Designed a tree-structured approach for assigning, de- tecting, and maintaining the personal traits of agents, which reduces memory pressure on agents and improves stability. â ¢ The construction of a Configurable General Multi-agent Interaction framework (CGMI), supporting social exper- imental research in specific scenarios. Related Work In this section, we will review agent research for solving domain problems, as well as agent research for simulating real human interaction processes. Agents for Solving Domain Problems Recent studies in LLMs have explored the utilization of agent systems for domain-specific tasks across various sectors. In healthcare, (Nair et al. 2023) introduced a multi-agent system that enhances treatment recommenda- tions via communication feedback. (Qian et al. 2023) pre- sented CHATDEV: a simulated development team where agents oversee design, coding, testing, and documenta- tion, thereby ensuring effective game development coor- dination. (Alexandru et al. 2015) designed a multi-agent e-learning environment tailored for education, providing customized support for instructional decisions. ChemCrow, highlighted in (Bran et al. 2023), formulated a framework that grants agents access to external knowledge reposito- ries, consequently amplifying their efficacy in areas like or- ganic synthesis, drug discovery, and materials design. (Wang et al. 2023b) unveiled the DEPS interactive planning tech- nique, addressing long-term planning challenges within the Minecraft game. Collectively, these investigations illumi- nate agent applications tailored to particular domains and hurdles.
2308.12503#4
2308.12503#6
2308.12503
[ "2302.01560" ]
2308.12503#6
CGMI: Configurable General Multi-Agent Interaction Framework
Agents for Simulating Human Interactions A subsequent line of research focuses on crafting agents that emulate human social behaviors. (Park et al. 2022) fash- ioned a multi-agent town emulating authentic human activ- ities, including orchestrating social parties. (Li et al. 2023) delved into an agent communication framework that facil- itates varied social roles and simulates AI social patterns. Emphasizing the importance of social situational learning, (Krishna et al. 2022) developed an interactive agent capable of querying individuals online to assimilate visual knowl- edge. In the educational realm, (Markel et al. 2023) em- ployed GPT and other LLMs to mimic students, thus of- fering tangible training avenues for educators. (Jiang et al. 2023) explored the simulation of consistent personality and gender variations using conditional language models. Cu- mulatively, these studies accentuate agentsâ capacities to as- similate or mimic human social interactions. to details. â Descriptionâ : Big Five personality â Scoreâ : Openness to Experience â Scoreâ : 16 Figure 1:
2308.12503#5
2308.12503#7
2308.12503
[ "2302.01560" ]
2308.12503#7
CGMI: Configurable General Multi-Agent Interaction Framework
Tree structure of the Big Five Personality Scale. The root node has five sub-nodes, representing five coarse personalities. Their dimension values range from 5-25, and each coarse personality has five fine-grained leaf nodes, with dimension values ranging from 1-5. The larger the value, the more pronounced the characteristics of agents. Method In this section, the tree-structured approach for personality assignment, detection and maintenance, the cognitive struc- ture model enhanced with a skill library, and the construc- tion process of CGMI will be introduced respectively.
2308.12503#6
2308.12503#8
2308.12503
[ "2302.01560" ]
2308.12503#8
CGMI: Configurable General Multi-Agent Interaction Framework
As shown in Figure 2, the process of reconstructing the â class- room teachingâ scenario based on CGMI is displayed. Tree-Structured Persona Model Agent entities with unique personalities can not only com- plete specific tasks, but also enhance the authenticity of in- teractions (Qian et al. 2018; Mara Pudane and Radin 2017). In addition to setting specific personalities for agent entities, it is also necessary to set related styles according to the ap- plication scenario. For example, in teaching, teacher and stu- dents can have their own teaching and learning styles. How- ever, if only a rough persona is set for agents, the person- alized differences in its interactions are not obvious, and its stability will decrease as the complexity of roles, scenarios, and the length of the context increase (Jiang et al. 2023). this work proposes a tree- structured persona model for personality assignment, de- tection, and maintenance. We referred to the Big Five Per- sonality Scale (John, Srivastava et al. 1999), the teaching style scale (Grigorenko and Sternberg 1993), and the learn- ing style scale (Soloman and Felder 2005), and designed a tree structure to help agents remember and set different per- sonas. Taking personality setting as an example, as shwon in Figure 1, we built a personality scale T = {N1, N2, ..., Nn} based on the Big Five Personality Scale, where n = 26. N1 is the root node, and N2 to Nn are child nodes. Each node Ni includes a description Di and a score Si. As shown in Algorithm 1, we use depth-first traversal to set personality traits for the intelligent entity A.
2308.12503#7
2308.12503#9
2308.12503
[ "2302.01560" ]
2308.12503#9
CGMI: Configurable General Multi-Agent Interaction Framework
During the detection and maintenance process, this pa- per adopts an efficient random testing method, with the fol- lowing specific steps: (1) Randomly select m coarse-grained Step 1: Personalized Instructional Design Step 2: Customizable Role Configuration (Tre ws ser can modify the design in the role of observer agen!) (Personality, Cognitive level, learning nh) ee Teacher Students » 1 Agent Agent 4 YingZheng Ryan | he ~) ct ! aa 1 Course! ' = @_â __xteaching 3.1nstructional 1»! => i 1 1 1 \ 1 Objecti D | Topics 1.Learning â esign o Â¥, ~ Supervised Agent *) -=--7 Situation aaa am | 1- Supervise the teaching process | Analysis 4.Lesson â _5.Intention hie Stith Mu 2. Check the consistency of agent j Planning Analysis _ 7 Emily One aon ; Step 3: Teaching Implementation i BA 1 S (Teaching activities are dynamically adjusted according to the skill library and student feedback) \-------% Or)â Overall, we have addressed the cognitive, affective I think it can be applied in problem- solving scenarios of physics, tatiana ea Figure 2: Based on CGMI, a classroom teaching scenario is constructed. This scenario includes 3 general intelligent agents (teaching assistant agent, teaching process supervisor agent, consistency checker agent) and 6 role agents (teacher Mrs. Smith, student Ying Zheng, student Emily, student John, student Ryan and student Samantha). After the user inputs the course topic, the virtual classroom teaching scenario launches. The teaching assistant agent generates corresponding teaching plans and distributes them to Mrs. Smith and the teaching process supervisor agent. Mrs. Smith divides the teaching process into stages according to the plan. The teaching process supervisor agent monitors whether the current stage has ended and decide whether to enter the next stage. Before each role agentâ s statement, the consistency checker agent detects and maintains consistency between its personality and statement content.
When Mrs. Smith asks the class a question, the consistency checker agent judges each student's willingness to answer based on personality and classroom status, simulating real hand-raising.

Algorithm 1: Endowing the Big Five personality through depth-first traversal (DFS).
Input: Big Five Scale T, Agent A
Output: A = {T}
1: Define stack
2: Push root node of T into stack
3: while stack is not empty do
4:   Ni = stack.pop()
5:   A.get(Ni.Di, Ni.Si)
6:   if Ni has child nodes then
7:     Push child nodes of Ni into stack
8:   end if
9: end while
10: return A = {T}
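A minimal, self-contained sketch of Algorithm 1 is given below. The node fields (description, score, children), the Agent class, and the assign_trait call standing in for A.get(Ni.Di, Ni.Si) are illustrative assumptions; the paper only specifies that each node carries a description Di and a score Si and that traits are assigned by depth-first traversal.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class PersonaNode:
    """One node N_i of the persona scale: a description D_i and a score S_i."""
    description: str
    score: float
    children: List["PersonaNode"] = field(default_factory=list)


class Agent:
    """Stand-in role agent; assign_trait() mimics A.get(N_i.D_i, N_i.S_i)."""
    def __init__(self, name: str):
        self.name = name
        self.traits: Dict[str, float] = {}  # remembered persona, later checked by random tests

    def assign_trait(self, description: str, score: float) -> None:
        # In CGMI this information would be delivered to the LLM as part of the role prompt.
        self.traits[description] = score


def endow_personality(scale_root: PersonaNode, agent: Agent) -> Agent:
    """Algorithm 1: depth-first traversal over the persona scale T."""
    stack = [scale_root]
    while stack:
        node = stack.pop()
        agent.assign_trait(node.description, node.score)  # A.get(D_i, S_i)
        stack.extend(node.children)                        # push child nodes
    return agent


# Hypothetical fragment of the 26-node scale used in the paper.
scale = PersonaNode("Big Five personality", 0.0, [
    PersonaNode("Neuroticism", 4.0, [PersonaNode("Gets nervous about hard topics", 4.5)]),
    PersonaNode("Extraversion", 2.5, [PersonaNode("Rarely volunteers answers", 2.0)]),
])
student = endow_personality(scale, Agent("Emily"))
```

Using an explicit stack mirrors the pseudo-code; a recursive traversal would work equally well, since the scale contains only 26 nodes.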
During the detection and maintenance process, this paper adopts an efficient random testing method with the following steps: (1) Randomly select m coarse-grained personalities for testing. (2) If the test is correct, select m fine-grained personalities under these m coarse-grained personalities for further testing; if the fine-grained test is also correct, the agent's personality memory is considered complete. (3) If an error occurs at any stage, the true values of all selected personalities are given back to the agent to restore its personality memory. This random testing method is not only efficient and comprehensive but also saves context-window resources. Multi-level testing avoids the illusion of an unchanged coarse-grained personality caused by changes in fine-grained personality. The method can also be applied to other related character scales, as detailed in the Appendix.
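The two-level random check could be implemented roughly as follows. The query_trait helper, which asks the agent to report a trait's value, and the numeric tolerance are assumptions introduced for illustration; assign_trait is the same stand-in as in the sketch above.

```python
import random


def random_persona_test(agent, scale_root, query_trait, m=3, tol=0.5):
    """Two-level random check of the agent's personality memory.

    (1) Sample m coarse-grained traits (children of the root).
    (2) If all pass, sample m fine-grained traits beneath them.
    (3) On any failure, re-inform the agent of the true values of all sampled traits.
    """
    coarse = random.sample(scale_root.children, min(m, len(scale_root.children)))

    def passed(nodes):
        return all(abs(query_trait(agent, n.description) - n.score) <= tol for n in nodes)

    if passed(coarse):
        fine_pool = [child for node in coarse for child in node.children]
        fine = random.sample(fine_pool, min(m, len(fine_pool))) if fine_pool else []
        if passed(fine):
            return True                      # personality memory judged complete
        sampled = coarse + fine
    else:
        sampled = coarse

    for node in sampled:                     # restore memory with the ground-truth values
        agent.assign_trait(node.description, node.score)
    return False
```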
Cognitive Architecture Equipped with a Skill Library
Over time, as interactions between the agent and its environment accumulate, there is a marked increase in the volume and intricacy of the agent's memory stream (Park et al. 2023; Weng 2023). This proliferation necessitates an advanced cognitive architecture to process the burgeoning data. However, the cognitive architecture currently embedded in LLM-based agents only allows agents to plan and reflect in a linear fashion, reminiscent of an assembly line. To redress this shortfall, this paper introduces a cognitive architecture infused with a domain-specific skill library, rooted in the Adaptive Control of Thought (ACT*) paradigm (Anderson 1983). This architecture facilitates parallel and bidirectional planning and reflection, drawing upon the agent's memory and skill repository, thus steering agent development towards adaptive control and rational deliberation akin to human cognition.

[Figure 3 components: Declarative Memory, Procedural Memory, Working Memory, and Skill Library, connected to the outside world ("get from the outside", "action to the outside") and summarized via CoT and CoA.]
Figure 3: The cognitive architecture with skill library.

Central to this cognitive framework are four pivotal components, as delineated in Figure 3. The foundational pillars of agent cognition are Declarative Memory (Md) and Procedural Memory (Mp). The former embodies the agent's library of factual knowledge, encompassing data on objects, individuals, locales, occurrences, and their interconnections, serving as the cornerstone for rational deduction. Procedural Memory, on the other hand, comprises operational guidelines that empower the agent to pursue objectives and surmount challenges. These guidelines operate by matching against facts stored declaratively, triggering actions geared towards specific objectives. The Skill Library (L) is a configurable domain knowledge base that supports the agent's reflection and planning; it can be viewed as a compilation of the agent's abilities to leverage its knowledge in situation-specific ways. Working Memory (Mw) is an agile, self-refreshing module acting as a bridge between memory and the external milieu: it not only directs agent actions based on processed memories but also assimilates external data, subsequently refining it into declarative and procedural knowledge via the Chain of Thoughts (CoT) and Chain of Actions (CoA).

When an interaction starts, an agent denoted as A = {T, B} and equipped with the cognitive architecture B = {Mw, Md, Mp, L} activates these four components, ensuring prolonged engagement in multifaceted settings. Formally, the mechanism through which the agent gleans information from the external world at time t is denoted Fget(t). Upon temporary storage in Mw, the agent A distills this information using thought and action chains, leading to the formation of Declarative and Procedural Memory:

Md(t) = Fsum(Pcot + Mw(Fget(t)))   (1)
Mp(t) = Fsum(Pcoa + Mw(Fget(t)))   (2)

where Pcot signifies the CoT prompt (e.g., "Summarize the class content sequentially"), Pcoa denotes the CoA prompt (e.g., "Detail the pedagogical steps"), and Fsum denotes the process of condensing information within the Working Memory. In subsequent interactions, when agent A readies its response for moment t + 1, it first taps into Md, Mp, and L, extracting reflections and strategies from the preceding moment t, which then translate into overt actions:

R(t) = Fref(Md(t) + L)   (3)
P(t) = Fpla(Mp(t) + L)   (4)
ACT(t + 1) = Fact(R(t) + P(t) + Mw(Fget(t)))   (5)

where Fref and Fpla denote the reflection and planning processes over Declarative and Procedural Memory at moment t, R(t) and P(t) represent the resulting reflection and plan at time t, and Fact combines these insights, the plan, and the skill repertoire to produce ACT(t + 1).
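Taken together, Equations (1)-(5) amount to a summarize, reflect, plan, and act loop over the four components. The following is a minimal sketch, where llm is any callable mapping a prompt string to text and all prompt templates are illustrative assumptions rather than the prompts used in the paper.

```python
P_COT = "Summarize the class content sequentially."   # CoT prompt, Eq. (1)
P_COA = "Detail the pedagogical steps."                # CoA prompt, Eq. (2)


class CognitiveAgent:
    def __init__(self, llm, skill_library):
        self.llm = llm                                 # callable: prompt -> text
        self.L = skill_library                         # domain skill library
        self.Mw, self.Md, self.Mp = [], [], []         # working / declarative / procedural memory

    def perceive(self, observation: str) -> None:      # F_get(t), stored in M_w
        self.Mw.append(observation)

    def summarize(self) -> None:                        # Eqs. (1)-(2): form M_d(t) and M_p(t)
        context = "\n".join(self.Mw)
        self.Md.append(self.llm(f"{P_COT}\n{context}"))
        self.Mp.append(self.llm(f"{P_COA}\n{context}"))

    def step(self, observation: str) -> str:            # Eqs. (3)-(5): reflect, plan, act
        self.perceive(observation)
        self.summarize()
        reflection = self.llm(f"Reflect on: {self.Md[-1]}\nSkills: {self.L}")  # R(t)
        plan = self.llm(f"Plan using: {self.Mp[-1]}\nSkills: {self.L}")        # P(t)
        return self.llm(                                                       # ACT(t+1)
            f"Act given reflection: {reflection}\nplan: {plan}\n"
            f"current context: {observation}"
        )
```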
Configurable General Multi-Agent Interaction Framework
With the support of the structured persona model and the cognitive architecture enhanced with a skill library, a single agent can play multiple roles in specific scenarios to complete complex tasks. However, using LLM-based agents to achieve preset goals in specific tasks often fails to present real social interactions, because simulating social phenomena requires multiple agents to interact and cooperate in a human-like manner. Therefore, this paper introduces the Configurable General Multi-Agent Interaction Framework (CGMI), which can simulate real interactions.

In the context of classroom teaching, this paper explores how CGMI promotes interaction and collaboration among multiple agents. In addition to a virtual teacher Agent and virtual student Agents, we also design assistant Agents responsible for setting educational goals, planning teaching schedules, and analyzing students' willingness to speak, in support of the teacher's teaching activities. These assistant Agents can adjust their functional configurations based on the specific scenario. To ensure the quality of the interaction process, we introduce a supervisory Agent responsible for detecting "personality forgetting", ensuring that the teacher Agent proceeds with teaching as planned, and determining when to end a discussion. Through the CGMI framework, each agent can engage in deeper, personalized dialogue and task completion, collaboratively creating a realistic virtual teaching environment.

Using classroom teaching as an example, and building on the cognitive architecture and persona model, an agent A = {T, B} can play different roles in specific scenarios. The state of the classroom at time t is represented as:

STA(t) = I(Atea, Astu, t)   (6)

where I represents the interaction between the teacher and the students, Atea represents the teacher, and Astu represents the set of students {Astu1, Astu2, ..., Astun}. When the lesson begins, the supervisory Agent Asup receives the teaching plan TP and the multi-stage teaching process TS decomposed by the teacher. Asup monitors the classroom, obtains the phase-transition signal, and decides whether to proceed to the next teaching phase or end the lesson.
This can be represented as:

SIG(t) = Asup(TP + TS + STA(t))   (7)

With the help of Asup, teachers can teach more effectively, and the interaction between teacher and students is more targeted, without deviating from the topic. During the questioning session, the supervisory Agent selects the most suitable student to respond, based on an analysis of each student's cognition and willingness to speak.
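A rough sketch of how the stage control of Eq. (7) and the simulated hand-raising might be wired together is shown below. All agent interfaces (speak, asked_question, judge_willingness, judge_signal, maintain_persona) are hypothetical names introduced for illustration, not the paper's actual API.

```python
def run_stage(stage, teacher, students, supervisor, teaching_plan, checker):
    """One teaching stage: Asup watches STA(t) and emits SIG(t) = Asup(TP + TS + STA(t))."""
    classroom_state = []                                   # STA(t): the dialogue so far
    while True:
        utterance = teacher.speak(stage, classroom_state)
        checker.maintain_persona(teacher, utterance)       # consistency checker agent
        classroom_state.append(("teacher", utterance))

        if teacher.asked_question(utterance):
            # The supervisor picks students willing to answer (simulated hand-raising).
            willing = [s for s in students
                       if supervisor.judge_willingness(s, classroom_state)]
            if willing:
                reply = willing[0].speak(stage, classroom_state)
                checker.maintain_persona(willing[0], reply)
                classroom_state.append((willing[0].name, reply))

        signal = supervisor.judge_signal(teaching_plan, stage, classroom_state)  # SIG(t)
        if signal in ("next_stage", "end_lesson"):
            return signal, classroom_state
```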
[Figure 4 excerpts, Course-ONE. Class process: Mrs. Smith: "Quadratic equations can be found in various fields, from ..." Emily: "I'm really nervous about this lesson on quadratic equations." Mrs. Smith: "Emily, but please know that I am here to ..." Reflection: "... Student interests ... I need more encouragement for my students; Emily gets nervous when facing math. Mrs. Smith utilized ..." Plan: "Using interesting forms and gamified teaching to stimulate students' interest in learning and reduce resistance ..."]
[Figure 4 excerpts, Course-TWO. Class process: Mrs. Smith: "... Can anyone explain how the coefficients 'b' and 'c' influence the quadratic function's graph? ..." Emily: "The coefficient 'b' in the quadratic function affects ..." Mrs. Smith: "Excellent explanation, Emily. I'm glad to see that you're no longer afraid of mathematics! You ..." Reflection: "Mrs. Smith effectively engages and motivates students in learning about quadratic functions ..." Plan: "... involve changing different parameters of the quadratic function (such as coefficients and constants) ..."]
[Figure 4 excerpts, Course-THREE. Class process: Mrs. Smith: "... Remember, learning is a journey that is best enjoyed together. Let's embark on this exciting ..." John: "... Could you provide an example for us ..." Reflection: "... Sometimes students may not understand and they may need more examples ..." Plan: "... their understanding and application of quadratic functions ... using the example of buying apples ..."]
Figure 4: Teacher Mrs. Smith's classroom experience and her reflection and planning in the virtual classroom. The red, green, and blue text in the figure marks the events the teacher noticed in three different classes; the teacher reflects and plans around these events, which then serve as focal points in the subsequent teaching process.
The supervisory Agent also monitors the persona status of the role agents in real time and restores it if there is any deviation. Users can also operate the supervisory Agent to adjust the classroom process according to their needs.

# Experiments
In this section, we first present the "classroom teaching scenario" reconstructed using the CGMI framework and analyze the teaching behaviors during the class. Subsequently, through comparative experiments, we showcase the behavioral advantages of agents equipped with human intrinsic traits (such as personality and cognitive structure). Lastly, we analyze the significance of the general intelligent agents in enhancing the interaction logic of the role-specific agents. In our experiments, we adopted OpenAI's gpt-3.5-turbo-16k model (OpenAI 2022), instantiating one teacher, five students, and four general intelligent agents. Each agent was given a unique role setting and task objective (see Appendix). These sessions focused on the following topics: C1: Concept of the Quadratic Equation, C2: Methods for Solving the Quadratic Equation, and C3: Applications of the Quadratic Equation.
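For illustration, a role agent could be instantiated on top of gpt-3.5-turbo-16k roughly as follows; the system prompt is a hypothetical stand-in for the role settings and task objectives listed in the Appendix.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def make_role_agent(role_description: str):
    """Return a closure that carries on a persona-conditioned conversation."""
    history = [{"role": "system", "content": role_description}]

    def speak(user_message: str) -> str:
        history.append({"role": "user", "content": user_message})
        response = client.chat.completions.create(
            model="gpt-3.5-turbo-16k",
            messages=history,
            temperature=0.7,
        )
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

    return speak


# Hypothetical persona-laden role setting for one of the five students.
emily = make_role_agent(
    "You are Emily, a middle-school student with high neuroticism and average "
    "math ability; you get nervous about quadratic equations."
)
```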
Table 1: Analysis results based on FIAS (percentage of each interaction category in sessions C1, C2, and C3).
Categories: B1. Accepts feeling; B2. Praises or encourages; B3. Accepts ideas; B4. Asks questions; B5. Lecturing; B6. Gives directions; B7. Criticising; B8. Pupil-talk response; B9. Pupil-talk initiation.
Values (C1/C2/C3): 0.35% 19.08% 12.99% 11.98% 6.39% 3.89% 1.77% 1.03% 22.97% 33.61% 35.61% 6.36% 7.01% 1.24% 5.65% 28.62% 20.41% 21.56% 11.31% 17.32% 17.07% 0% 0.30% 5.69% 1.50% 5.09% 1.20%
# Analysis of Teaching Behavior
We employed the Flanders Interaction Analysis System (FIAS) to examine interactive behaviors between the teacher and students across the three virtual classroom sessions. We hired two trained experts to encode the teaching behaviors; the two coders worked independently, encoding each sentence once and sequentially constructing a behavior sequence, and ultimately reached consistent evaluation results.

Table 1 shows the proportion of each interaction behavior in the course. Overall, the variety of interactions in the virtual classroom is rich and consistent with actual teaching, validating the effectiveness of CGMI by demonstrating its ability to organize interaction and collaboration among multiple agents. According to the results in Table 1, teacher behaviors (B1, B2, B3, B4, B5, B6, B7) made up an average of 61.23% of the discourse in these mathematics sessions.

[Figure 5 panels: student self-introductions from the first and second half of C1, comparing agents configured without a personality and with a personality; the students shown include Emily, Ryan, Samantha, and Ying Zheng.]
Figure 5: The influence of personal traits on agent expression.

In contrast, students' behavior (B8, B9), facilitated by teacher prompts, represented an average of 23.53%. Notably, the ratio of indirect-influence behaviors (B1, B2, B3, B4) to direct-influence behaviors (B5, B6, B7) remained below 1, which suggests that the virtual classroom is dominated by the teacher, who exercises direct control over the class. Furthermore, student-initiated interactions constituted about 15.23%, suggesting that students remain engaged, deliberating and responding to queries under the teacher's guidance.
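The proportions in Table 1 and the indirect/direct comparison above follow from simple counts over the coded behavior sequence. The sketch below assumes the coders' output is a list of FIAS category labels; the function name and data layout are illustrative, not the paper's actual tooling.

```python
from collections import Counter

INDIRECT = {"B1", "B2", "B3", "B4"}   # accepting feelings/ideas, praise, questions
DIRECT = {"B5", "B6", "B7"}           # lecturing, giving directions, criticising


def fias_summary(coded_sequence):
    """coded_sequence: e.g. ["B5", "B4", "B8", "B2", ...] produced by the coders."""
    counts = Counter(coded_sequence)
    total = sum(counts.values())
    proportions = {cat: 100.0 * n / total for cat, n in counts.items()}
    indirect = sum(counts[c] for c in INDIRECT)
    direct = sum(counts[c] for c in DIRECT)
    id_ratio = indirect / direct if direct else float("inf")
    return proportions, id_ratio


props, ratio = fias_summary(["B5", "B5", "B4", "B8", "B2", "B9", "B6"])
print(props, ratio)   # a ratio below 1 indicates a teacher-dominated (direct-influence) classroom
```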
# Intrinsic Characteristics of Intelligent Agents
To assess the efficacy of the proposed cognitive architecture, we examined it through the lens of the teacher, Mrs. Smith, analyzing her classroom practices and her subsequent reflections and plans. As illustrated in Figure 4, we display part of her reflective and planning processes within a single lesson and across different lessons. Our analysis seeks to elucidate the influence of the cognitive architecture on agents, emphasizing the model's capacity for both reflection and planning. We analyzed its effectiveness both within and between classes. (1) Within the lesson: In Course-ONE, student Emily conveyed her anxiety, stating, "I'm really nervous about this lesson." Mrs. Smith, attuned to this feedback, incorporated it into her reflective process and instructional planning. Drawing from the library of teaching techniques, she employed strategies such as heightened encouragement and gamified instructional methods. A parallel observation was made in Course-TWO and Course-THREE: Mrs. Smith prompted students to consider, "How do coefficients 'b' and 'c' affect the graph of a quadratic function?"