id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---
2308.02151#8 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Among the prevalent transformer RL methods are online RL algorithms such as Proximal Policy Optimization or PPO (Schulman et al., 2017), and offline RL techniques such as Implicit Language Q-Learning or ILQL (Snell et al., 2022) and Direct Preference Optimiza- tion or DPO (Rafailov et al., 2023). These methods have been implemented in TRL/TRLX (von Werra et al., 2020) distributed training framework. # 3 CHALLENGES Although LLMs are not designed to handle tool use or take actions, it has been observed (Gravitas, 2023; Nakajima, 2023; Chase, 2023) that empirically for text-rich environment, especially when the actions and states are accurately described using natural languages, LLMs work surprisingly well. However there are still plenty of challenges applying LLM-based agents. Here we list several below. Spurious Actions LLMs are not pre-trained or designed with an action-agent application in mind. Even some restrictions are explicitly specified in the prompt, the LLM model may still generate spurious actions that are not in the action space A. | 2308.02151#7 | 2308.02151#9 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#9 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | 3 Limited Prompt Length LLM itself is stateless. However, in applications it is preferred to em- power agents with states or memories for better performance. It has been observed that LLM based agents are easy to run into infinite loops if the states are not handled nicely. Many LLM agents concatenate all the previous state descriptions and actions into the prompt so that LLM as a way to bestow â stateâ to the LLM. Inevitably this methodology runs into the prompt length issues. As the trajectory grows longer, the prompt runs out of spaces. Heuristic Prompt Engineering Even though a lot of paradigms have been proposed to improve LLM agentsâ performance (Yao et al., 2023; Ahn et al., 2022), there is a lack of systematic method- ologies for consistent model refinement. In fact, manual prompt tuning is still widely used in a lot of the application scenarios. Prohibitive Training Most of the well-performing LLMs are too large to be fit in just one or two GPUs. It is technically challenging to optimize the LLMs directly as is done in the the classical reinforcement learning setting. In particular, OpenAI has not provided any solution for RL based finetuning. Most of the issues are caused by the fact that LLMs are not pre-trained or designed with an action-agent application in mind. | 2308.02151#8 | 2308.02151#10 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#10 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | 4 INTUITION Compared to the LLM-based action agents, classical RL agents, though not able to handle text-based environments as nicely in the zero shot setting, are able to keep improving based on the feedback and rewards provided by the environment. Popular RL algorithms include Policy Gradient (Sutton et al., 2000), Proximal Policy Optimization Algorithm (PPO) (Schulman et al., 2017), Trust Region Policy Optimization (TRPO) (Schulman et al., 2015), and Advantage Actor Critic methods (Mnih et al., 2016). In this draft we are proposing a simple but powerful novel framework to tackle the challenges men- tioned above. On one hand, we would like to leverage the classical RL based optimization algorithms such as policy gradient to improve the model performance. On the other hand, our framework avoids finetuning on the LLM directly. The key is, instead of training the LLM directly, we train a retro- spective LM. The retrospective LM takes usersâ prompt, rewards and feedback from the environment as input. Its output will be prompt for the actual LLM to be consumed. RL algorithms are employed to optimize the weights in the retrospective LM model instead of directly on the LLM. In our frame- work the weights in the actual LLM is assumed to be fixed (untrainable), which aligns well with the application scenario when the LLM is either too large to tune or prohibited from any tuning. Another perspective viewing our framework is, we train a retrospective LM to apply automatic prompt tuning for the LLM agents. In this case, the RL algorithms such as policy gradients are employed to optimize the prompts. Ideally the retrospective LM can help summarize the past â ex- perienceâ , the usersâ prompt, the environmentsâ feedback into a condensed text with length limit so that it is easier for the LLM to digest. To some extent, in our setting the original LLM can be considered as part of the environment since its parameters are all fixed. # 5 NOTATION AND FORMULATION We denote a large language model (LLM) based agent as a function Lξl : M â A, where M is the space of prompts, which may include the actual prompts mu provided by the users, as well as some contextual information c â C. Here C is the space of context as a representation of the current state S returned by the environment â | 2308.02151#9 | 2308.02151#11 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#11 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | A is the space of actions. Note that the actions taken by most language-model-based agents are sampled auto-regressively, so L is a random function. The subscript ξl denotes the re-parameterized random variables involved in the sampling process. Another note is that the LLM-based agent itself is stateless. All the states and possible memorization are characterized as text in the prompt. The environment is defined as a tuple (Tξo, R). Tξo : S × A → S is the state transition function, where S is the space of states and A is the action space. Here we assume the states and actions are represented using text. Again, we use ξo to represent the randomness involved in the state transition. | 2308.02151#10 | 2308.02151#12 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#12 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | For each state s ∈ S, a reward function is defined as R : S → ℝ. At each step of the play, the state s is described using natural language and integrated into the context c. In the context, previous states may also be described and embedded to help the LLM make a good guess on the next action to take. As in the standard reinforcement learning setting, the final goal is to maximize the cumulative reward, or return, G_cum = Σ_{t=1}^{T} R(s_t). In many situations, the rewards are sparse, i.e., R(s_t) is mostly zero except at very few states. The retrospective model takes all the previous states s_{1,...,t}, actions a_{1,...,t}, rewards r_{1,...,t}, and the user prompt m^u as input, and condenses this information into a new prompt m to be consumed by the LLM agent: Π_{ξr,Θ} : [s_i, a_i, r_i, m^u]_{i=1}^{t} → M, (1) where ξr stands for the randomness involved in the retrospective model, and Θ is the set of learnable parameters in the retrospective model. The goal of the RL optimization is arg max_Θ E_{ξl, ξo, ξr} [ Σ_{t=1}^{T} R(s_t) ] s.t. s_{t+1} = T_{ξo}( s_t, L_{ξl}( Π_{ξr,Θ}([s_i, a_i, r_i, m^u]_{i=1}^{t}) ) ), t ∈ {1, ..., T−1}. (2) Note that the only learnable parameters are in the retrospective model M_r. Since the LLM agent model is fixed, it can be considered a component of the environment. Specifically, if we construct another environment with the transition function T′ = T(S, ·) ∘ L : S × M → S, and the same reward function R, then the objective in Eq. (2) is just a regular RL optimization, so all the popular RL algorithms apply. # 6 OUR APPROACH: REINFORCING RETROSPECTIVE LANGUAGE AGENT | 2308.02151#11 | 2308.02151#13 | 2308.02151 | [
"2303.17580"
]
|
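The rollout structure behind Eq. (2) can be made concrete with a short sketch. This is an illustrative assumption rather than the paper's code: `actor_llm`, `retro_model`, and `env_step` are hypothetical stand-ins for the frozen actor L, the retrospective model Π_Θ, and the environment (T, R).

```python
# Minimal sketch of one rollout under Eq. (2): the frozen actor only ever sees
# the prompt produced by the retrospective model, so the retrospective model's
# parameters are the only thing the outer RL loop can learn.
from typing import Callable, List, Tuple

Step = Tuple[str, str, float]  # (state s_t, action a_t, reward r_t), states/actions as text

def rollout(actor_llm: Callable[[str], str],                       # frozen L: prompt -> action
            retro_model: Callable[[List[Step], str], str],         # Pi_Theta: (history, user prompt) -> prompt
            env_step: Callable[[str, str], Tuple[str, float, bool]],  # (s, a) -> (s', r, done)
            user_prompt: str, s0: str, max_steps: int = 15) -> float:
    history: List[Step] = []
    s, g_cum = s0, 0.0
    for _ in range(max_steps):
        prompt = retro_model(history, user_prompt)   # condensed, length-limited prompt m
        a = actor_llm(prompt)                        # a_t sampled by the frozen actor
        s_next, r, done = env_step(s, a)             # s_{t+1} = T(s_t, a_t), r = R(s_{t+1})
        history.append((s, a, r))
        g_cum += r                                   # G_cum = sum_t R(s_t)
        s = s_next
        if done:
            break
    return g_cum                                     # return used to train Pi_Theta only
```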
2308.02151#13 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | As illustrated in Fig. 2, our proposed framework Retroformer comprises two language model components: an actor LLM, denoted as Ma, which generates reasoning thoughts and actions, and a retrospective LLM, denoted as Mr, which generates verbal reinforcement cues to assist the actor in self-improvement by refining the actor prompt with reflection responses. [Figure 2 omitted: (a) Retrospective agent; (b) Ratings for reflection responses, r = ΔG_{k,i} = G_{k,i+1} − G_{k,i}.] Figure 2: Framework overview. We assume in this paper that the actor model is a frozen LLM whose model parameters are inaccessible (e.g., GPT, Bard) and the retrospective model is a smaller, local language model that can be fine-tuned under low-resource settings. In addition, Retroformer has an iterative policy gradient optimization step which is specifically designed to reinforce the reflection model with a gradient-based approach. We provide in this section a detailed description of each of these modules and subsequently elucidate their collaborative functioning within the Retroformer framework. The implementation details are presented in Section 7.2. | 2308.02151#12 | 2308.02151#14 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#14 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | 6.1 RETROSPECTIVE AGENT ARCHITECTURE As illustrated in Fig. 2(a), for the base actor model and retrospective model, we apply a standard communication protocol modified from the Reflexion agent architecture (Shinn et al., 2023), in which the retrospective model refines the actor prompt by appending verbal feedback to the prompt. Actor Model The actor model is an LLM hosted in the cloud, whose model parameters are hidden and frozen all the time. The actor LM is instructed to generate actions with the required textual content, taking into account the observed states. Similar to reinforcement learning, we select an action or generation, denoted as a_t, from the current policy π_θ at time step t and receive an observation, represented by s_t, from the environment. We use ReAct (Yao et al., 2023) as our actor prompt: a_{k,i,t} = M_a([s_{k,i,τ}, a_{k,i,τ}, r_{k,i,τ}]_{τ=1}^{t−1}, s_{k,i,t}). (3) Retrospective Model The retrospective model M_r is instantiated as a local LM. Its primary function is to produce self-reflections, offering valuable feedback for diagnosing a possible reason for prior failure and devising a new, concise, high-level plan that aims to mitigate the same failure. Operating under a sparse reward signal, such as binary success status (success/failure), the model detects the root cause of failure by considering the current trajectory alongside its persistent memory. | 2308.02151#13 | 2308.02151#15 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#15 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | y_{k,i} = M_r([s_{k,i,τ}, a_{k,i,τ}, r_{k,i,τ}]_{τ=1}^{T}, G_{k,i}), where the bracketed input constitutes the reflection prompt x_{k,i}. (4) This self-reflection feedback y_{k,i} is appended to the actor prompt to prevent repetitive errors in a specific environment in future attempts. Consider a multi-step task, wherein the agent failed in the prior trial. In such a scenario, the retrospective model can detect that a particular action, denoted as a_t, led to subsequent erroneous actions and final failure. In future trials, the actor LM can use these self-reflections, which are appended to the prompt, to adapt its reasoning and action steps at time t, opting for the alternative action a′_t. This iterative process empowers the agent to exploit past experiences within a specific environment and task, thereby avoiding repetitive errors. Memory Module The actor model generates thoughts and actions by conditioning on its recent interactions (short-term memory) and reflection responses (long-term memory). | 2308.02151#14 | 2308.02151#16 | 2308.02151 | [
"2303.17580"
]
|
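A minimal sketch of the trial loop implied by Eq. (4): reflections produced after failed trials are appended to the actor prompt for the next attempt. The function names (`run_episode`, `retro_llm`) and return signature are hypothetical.

```python
# Reflexion-style protocol: reflection responses accumulate across trials of one task.
def run_trials(actor_llm, retro_llm, run_episode, task: str, n_trials: int = 5):
    reflections = []                                   # long-term memory y_{k,1..i}
    trajectory, episode_return = None, 0.0
    for i in range(n_trials):
        trajectory, episode_return, solved = run_episode(actor_llm, task, reflections)
        if solved:
            break
        x_ki = {"trajectory": trajectory, "return": episode_return}  # reflection prompt x_{k,i}
        y_ki = retro_llm(x_ki)                         # reflection response, Eq. (4)
        reflections.append(y_ki)                       # appended to the actor prompt next trial
    return trajectory, episode_return, reflections
```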
2308.02151#16 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | • Short-term memory. The trajectory history τ_i of the current episode i serves as the short-term memory for decision making and reasoning. • Long-term memory. The reflection responses that summarize prior failed attempts are appended to the actor prompt as the long-term memory. To facilitate policy optimization in Section 6.2, we store the prompts and responses of the retrospective model of each trial, as instruction-response pairs, together with the episode returns in a local dataset, which we call the replay buffer. We sample from the replay buffer to reinforce the retrospective model at the end of each episode. The long- and short-term memory components provide context that is specific to a given task over several failed trials, and the replay buffer provides demonstrations of successful reflections across tasks and environments, so that our Retroformer agent not only exploits lessons learned over failed trials in this task, but also explores by learning from successful reflections in other tasks. | 2308.02151#15 | 2308.02151#17 | 2308.02151 | [
"2303.17580"
]
|
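The replay buffer described above is essentially a collection of (reflection prompt, reflection response, episode return) triplets keyed by task and trial. A plausible minimal data structure is sketched below; the field names are assumptions, not the paper's implementation.

```python
# Sketch of the replay buffer D_RL holding (x_{k,i}, y_{k,i}, G_{k,i}) triplets.
from dataclasses import dataclass, field
from typing import List
import random

@dataclass
class ReflectionRecord:
    task_id: str           # task k
    trial: int             # trial i
    prompt: str            # reflection prompt x_{k,i}
    response: str          # reflection response y_{k,i}
    episode_return: float  # episode return G_{k,i}

@dataclass
class ReplayBuffer:
    records: List[ReflectionRecord] = field(default_factory=list)

    def add(self, rec: ReflectionRecord) -> None:
        self.records.append(rec)

    def sample(self, n: int) -> List[ReflectionRecord]:
        # Sampled at the end of each episode to reinforce the retrospective model.
        return random.sample(self.records, min(n, len(self.records)))
```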
2308.02151#17 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | • Replay buffer. The memory D_RL, which stores the triplets (x_{k,i}, y_{k,i}, G_{k,i}) of the reflection prompt x_{k,i}, reflection response y_{k,i}, and episode return G_{k,i} of trial i and task k. Reward Shaping We apply reward shaping to the binary rewards to obtain more information. For question answering tasks, instead of exactly matching the answer, we use F1-score grading to evaluate the alignment of the generated output with the expected answer as the reward function. 6.2 POLICY GRADIENT OPTIMIZATION The actor model M_a is regarded as a frozen LLM, such as GPT, with inaccessible model parameters. In this scenario, the most direct approach to enhancing actor performance in a given environment is to refine the actor LM's prompt. Consequently, the retrospective model M_r, a smaller local language model, paraphrases the actor's prompt by incorporating a concise summary of errors and valuable insights from failed attempts. We therefore aim to optimize the M_r model using the environment reward. The desired behavior of M_r is to improve the actor model M_a in subsequent attempts. Hence, the difference in episode returns between two consecutive trials naturally serves as a reward signal for iteratively fine-tuning the retrospective model M_r, acting as a surrogate for reinforcement learning aided by gradient-based signals. The specifics of this approach are outlined as follows. [Figure 3 content omitted: example reflection prompts x, reflection responses y from the retrospective LM, and their ratings r fed to the PPO trainer; the caption follows in the next chunk.] | 2308.02151#16 | 2308.02151#18 | 2308.02151 | [
"2303.17580"
]
|
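A sketch of the F1-based reward shaping for question answering: token-overlap F1 between the generated and gold answers replaces the binary exact-match reward. The text normalization here is deliberately simplified and is an assumption, not the paper's exact scorer.

```python
# Token-level F1 between a predicted answer and the gold answer, used as a shaped reward.
from collections import Counter

def f1_reward(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)   # both empty -> 1.0, otherwise 0.0
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```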
2308.02151#18 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Figure 3: Policy gradient optimization of retrospective LM using RLHF training pipeline. Instruction and Response Generation The retrospective model generates a pair of instruction and response at the end of each trial i in the environment k. In trial i, the actor produces a trajectory τ_i by interacting with the environment. The evaluator then produces a score r_i, computed as r_i = M_e(τ_i). After the trial, to produce verbal feedback for refining the actor prompt, M_r takes the set {τ_i, r_i} as the instruction x_{k,i} and is prompted to produce a reflection response y_{k,i}. All these instruction-response pairs (x_{k,i}, y_{k,i}) across tasks and trials are stored in a local dataset D_RL, which we call the "replay buffer", for fine-tuning M_r at the end of each episode. Reflection Response Rating As illustrated in Fig. 2(b), let us assume a reflection prompt x_{k,i} and the corresponding episode return G_{k,i}, and that the retrospective model M_r generates the response y_{k,i} that summarizes the mistakes in trial i, which results in the return G_{k,i+1} in the next attempt i + 1. Because the actor is a frozen LM and the temperature is set to zero as default (Yao et al., 2023), i.e., T = 0, the differences in returns ΔG_{k,i} = G_{k,i+1} − G_{k,i} come from the reflection responses y_{k,i}: positive ΔG_{k,i} indicates better responses that help the actor learn from prior errors, and hence should be rated with higher scores; negative or zero ΔG_{k,i} indicates worse responses that need to be avoided, and hence should be rated with lower scores. Therefore, we define the rating score of a reflection instruction-response pair (x_{k,i}, y_{k,i}) as: r(x_{k,i}, y_{k,i}) = G_{k,i+1} − G_{k,i}. (5) | 2308.02151#17 | 2308.02151#19 | 2308.02151 | [
"2303.17580"
]
|
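Eq. (5) reduces to a simple difference of consecutive episode returns. A sketch, assuming returns are logged per task and trial (the data layout is an assumption):

```python
# Rate each reflection pair (x_{k,i}, y_{k,i}) by the return change it produced, Eq. (5).
def rate_reflections(returns_per_task: dict) -> list:
    """returns_per_task maps task k -> [G_{k,0}, G_{k,1}, ...] over consecutive trials."""
    ratings = []
    for task_id, returns in returns_per_task.items():
        for i in range(len(returns) - 1):
            delta_g = returns[i + 1] - returns[i]   # Delta G_{k,i} = G_{k,i+1} - G_{k,i}
            ratings.append((task_id, i, delta_g))   # positive -> better reflection response
    return ratings
```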
2308.02151#19 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Proximal Policy Optimization The optimization step of Retroformer is formalized as an iterative process in Fig. 3. We use the differences of episode returns as the ratings of the generated reflection responses. The retrospective language model is fine-tuned at the end of each episode following the standard RLHF training procedure (Ouyang et al., 2022) with proximal policy optimization: L_PPO = E_{x∼D_RL} E_{y∼LLM^RL_φ(x)} [ r(x, y) − β log( LLM^RL_φ(y|x) / LLM^Ref(y|x) ) ], (6) where (x, y) are sampled from the replay buffer, r(x, y) is the defined reward model, and the second term in this objective is the KL divergence that makes sure the fine-tuned model LLM^RL does not stray too far from the frozen reference model LLM^Ref. The actor model, the retrospective model, and the policy gradient optimization module work together through trials in a loop until the environment deems τ_t to be correct. In practice, we apply offline RL methods instead of online optimization. We collect the dataset D_RL by rolling out a base policy, i.e., the frozen actor LM and the initialized retrospective LM, on the tasks in the training sets for N trials and compute the ratings. We apply the standard RLHF pipeline to fine-tune the retrospective model offline before evaluating the agent on the validation tasks. In online execution, we use a best-of-n sampler, with the scores evaluated by the learned reward model from the RLHF pipeline, as an alternative method of generating better retrospective responses in each trial. | 2308.02151#18 | 2308.02151#20 | 2308.02151 | [
"2303.17580"
]
|
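A sketch of the best-of-n sampling used in online execution, assuming hypothetical `retro_llm` and `reward_model` callables; the learned RLHF reward model scores each candidate reflection and the best one is kept.

```python
# Best-of-n reflection sampling at inference time.
def best_of_n_reflection(retro_llm, reward_model, reflection_prompt: str, n: int = 4) -> str:
    # Sample n candidate reflections with some temperature to get diversity.
    candidates = [retro_llm(reflection_prompt, temperature=0.9) for _ in range(n)]
    # Score each candidate with the reward model learned during RLHF.
    scores = [reward_model(reflection_prompt, y) for y in candidates]
    return max(zip(scores, candidates))[1]   # keep the highest-scoring reflection
```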
2308.02151#20 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | 7 # 7 EXPERIMENTS Extensive experiments are conducted to evaluate our method, including comparisons with ReAct and Reflexion performances, and visualization and discussion of agentâ s generated text and actions. 7.1 ENVIRONMENT AND DATASET We use open-source HotPotQA (Yang et al., 2018) environment, which consists of search-based question answering tasks to evaluate the agentâ s tool usage abilities under large state-action space. HotPotQA The agent is asked to solve a question answering task by searching in Wikipedia pages. At each time step, the agent is asked to choose from three action types or API calls: 1. | 2308.02151#19 | 2308.02151#21 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#21 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | SEARCH[ENTITY], which searches the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it will return some similar entities to search. 2. LOOKUP[KEYWORD], which returns the next sentence containing keyword in the last passage successfully found by Search. 3. FINISH[ANSWER], which returns the answer and finishes the task. Dataset We collected 3,383 reflection samples by running the base rollout policy for 3 trials (N = 3) for 3,000 tasks in the training set, in which 1,084 instruction-response pairs have positive ratings. IMPLEMENTATION DETAILS Model We use GPT-3 (model: text-davinci-003) as the frozen actor model. For the retrospective model, we instantiate it from LongChat (model: longchat-7b-16k), which is a LM with 16k context length by fine-tuning llama-7b on instruction-following samples from ShareGPT. In all experiments, we set the temperature of actor LM as zero, i.e., T=0 and top p =1 to isolate the randomness of LM from the effects of reflections. We acknowledge that setting a higher temperature value can encourage exploration but it can obscure the impact of the proposed approaches, making it difficult to compare against existing baselines with T=0 (Yao et al., 2023; Shinn et al., 2023). Setup Our proposed learning framework is developed by using multiple open-source tools as fol- lows. We use the OpenAI connectors from langchain to build our actor models Ma. During in- ference of the retrospective model, we host an API server using FastChat and integrates it with langchain agents. The tool can host longchat-7b-16k with concurrent requests to speed up RL pol- icy rollouts. For fine-tuning the retrospective model, we develop our training pipeline with trl, which supports transformer reinforcement learning with PPO trainer. We present the details of the specific prompts we used and the full agent demonstrations and exam- ples for each environment in Appendix A. Training We fine-tune the retrospective model Mr with 4-bit quantized LoRA adapters (r=1) on the offline RL datasets with epochs=4; batch size=8; lr=1.4e-5. The number of trainable parameters is 0.53M (0.015% of llama-7b). | 2308.02151#20 | 2308.02151#22 | 2308.02151 | [
"2303.17580"
]
|
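A sketch of a parser for the three HotPotQA action types. The surface form of the actions (e.g., `Search[entity]`) follows the description above; the exact regex is an assumption.

```python
# Parse an agent-emitted action string into (action_type, argument).
import re

def parse_action(text: str):
    m = re.match(r"\s*(Search|Lookup|Finish)\[(.*)\]\s*$", text, flags=re.IGNORECASE)
    if m is None:
        raise ValueError(f"Unrecognized action: {text!r}")
    return m.group(1).lower(), m.group(2).strip()   # e.g. ("search", "Tara Strong")
```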
2308.02151#22 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | We first run supervised fine-tuning trainer on the samples with positive ratings for 2 epochs and then the RLHF pipeline, including reward modeling, and RL fine- tuning with PPO, on the whole offline rating dataset using the default settings for llama-7b model. Evaluation Metrics We report the success rate and the average episode returns over validation tasks in an environment. The agent is evaluated on 100 validation tasks from the distractor dev split of open-source HotPotQA dataset. Baselines We experiment with two language agent baselines. | 2308.02151#21 | 2308.02151#23 | 2308.02151 | [
"2303.17580"
]
|
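A sketch of the 4-bit LoRA setup described above, using the Hugging Face transformers/peft APIs. The rank r=1 and the training hyperparameters quoted in the comments come from the text; the model id and the remaining LoRA settings are illustrative assumptions.

```python
# Load the retrospective LM in 4-bit and attach LoRA adapters (r=1) for fine-tuning.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "lmsys/longchat-7b-16k",            # assumed checkpoint id for longchat-7b-16k
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)
lora = LoraConfig(r=1, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, lora)       # ~0.5M trainable parameters on a 7B base
# SFT and the RLHF/PPO stages then use epochs=4, batch size=8, lr=1.4e-5 per the text.
```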
2308.02151#23 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | • ReAct (Yao et al., 2023). This is the language agent architecture upon which AutoGPT (Gravitas, 2023) is built. At the same time, this agent does not learn from the environment rewards at all and can serve as a baseline for showing how the actor model performs without exploration and exploitation in the environment. • Reflexion (Shinn et al., 2023). This is the state-of-the-art language agent architecture that the authors identify from the literature so far. This agent improves from verbal feedback of the environment and does not use gradient signals explicitly. It can serve as a baseline for showing the effectiveness of gradient-based learning for reasoning and planning. 7.3 COMPARISONS WITH BASELINES We present the performances of our Retroformer in Table 2 and compare it against the baselines in Fig. 4. As shown in the table, we observe that the generated reflections consistently improve the agent performances over trials and the effects of reflection sampling are significant in the first few trials. This is expected as exploration in the early trials benefits exploration of the state-action space. Table 2: Retroformer success rate (succ%) and average reward in N trials and best-of-n shots reflection sampling scored by the learned reward model in the HotPotQA environment. The standard deviation is calculated for the average rewards over 100 tasks and n shots.
Environment | Settings | 1 shot | 2 shots | 4 shots
---|---|---|---|---
HotPotQA | N=0 (initial trial) | — | 34%/0.43 ± 0.46 | —
 | N=1 | 39%/0.45 ± 0.47 | 42%/0.48 ± 0.46 | 45%/0.52 ± 0.47
 | N=2 | 42%/0.48 ± 0.46 | 45%/0.52 ± 0.47 | 48%/0.54 ± 0.47
 | N=4 | 50%/0.55 ± 0.48 | 52%/0.58 ± 0.46 | 53%/0.60 ± 0.46
| 2308.02151#22 | 2308.02151#24 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#24 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Furthermore, as shown in Fig. 4, our agent outperforms the two strong baselines. The results indicate that our reinforced model provides the language agents with better reflection responses in early trials, which enables the agents to learn faster, while also achieving better performances in the end. Our Retroformer agent achieves a 53% success rate in 5 trials, which is better than the state-of-the-art 50% success rate reported in (Jang, 2023) that uses a much larger language model, i.e., GPT-3 (model: text-davinci-003), as the retrospective component. The results show the effectiveness of our policy gradient approach for fine-tuning the agent with offline samples. [Figure 4 omitted: success rate (%) versus trial number on HotPotQA (100 distractor tasks) for Retroformer + 4-shot sampling, Reflexion, and ReAct.] Figure 4: We compare Retroformer with baselines and observe consistent and faster performance improvement in terms of success rate. 7.4 DISCUSSION OF THE REINFORCED AGENT BEHAVIORS We examine how the retrospective model Mr is improved with policy optimization by comparing the generated responses from the frozen retrospective LM and the ones from the fine-tuned, reinforced LM. We observe two general emerging behaviors that the fine-tuned LM learns to follow through trial and error. These two reinforced behaviors help the agents outperform the baselines in Fig. 4. Precise Credit Assignment for Failed Trials. As shown in Fig. 5, we give an example that illustrates why the reinforced responses better analyze the mistakes made in the prior failed trial and enable the agent to avoid repetitive errors in the next attempt. The agent failed in the last attempt mainly because it submits both " | 2308.02151#23 | 2308.02151#25 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#25 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Teen Tiansâ and â Teen Titans Goâ as the answer to the Teen Titans spin-off series, while the correct answer should only include â Teen Titans Goâ ; The agent forgot its original goal during a chain of thought, actions and lengthy observations. While the reflection response from the frozen model seems to have a lot â should haveâ terms and is very fluent in language, it simply reiterates the prior action sequences that lead to failure as the steps that the agent should have done, which prompts the agent to follow these steps again in the next attempt, resulting in an infinite loop. 9 Question Prior failed trial Reflection (Green: Retroformer; Brown: Frozen LM) Thought 1: | 2308.02151#24 | 2308.02151#26 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#26 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | I need to search Juliet Starling and find the Canadian-American actress who voiced her, then find the Teen Titans spinoff series she has done voice roles for. Action 1: Search{Juliet Starling] Observation 1: Could not find [Juliet Starling] 1 failed to find the Teen Titans spinoff series that Tara Strong has done voice roles for. Instead, I found Lollipop Chainsaw, a video game that Juliet Starling appeared in, I should have searched Tara Strong and Teen Titans or Teen Titans Go! specifically in my previous action to find the correct answer. : Juliet Starling is a character from Lollipop so I can search Lollipop Chainsaw. Lollipop Chainsaw featured Juliet Action 2: Search{Lollipop Chainsaw] Starling, who was voiced bya Observation 2: Lollipop Chainsaw is a Canadian-American actress who has done voice roles for what Teen Thought 3: Juliet Starling is voiced by Tara Strong, so I Titans spinoff series? need to search Tara Strong and find the Teen Titans spinoff series she has done voice roles for. Action 3: Search{Tara Strong] Observation 3: Tara Lyn Strong. My thought process was strong but my execution was flawed. I should have searched for Lollipop Chainsaw first and looked up the Canadian-American actress who voiced Juliet Starling afterwards. I also should have looked up Tara Strong's filmography and searched for any voice roles she did specifically for Teen Titans or Teen Titans Go!.Next trial: | 2308.02151#25 | 2308.02151#27 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#27 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Question: The novel was adapted into a film by a Thought 4: Tara Strong has voiced characters in Teen girector known for his work on another iconic 1980s franchise. What Titans and Teen Titans Gol, so the answer is Teen Titans 9, jg this film based on? Thought |: and Teen Titans Gol. Action 4: Finish[Teen Titans and Teen Titans Go!] Figure 5: Response refinement from the reinforced retrospective model. Note that the lengthy ob- servation step in the prior failed trial column is abbreviated for better presentation purposes. On the contrary, our reinforced response prompts the agent to focus on spinoff series in the very beginning of the response and asks the agent to find the answer in the previous actions and obser- vations with special attention on the search results of â | 2308.02151#26 | 2308.02151#28 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#28 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Tara Strongâ . In summary, our proposed reinforcement learning approach presents better credit assignment and root cause analysis abilities, and has the power to recommend real actionable insights for the next attempt. Frozen model response Reinforced model response I think the issue might have been with the search. I was not able to find the name "Miller" in the paragraph, which is why I was not able to find James Henry Miller's wife. I should have searched for "Miller's wife" instead of "James Henry Miller's wife" to avoid this confusion. I got stuck in a loop where I kept searching for the same person, Henry Miller, instead of Beatrice Sylvas Wickens. It's possible that I may have misheard or misunderstood the question. To avoid this, I should have written down the question and made sure I understood it correctly before beginning the search. Additionally, I should have tried searching for "James Henry Miller's wife" instead of just "James Henry Miller". | 2308.02151#27 | 2308.02151#29 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#29 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | New Plan: To avoid confusion, I will search for "Miller's wife" instead of "James Henry Miller's wifeâ and directly look for the nationality of the wife. Figure 6: Response refinement from the reinforced retrospective model with structured format. Structured Reflections and Action Plans. We discover one emergent behavior of the reinforced model that it can automatically paraphrase the original responses into two separate structured sec- tions, namely Reflection section and New plan: section, although not being explicitly trained or prompted for. One such example is shown in Fig. 6. The paraphrased response retrospects in the first paragraph and provides actionable insights next, while the response from the frozen LM in- terleaved both parts in one paragraph, making it hard to comprehend. We can also observer from Fig. 5 that the reinforced response removes the messy, irrelevant â Next trial:â content in the end for cleaner format, which may very likely result from LLM hallucination. # 8 CONCLUSION In this study, we present Retroformer, an elegant framework for iteratively improving large language agents by learning a plug-in retrospective model. This model, through the process of policy optimization, automatically refines the prompts provided to the language agent with environmental feedback. Through extensive evaluations on real-world datasets such as HotPotQA, the method has been proven to effectively improve the performances of large language agents over time both in terms of learning speed and final task completion performances. By considering the Actor LLM as a component of the environment, our policy gradient approach allows learning from arbitrary reward signals from diverse environments and tasks. This facilitates the iterative refinement of a specific component within the language agent architecture â the retro- spective model, in our case, while circumventing the need to access the Actor LLM parameters or propagate gradients through it. This agnostic characteristic renders Retroformer a concise and adaptable plug-in module for different types of cloud-hosted LLMs, such as GPT and Bard. Fur- thermore, our approach is not limited to enhancing the retrospective model alone; it can be applied | 2308.02151#28 | 2308.02151#30 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#30 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | 10 to reinforce other components within the agent architecture, such as the memory and summarization module, or the actor prompt. By selectively focusing on the component to be fine-tuned while keep- ing the remainder fixed, our proposed policy gradient approach allows for iterative improvements of the component with reward signals obtained from the environment. # REFERENCES Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, et al. | 2308.02151#29 | 2308.02151#31 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#31 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022. # Harrison Chase. Langchain. https://github.com/hwchase17/langchain, 2023. Significant Gravitas. Auto-GPT, 2023. Autogpt. https://github.com/Significant-Gravitas/ Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. | 2308.02151#30 | 2308.02151#32 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#32 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992, 2023. Eric Jang. Can llms critique and iterate on their own outputs? evjang.com, Mar 2023. URL https://evjang.com/2023/03/26/self-reflection.html. Aman Madaan, Alexander Shypula, Uri Alon, Milad Hashemi, Parthasarathy Ranganathan, Yiming Yang, Graham Neubig, and Amir Yazdanbakhsh. Learning performance-improving code edits. arXiv preprint arXiv:2302.07867, 2023a. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023b. Volodymyr Mnih, Adri`a Puigdom`enech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. CoRR, abs/1602.01783, 2016. | 2308.02151#31 | 2308.02151#33 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#33 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | # Yohei Nakajima. Babyagi. https://github.com/yoheinakajima/babyagi, 2023. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christo- pher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730â 27744, 2022. Joon Sung Park, Joseph C Oâ Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. | 2308.02151#32 | 2308.02151#34 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#34 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023. Timo Schick, Jane Dwivedi-Yu, Roberto Dess`ı, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. | 2308.02151#33 | 2308.02151#35 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#35 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023. John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017. 11 Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. arXiv preprint Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv:2303.17580, 2023. Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. | 2308.02151#34 | 2308.02151#36 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#36 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366, 2023. Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, and Sergey Levine. Offline rl for natural language generation with implicit language q learning. arXiv preprint arXiv:2206.11871, 2022. R. S. Sutton, D. Mcallester, S. Singh, and Y. Mansour. | 2308.02151#35 | 2308.02151#37 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#37 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, volume 12, pp. 1057â 1063. MIT Press, 2000. Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018. URL http://incompleteideas.net/book/the-book-2nd. html. Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, and Nathan Lambert. Trl: Transformer reinforcement learning. https://github.com/lvwerra/trl, 2020. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question In Conference on Empirical Methods in Natural Language Processing (EMNLP), answering. 2018. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023. | 2308.02151#36 | 2308.02151#38 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#38 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | 12 Appendix for # â Retroformer: Retrospective Large Language Agents with Policy Gradi- ent Optimizationâ A FULL EXAMPLES A.1 ACTOR PROMPT ENGINEERING An example of the HotPotQA actor language model prompt is shown below. Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, and Action can be three types: (1) Search[entity], which searches the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it will return some similar entities to search. (2) Lookup[keyword], which returns the next sentence containing the keyword in the last passage successfully found by Search. (3) Finish[answer], which returns the answer and finishes the task You may take as many steps as necessary. You have attempted to answer the following question before and failed. The following reflection(s) give a plan to avoid failing to answer the question in the same way you did previously. Use them to improve your strategy of correctly answering the given question. Reflections: - I should have searched for the title of the book instead of the author. I also didn't check for the proverb which was not the question. - I should have searched for "It Takes a Family by Rick Santorum" instead of just "It Takes a Family" to get the correct answer. - My search was not concise and I did not specify what I was looking for. I also did not check similar results. In this case, I should have searched for 'It Takes a Family by Rick Santorum' to find the correct answer. Question: It Takes a Family is a response to this 1996 book that was published by who Thought 1: I need to search for the title of the book, "It Takes a Family", and find who published it. Action 1: Search["It Takes a Family"] Observation 1: | 2308.02151#37 | 2308.02151#39 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#39 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Richard John Santorum ( san-TOR-om; born May 10, 1958) is an American politician, attorney, author, and political commentator who represented Pennsylvania in the United States Senate from 1995 to 2007. He was the Senate's third-ranking Republican during the final six years of his tenure. He also ran unsuccessfully for President of the United States in the 2012 Republican primaries, finishing second to Mitt Romney.Santorum was elected to the United States Senate from Pennsylvania in 1994, He served two terms until losing his 2006 reelection bid. A Roman Catholic, Santorum is a social conservative who opposes abortion and same-sex marriage and embraced a cultural warrior image during his Senate tenure. While serving as a senator, Santorum authored the Santorum Amendment, which would have promoted the teaching of intelligent design. He was a leading sponsor of the 2003 federal law known as the Partial-Birth Abortion Ban Act.In the years following his departure from the Senate, Santorum has worked as a consultant, private practice lawyer, and news contributor. He ran for the Republican nomination in the 2012 U.S presidential election. Before suspending his campaign on April 10, 2012, Santorum exceeded expectations by winning 11 primaries and caucuses and receiving nearly four million votes, making him the runner-up to eventual nominee Mitt Romney. Santorum ran for president again in 2016, but ended his campaign in February 2016 after a poor showing in the Iowa caucuses. In January 2017 he became a CNN senior political commentator. However, he was terminated from his contract with CNN in May 2021 due to comments he made about Native Americans a few weeks prior which were deemed "dismissive". | 2308.02151#38 | 2308.02151#40 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#40 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Thought 2: The paragraph mentions Rick Santorum, so I need to search "It Takes a Family by Rick Santorum". Action 2: Search["It Takes a Family by Rick Santorum") Observation 2: It Takes a Family is a 2005 book by then Pennsylvania Senator Rick Santorum. The title is a response to the 1996 book It Takes a Village by then-First Lady Hillary Clinton. In the book, Santorum states that the family structure is necessary. He argues that liberal social policies have devastated the family structure and that prevailing socially liberal attitudes have systematically devalued families across the board. He argues that the government should take a proactive role in promoting strong families. | 2308.02151#39 | 2308.02151#41 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#41 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Thought 3: It Takes a Family was published by Rick Santorum, so the answer is Rick Santorum. Action 3: Finish[Rick Santorum] Observation 3: Answer is INCORRECT 13 A.2 RETROSPECTIVE PROMPT ENGINEERING # An example of the HotPotQA retrospective instruction prompt is shown below. You are an advanced reasoning agent that can improve based on self reflection. You will be given a previous reasoning trial in which you were given access to an Docstore API environment and a question to answer. You were unsuccessful in answering the question either because you guessed the wrong answer with Finish[<answer>], or you used up your set number of reasoning steps. In a few sentences, Diagnose a possible reason for failure and devise a new, concise, high level plan that aims to mitigate the same failure. Use complete sentences. Here are some examples. # [Few-Shot Demonstrations] # Previous trial: Question: The 2016 Washington State Cougars were led by the coach who previously helmed which other team? Thought: I need to search Washington State Cougars and find the coach of the 2016 team, then find which other team the coach previously helmed. Action: Search[Washington State Cougars] Observation: The Washington State Cougars (known informally as the Cougs) are the athletic teams that represent Washington State University. Located in Pullman, Washington, WSU is a member of the Pac-12 Conference in NCAA Division I. The athletic program comprises ten women's sports and six men's intercollegiate sports, and also offers various intramural sports. | 2308.02151#40 | 2308.02151#42 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#42 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Thought: The paragraph does not mention the coach of the 2016 team. I need to look up "coach" to find the coach of the 2016 team. # Action: # Lookup[coach] Observation: (Result 2/5) During the 2001-03 seasons, the Cougar football teams were distinguished by three ten-win seasons, three top ten poll rankings, and appearances in the Sun, Rose, and Holiday Bowls. The Cougars shared the Pac-10 title in 2002. Alumnus Paul Wulff, WSU's 3lst head coach, was fired in late November 2011, after compiling a 9-40 (.184) record in four seasons. Mike Leach coached the team from 2012 to 2019, guiding them to six bowl games. Leach left to coach Mississippi State in 2020 and was replaced by Hawaii head coach Nick Rolovich. Rolovitch was fired in October 2021 for not complying with the state's COVID-19 vaccine mandate. Defensive coordinator Jake Dickert was named the interim head coach. | 2308.02151#41 | 2308.02151#43 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#43 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Thought: The 2016 Washington State Cougars were led by Mike Leach. I need to search Mike Leach and find which other team he previously helmed. # Action: # Search[Mike # Leach] Observation: Michael Charles Leach (March 9, 1961 - December 12, 2022) was an American college football coach who primarily coached at the NCAA Division I FBS level. He was a two-time national coach of the year, three-time conference coach of the year and the mastermind behind the NCAA record-setting air raid offense. He was the head coach at Texas Tech University from 2000 to 2009, where he became the winningest coach in school history. After Texas Tech, he coached at Washington State University from 2012 to 2019, where he recorded the third-most wins of any coach in school history. He then coached at Mississippi State from 2020 until his death in 2022.Leach was known for directing offenses using lots of passing to several receivers, in a spread system known as the air raid, which Leach developed with Hal Mumme when Mumme was head coach and Leach was offensive coordinator at Iowa Wesleyan, Valdosta State, and Kentucky in the 1990s. Leach's offenses with Mumme, and later as a head coach himself, have broken numerous school and NCAA records. On eighteen occasions, Leach led his unranked team to victory over a team ranked in the AP poll, which is the most in the AP era. | 2308.02151#42 | 2308.02151#44 | 2308.02151 | [
"2303.17580"
]
|
2308.02151#44 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Thought: Mike Leach previously helmed Texas Tech University and Kentucky. So the answer is Texas Tech University and Kentucky. # Action: # Finish[Texas # Tech # University # and # Kentucky] # Observation: # Answer is # INCORRECT Reflection: 14 | 2308.02151#43 | 2308.02151 | [
"2303.17580"
]
|
|
2308.01552#0 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | 3 2 0 2 g u A 3 ] I A . s c [ 1 v 2 5 5 1 0 . 8 0 3 2 : v i X r a # InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent Po-Lin Chen and Cheng-Shang Chang, Fellow, IEEE Abstractâ This research paper delves into the integration of OpenAIâ s ChatGPT into embodied agent systems, evaluating its influence on interactive decision-making benchmark. Drawing a parallel to the concept of people assuming roles according to their unique strengths, we introduce InterAct. In this approach, we feed ChatGPT with varied prompts, assigning it a numerous roles like a checker and a sorter, then integrating them with the original language model. Our research shows a remarkable success rate of 98% in AlfWorld, which consists of 6 different tasks in a simulated household environment, emphasizing the significance of proficient prompt engineering. The results high- light ChatGPTâ s competence in comprehending and performing intricate tasks effectively in real-world settings, thus paving the way for further advancements in task planning. Keywords: ChatGPT, AlfWorld, Task planning, InterAct. # I. INTRODUCTION | 2308.01552#1 | 2308.01552 | [
"2206.07682"
]
|
|
2308.01552#1 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | The advent of large language models (LLMs), underpinned by transformative advancements in natural language process- ing (NLP), has stimulated a revolution across a wide range of applications. Exemplified by models such as Transformer [1], T5 [2], GPT-4 [3], these language models have achieved impressive results in diverse tasks like paragraph summary, language translation, and code optimization. These achieve- ments can be attributed to their ability to absorb and process massive amounts of data, making sense of the patterns and structures within the text. ChatGPT [4] is an AI language model created by OpenAI, which has been trained using a combination of pretraining and fine-tuning with human feedback. This advanced model is built on Transformer model, enabling it to produce responses that closely resemble human language. By undergoing exten- sive training on vast volumes of text data, ChatGPT excels in understanding and generating text in various languages and fields, answering queries, and engaging in dialogues. Unlike its predecessors that operate primarily based on a single prompt, ChatGPT combines text generation with code syn- thesis, thereby significantly enhancing its interactive abilities. In this paper, we assess the ability of ChatGPT to make decisions within the context of an AlfWorld simulated envi- ronment [5]. The aim is to understand the modelâ s proficiency in absorbing and processing data to make rational decisions. Scholarly works such as ReAct [6] and Reflexion [7] showcase the decision-making, action-initiation, and reflective powers of LLMs, paving the way for remarkable progress in a range of text-based performance metrics. | 2308.01552#0 | 2308.01552#2 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#2 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | However, they all utilize a single language model (InstructGPT) which, despite numerous iterations of thought and reflection, often repeatedly commits the same mistakes. In this research, we devise a novel model, InterAct, which is founded on the architecture of the ReAct model [6]. It undergoes alterations in prompt formulations, incorporates different ChatGPT for support. In particular, we add a checker module to tackle the issue of object misidentifi- cation. The initial basic prompt has also been revised to bolster InterActâ s capabilities in constructing comprehensive search paths. This approach effectively addresses the previously men- tioned shortcomings of the ReAct model. Consequently, this approach yielded a success rate of 98% in this benchmark, a significant improvement from the base ReAct agentâ s accuracy of 75%. | 2308.01552#1 | 2308.01552#3 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#3 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | These experiments provide critical insights into the potential benefits and limitations of implementing ChatGPT in AI-driven systems and technologies. In conclusion, the main insight of the paper is the advance- ment of AI language models like ChatGPT presents an excit- ing opportunity to revolutionize and reshape our interaction with technology. By leveraging these models, we can build more intuitive, responsive, and smart technologies that can effectively understand and respond to human requirements. The key contributions of our research are summarized below: (1) We introduce InterAct, an improved method where each agent, like ChatGPT, can showcase unique abilities, adeptly rectifying the limitations found in the ReAct model, such as object misidentification and inefficient planning. (2) We have designed new trajectory prompts that enable the agent to flawlessly locate items during its search process. In a decision-making test within the AlfWorld sim- ulated environment, InterAct demonstrated a 98% success rate, significantly higher than the 75% accu- racy of the base ReAct agent, suggesting its potential benefits in AI-centric systems and technologies. | 2308.01552#2 | 2308.01552#4 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#4 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | The authors are with the Institute of Communications Engineering, National Tsing Hua University, Hsinchu 300044, Taiwan R.O.C. Email: [email protected]; [email protected]. This work was supported in part by the National Science and Technology, Taiwan, under Grant 111-2221-E-007-045-MY3, and in part by Qualcomm Technologies under Grant SOW NAT-487844-2. # II. RELATED WORK Transformers have emerged as the dominant architecture in various fields. Initially prominent they have now extended their influence to include vision-based tasks [11], [12] and even reinforcement learning [13], [14]. In the realm of robotics, Transformers have found practical applications in diverse areas such as path planning [15], [16], object recognition [17], and grasping [18]. One notable example is RT-1 [19], which takes the uti- lization of Transformers that takes images from a robotâ s camera and natural language task instructions as inputs and directly outputs tokenized actions. RT-1 can also acquire new skills by observing other robotsâ experiences, opening opportunities for enhanced robot capabilities through multi- robot datasets. Another instance is SayCan [20], a study conducted by Googleâ s AI team and Everyday Robots. This research employs PaLM [21] and an affordance function to empower robots to carry out complex tasks based on natural language instructions. The resulting system, PaLM-SayCan, transforms user instructions into actionable plans for the robot. Inner Monologue [22] has made further advancements by incorporating injected feedback from the environment. The work in [23] demonstrated that even without any training, siz- able language models can be effectively prompted to produce credible action plans driven by goals. They also suggested multiple techniques to enhance the modelâ s ability to generate executable outputs, all without the need for invasive probing or modifications to the underlying model. GPT for Robotics Moreover, recent publications, includ- ing [24], [25], and [26], have successfully incorporated models such as ChatGPT and GPT3.5 into the realm of robotics ap- plications. These advancements facilitate interaction between the models and the environment or users, allowing for the correction of the robotâ | 2308.01552#3 | 2308.01552#5 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#5 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | s behavior. These papers showcase various prompts and outline a pipeline for the implementation of ChatGPT in robotics tasks. Additionally, they conduct experimental evaluations to assess ChatGPT's capability to execute a wide range of robotics tasks while striving to bridge the gap between natural language and actionable robot actions. The process of reasoning in robotics involves breaking down complex tasks into simpler subtasks that can be more easily solved by the LLM itself or with the aid of tools. Various approaches [27], [28] have been introduced to enable natural language agents to select their next action in text-based environments. One prominent approach is Chain-of-thought (CoT) reasoning, as proposed in [29]. This approach leverages emergent properties, such as reasoning and commonsense, to solve tasks through multiple steps. It enables the LLM to reason through a series of intermediate actions, leading to the desired outcome. Another approach called faithful reasoning, introduced in [30], decomposes multi-step reasoning into three distinct steps, each handled by a dedicated LLM. By dividing the task into these steps, faithful reasoning facilitates the LLM's ability to tackle complex computations effectively. A similar approach is Scratchpad [31], which involves fine-tuning an LLM on intermediate computation steps, resulting in improved performance on multi-step computation problems. The Describe, Explain, Plan, and Select (DEPS) approach, introduced in [32], was specifically developed to tackle the unique challenges of planning in open-ended environments such as Minecraft. This innovative system adeptly manages intricate tasks that demand meticulous, multi-step reasoning, effectively prioritizing sub-goals according to the agent' | 2308.01552#4 | 2308.01552#6 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#6 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | s proximity. Notably, DEPS has exhibited remarkable results in enhancing the success rate of Minecraft tasks by offering insightful explanations for errors encountered during sub-task execution. As a groundbreaking planning agent, DEPS has achieved an unprecedented positive success rate in conquering the formidable ObtainDiamond task, marking a significant milestone in the field. A different strategy called DERA [33] presents an alternative approach by structuring a dialogue as a conversation between two agent types: "Researcher" and "Decider." The Researcher agent analyzes information and identifies key components of the problem, while the Decider agent autonomously combines the Researcher's insights and makes judgments on the final output. This approach has demonstrated notable enhancements compared to the baseline performance of GPT-4 [3] in evaluations conducted by human experts and quantitative metrics. Particularly, DERA has showcased significant advancements in safety-critical domains like healthcare. Additionally, the studies by [7], [34] have also incorporated reflection actions into the model. These reflection actions allow the model to refine its actions based on feedback received during the execution of tasks. By iteratively adjusting its actions and incorporating self-feedback, the model can improve its decision-making process and adapt to changing conditions. Our research aims to provide additional evidence supporting the effectiveness of ChatGPT in language-conditioned robotic learning, while simultaneously introducing novel architectures that facilitate reasoning through the coordination of various roles performed by LLMs. | 2308.01552#5 | 2308.01552#7 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#7 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | # III. METHOD: INTERACT STRUCTURE In this section, we use the AlfWorld benchmark to test ChatGPT's reasoning capabilities, examining how it accomplishes household tasks step by step when provided only with a few-shot example. We will use not only ChatGPT but also a similar language model called InstructGPT (text-davinci-002). InstructGPT is particularly adept at tasks demanding succinct responses or benefiting from k-shot examples. In this particular task, unlike the previous demonstration, the model is required to integrate task-oriented actions with verbal reasoning. The model needs to possess the ability to think and reason like a human. When faced with dead ends, the model should be capable of adjusting its planning based on logical reasoning. | 2308.01552#6 | 2308.01552#8 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#8 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | A. AlfWorld Dataset AlfWorld is a suite of text-based environments that challenge an agent to solve multi-step tasks in a variety of interactive environments with the ALFRED [35] benchmark. The ALFRED benchmark focuses on tasks that require an agent to accomplish high-level goals in a simulated household environment by navigating and interacting through text-based actions. In AlfWorld, there are six types of tasks that challenge the agent's ability to plan, track subgoals, and explore systematically. For example, a task in AlfWorld could be to " | 2308.01552#7 | 2308.01552#9 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#9 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | examine a paper under a desklamp." To achieve this goal, the agent needs to navigate to specific locations within the simulated household and interact with objects using text commands. The agent might need to issue commands like "go to coffeetable 1," "take paper 2," and "use desklamp 1" to complete the task. The complexity of the tasks in AlfWorld is intentionally designed to be challenging. Task instances can have more than 50 locations and may require more than 50 steps for an expert policy to solve. This complexity encourages the agent to effectively plan its actions, keep track of subgoals, and explore the environment systematically. For example, the agent may need to check all desks one by one to find the desklamp. One of the challenges presented in AlfWorld is the need to determine likely locations for common household items. For instance, a desklamp is likely to be found on desks, shelves, or dressers. This aspect of the environment provides an opportunity for LLMs to leverage their pretrained commonsense knowledge to make informed decisions about the likely locations of objects. In each environment of AlfWorld, the agent has the option to select an action from a list of permissible actions, denoted as A_t at time step t. Upon executing an action, the agent receives an observation, O_t, and a reward, R(s_t, a_t), from the environment, which then determines the next state of the agent. AlfWorld offers a diverse set of six tasks and a total of over 3000 unique environments. These environments test the agent's ability to understand the task at hand, formulate a sequential plan consisting of subtasks, and carry out the necessary actions within the given environment. In our trials, we utilize the ReAct problem-solving strategy [6], which has demonstrated superior performance across a wide array of sequential decision-making tasks. ReAct is a strategy that allows the agent to reason and act by articulating its current thoughts and performing actions based on these thoughts. At each time step, the agent has the option to execute a < think > action to verbalize its internal thought process, or an < action > to induce a response from the environment. The set of possible actions in each state is not explicitly defined, providing the agent with full autonomy in determining its next moves. To prevent syntactic errors, we provide the agent with two domain-specific few-shot trajectories. | 2308.01552#8 | 2308.01552#10 | 2308.01552 | [
"2206.07682"
]
|
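The interaction protocol described in the chunk above (admissible actions A_t, observations O_t, rewards R(s_t, a_t), and the ReAct-style alternation of < think > and < action > steps) can be made concrete with a short sketch. This is an illustrative Python sketch rather than code from the paper: `call_llm` and the `env` object with its `task_description()` and `step()` methods are hypothetical stand-ins for the LLM API and an AlfWorld environment wrapper.

```python
# Illustrative ReAct-style loop for AlfWorld; not code from the paper.
# `call_llm` and `env` are hypothetical stand-ins for the LLM API and environment.

def call_llm(prompt: str) -> str:
    """Stand-in for an InstructGPT/ChatGPT call; replace with a real API client."""
    raise NotImplementedError

def react_episode(env, few_shot_prompt: str, max_steps: int = 50) -> bool:
    prompt = few_shot_prompt + env.task_description()
    for _ in range(max_steps):
        step = call_llm(prompt).strip()         # e.g. "think: ..." or "go to desk 1"
        if step.startswith("think:"):
            observation = "OK."                 # thoughts only extend the context
        else:
            observation, reward, done = env.step(step)   # admissible text action
            if done:
                return reward > 0               # episode ends, success if reward > 0
        prompt += f"\n> {step}\n{observation}"  # append the trajectory so far
    return False
```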
2308.01552#10 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | # B. Model architecture We introduced a novel model called InterAct, which is built upon the foundation of ReAct. The architectural diagram of InterAct can be observed in Figure 1. While ReAct has demonstrated impressive accuracy in diverse decision-making and knowledge-intensive tasks, it occasionally encounters common errors, including Perception Error, Object Misidentification, and Inefficient Planning. In simpler terms, although ReAct achieves state-of-the-art performance overall, there exists a small subset of tasks that remain unsolved due to minor imperfections in a single model. | 2308.01552#9 | 2308.01552#11 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#11 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Fig. 1. The architecture of both ReAct and InterAct. InterAct involves the integration of LLM with various agents to facilitate smoother interaction with the environment. To address these challenges, InterAct leverages the combined strength of agents with distinct purposes, such as the checker and sorter, to compensate for a single model that is susceptible to errors. In addition, we have modified the original basic prompt to enhance InterAct's ability to plan comprehensive search paths when looking for multiple items, ensuring that no possible locations are overlooked. This optimization greatly improves the efficiency of the tasks being performed. Sorter: When processing environmental data, ReAct initially needs to determine the likelihood of objects appearing in specific locations. However, this ranking process often falls short, leading to less efficient planning. This inefficiency may arise from the fact that the InstructGPT model (text-davinci-002) is not sufficiently trained in factual knowledge and common-sense reasoning. On the other hand, ChatGPT has been fine-tuned using Reinforcement Learning with Human Feedback (RLHF) and has demonstrated a more nuanced understanding of various situations. It excels at making well-informed decisions, as depicted in Figure 2. To improve the efficiency of predicting object locations, we integrate ChatGPT as a decision-making component. Whenever ReAct requires this procedure, it can autonomously utilize ChatGPT, thus enhancing the effectiveness of its object search operations. | 2308.01552#10 | 2308.01552#12 | 2308.01552 | [
"2206.07682"
]
|
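As a rough illustration of the sorter described above, the following sketch shows one way ChatGPT could be prompted to rank candidate receptacles by how likely they are to contain a target object. The prompt wording, the `call_llm` stub, and the `sort_locations` function name are assumptions for illustration, not the paper's exact implementation.

```python
# Hypothetical "sorter" helper: ChatGPT ranks candidate locations by how likely
# they are to contain the target object (illustrative sketch only).

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for a ChatGPT API call

def sort_locations(target: str, locations: list) -> list:
    prompt = (
        f"Rank the following locations from most to least likely to contain "
        f"a {target}. Reply with a comma-separated list only.\n"
        f"Locations: {', '.join(locations)}"
    )
    ranked = [loc.strip() for loc in call_llm(prompt).split(",")]
    # Keep any locations the model dropped, in their original order, so the
    # actor can still visit every candidate.
    return ranked + [loc for loc in locations if loc not in ranked]
```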
2308.01552#12 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Fig. 2. The left image was generated using text-davinci-002 for search ranking, while the right image was generated using ChatGPT. It can be observed that ChatGPT exhibits higher logical reasoning in finding objects compared to text-davinci-002. Checker: Another issue with text-davinci-002 is that it tends to mistakenly categorize similar objects as the same. For example, it might treat a pan and a pot as identical items, leading to the problem of Object Misidentification, as depicted in Figure 3. | 2308.01552#11 | 2308.01552#13 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#13 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | To address this issue, we employ ChatGPT as a checker by providing it with appropriate prompts. We have observed that ChatGPT can successfully distinguish between similar objects. Furthermore, we utilize the results from this checker as observations and feed them back to the LLM, as illustrated in Figure 1. This approach helps us resolve the problem related to object misidentification. Fig. 3. Object Misidentification. In this scenario, the objective is to locate a pan; however, ReAct mistakenly misidentifies another object as the pan. In the AlfWorld environment, we encountered a mission type named " | 2308.01552#12 | 2308.01552#14 | 2308.01552 | [
"2206.07682"
]
|
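The checker described in the surrounding chunks can likewise be sketched as a small helper: given the target object and the raw observation at the current location, ChatGPT decides whether the object is genuinely present, rejecting look-alikes such as a pot when a pan is required. The "is not here" message mirrors the appendix prompts, but the prompt wording and the `call_llm` stub are assumptions.

```python
# Hypothetical "checker" helper: ChatGPT verifies that the target object (and not
# a similar-looking one, e.g. a pot instead of a pan) appears in the observation.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for a ChatGPT API call

def check_observation(target: str, observation: str) -> str:
    prompt = (
        f"Observation: {observation}\n"
        f"Does this observation contain a {target}? Treat similar but different "
        f"objects (for example, a pot when looking for a pan) as a no. "
        f"Answer yes or no."
    )
    if call_llm(prompt).strip().lower().startswith("yes"):
        return observation               # pass the real observation to the actor
    return f"{target} is not here."      # message format used in the appendix prompts
```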
2308.01552#14 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | pick 2", where the agent is required to find two identical objects. We observed that ReAct alone tends to forget its previous locations, resulting in inefficient trajectories characterized by frequent revisits to the same place. In some instances, this led to hallucinations, defined as consecutive identical actions with the environment responding similarly. To address this issue, we made changes to the original model's prompt. After finding the first object and placing it in the corresponding receptacle, we allow the model to autonomously generate a trajectory while ensuring that this path does not overlook areas where the second object might be present, as shown in Figure 4. For more details about the prompts, we refer the reader to Appendix A. # IV. EVALUATION In this section, we present a comparative analysis of the performance enhancement provided by the helpers (sorter or checker) and the new trajectory planning when compared to the baseline model. Our findings demonstrate that InterAct consistently outperforms ReAct on AlfWorld (as shown in Table I) across all tasks. On AlfWorld, the top-performing InterAct trial achieves an impressive average success rate of 98%, falling short in only 2 out of 134 tasks. This performance is significantly better than the best trials of ReAct (73%) and BUTLER (37%). Indeed, InterAct has demonstrated exceptional proficiency in handling these tasks, as evidenced by achieving a 100% success rate in four out of the six tasks. This performance showcases InterAct's remarkable ability to effectively manage and succeed in various tasks. Notably, even when ReAct is augmented only with a checker or sorter, the overall average performance surpasses that of ReAct without helpers by a significant margin. The tasks that show the most substantial improvement are "pick2" and "clean," with approximate gains of 47% and 41%. From a qualitative standpoint, we observed that ReAct, without any helper, faces difficulties in accurately determining the presence of items in a specific location or employing ineffective search strategies. | 2308.01552#13 | 2308.01552#15 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#15 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Your task is to: find two pillow and put them in sofa. Act 2: think: First I need to find the first [[pillow]]. A pillow is more likely to appear in armchair (1), cabinet (1-4), drawer (1-5), dresser (1), garbagecan (1), safe (1), shelf (1-12), sidetable (1), sofa (1). I can check one by one, starting with armchair 1. Obs 2: OK. Act 3: go to armchair 1 Obs 3: On the armchair 1, you see a pillow 2, a pillow 1, and a remotecontrol 1. Act 4: think: Now I find the first pillow (1). Next, I need to take it. Obs 4: OK. Act 5: take pillow 1 from armchair 1 Obs 5: You pick up the pillow 1 from the armchair 1. Act 6: think: Now I take the first pillow (1). Next, I need to put it in/on sofa 1. Obs 6: OK. Act 7: go to sofa 1 Obs 7: On the sofa 1, you see nothing. Act 8: put pillow 1 in/on sofa 1 Obs 8: You put the pillow 1 in/on the sofa 1. Act 9: think: Now I put the first pillow in sofa. Next, I need to find the second [[pillow]]. I have visited armchair (1), so the second pillow is more likely to appear in cabinet (1-4), drawer (1-5), dresser (1), garbagecan (1), safe (1), shelf (1-12), sidetable (1), sofa (1). I can check one by one, starting with cabinet 1. Obs 9: | 2308.01552#14 | 2308.01552#16 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#16 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | OK. Your task is to: find two pillow and put them in sofa. Obs 6: On the armchair 1, you see a cellphone 2, a pillow 2, and a pillow 1. Act 7: think: Now I find the first pillow (1). Next, I need to take it. Obs 7: OK. Act 8: take pillow 1 from armchair 1 Obs 8: You pick up the pillow 1 from the armchair 1. Act 9: think: Now I take the first pillow (1). Next, I need to put it in/on sofa 1. Obs 9: | 2308.01552#15 | 2308.01552#17 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#17 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | OK. Act 10: go to sofa 1 Obs 10: On the sofa 1, you see a newspaper 3. Act 11: put pillow 1 in/on sofa 1 Obs 11: You put the pillow 1 in/on the sofa 1. Act 12: think: Now I put the first pillow in sofa. Next, I need to find the second [[pillow]]. I should go back to armchair 1 to see if there is more pillow. If not, I will continue to check drawer (1-5), sidetable (1), cabinet (1-4), dresser (1), shelf (1-12), garbagecan (1), safe (1) one by one to find pillow. | 2308.01552#16 | 2308.01552#18 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#18 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Obs 12: OK. Act 13: go to armchair 1 Obs 13: On the armchair 1, you see a cellphone 2, and a pillow 2. Act 14: think: Now I find the second pillow (2). Next, I need to take it. Obs 14: OK. Obs 18: You put the pillow 2 in/on the sofa 1. Fig. 4. Trajectory planning. In the initial scenario, the agent fails to retrieve the second pillow from the armchair after placing the first pillow on the sofa. Consequently, the agent cannot find the second pillow, resulting in an incomplete task. In the revised scenario, InterAct addresses this issue by considering the future search trajectory. It prioritizes returning to the armchair to search for the second pillow before exploring the other areas. This approach improves the chances of successfully locating the second pillow and completing the task. TABLE I: AlfWorld task-specific success rates (%). Task columns: Pick, Clean, Heat, Cool, Look, Pick2. BUTLERg: 33, 6, 70, 76, 17, 12. BUTLER: 65, 39, 83, 76, 55, 24. Act: 88, 41, 76, 67, 73, 43. ReAct: 88, 55, 90, 81, 75, 53. ReAct+checker: 85, 81, 100, 87, 92, 75. ReAct+sorter: 84, 76, 88, 73, 80, 67. InterAct: 100, 96, 100, 94, 100, 100. All column (per method, in the same order): 46, 57, 46, 73, 86, 78, 98. # V. | 2308.01552#17 | 2308.01552#19 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#19 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | DISCUSSION AND LIMITATIONS A. Scalability of InterAct Our InterAct model is scalable and adaptable to different datasets and scenarios. For instance, if there's a need for a feature similar to "memories," we can develop an interpreter to describe the current path, among other things, without having to train numerous different language models. This is possible because ChatGPT serves as an excellent backbone for such extensions. B. Error assessment with a supervisor module Despite achieving an impressive average performance of 98% on the AlfWorld dataset, our analysis of failed trajectories uncovered certain limitations. One notable drawback is the model's heavy reliance on prompt completeness within InterAct. When our examples contain missing or unaddressed components, the model fails to detect these errors, resulting in repetitive actions, even for trivial mistakes. To overcome this issue, we explored the possibility of using an alternative ChatGPT model as a supervisor to identify such errors. | 2308.01552#18 | 2308.01552#20 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#20 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | However, it's important to acknowledge that the accuracy of the supervisor's judgment cannot be guaranteed, and there may be occasional misidentifications leading to "action errors." In order to tackle the challenge of error detection, we conducted a comparison between ChatGPT and GPT-4. The results demonstrated a significant improvement in error detection performance with GPT-4. Unfortunately, GPT-4 is currently unavailable as an open-source model and cannot be accessed free of charge. Conducting extensive simulations using GPT-4 requires funding support. | 2308.01552#19 | 2308.01552#21 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#21 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | C. Insufficiency of the dataset While AlfWorld is a valuable platform for assessing AI performance, it has certain limitations. Primarily, it encompasses only six types of tasks, and even within these categories, the task quantity is quite limited. These restrictions neither fully test nor make optimal use of the AI systems' capabilities. If we move to an environment offering a larger range and diversity of tasks, as well as a broader and more varied set of locations, our model will still need improvement to maintain its current level of accuracy. This aspect will be our focus for future research. | 2308.01552#20 | 2308.01552#22 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#22 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | # VI. CONCLUSION Our research is centered on enhancing the task planning capabilities of large language models. We developed a new model, InterAct, built upon the framework of the ReAct model. InterAct is a culmination of various "helpers" (like checkers and sorters) and aims to improve upon the existing trajectory. We evaluated this framework in the AlfWorld simulated environment, where it showed a substantial increase in decision-making accuracy, soaring from 75% to an impressive 98%. This highlights the vast potential of these models in AI-driven systems and technologies. In essence, this study underscores the revolutionary potential of AI language models like ChatGPT and their pivotal role in shaping future real-world interactions. As we continue to delve into their capabilities, we are on the cusp of a new technological era marked by not only intelligence but also intuitiveness and responsiveness to human needs. | 2308.01552#21 | 2308.01552#23 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#23 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | # REFERENCES [1] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Å . Kaiser, and I. Polosukhin, â Attention is all you need,â Advances in Neural Information Processing Systems, vol. 30, 2017. [2] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, â | 2308.01552#22 | 2308.01552#24 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#24 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Exploring the limits of transfer learning with a unified text-to-text transformer,â The Journal of Machine Learning Research, vol. 21, no. 1, pp. 5485â 5551, 2020. [3] OpenAI, â GPT-4 technical report,â arXiv, 2023. [4] â OpenAI, ChatGPT. https://chat.openai.com.â 2023. [5] M. Shridhar, X. Yuan, M.-A. | 2308.01552#23 | 2308.01552#25 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#25 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | CË ot´e, Y. Bisk, A. Trischler, and M. Hausknecht, â Alfworld: Aligning text and embodied environments for interactive learning,â arXiv preprint arXiv:2010.03768, 2020. [6] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. | 2308.01552#24 | 2308.01552#26 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#26 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Cao, â ReAct: Synergizing reasoning and acting in language models,â arXiv preprint arXiv:2210.03629, 2022. [7] N. Shinn, B. Labash, and A. Gopinath, â Reflexion: an autonomous agent with dynamic memory and self-reflection,â arXiv preprint arXiv:2303.11366, 2023. [8] W. Fedus, B. Zoph, and N. Shazeer, â | 2308.01552#25 | 2308.01552#27 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#27 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity,â The Journal of Machine Learning Research, vol. 23, no. 1, pp. 5232â 5270, 2022. [9] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., â Language models learners,â Advances in Neural Information Processing are few-shot systems, vol. 33, pp. 1877â | 2308.01552#26 | 2308.01552#28 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#28 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | 1901, 2020. [10] L. Zhuang, L. Wayne, S. Ya, and Z. Jun, â A robustly optimized bert pre-training approach with post-training,â in Proceedings of the 20th Chinese national conference on computational linguistics, 2021, pp. 1218â 1227. [11] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, â | 2308.01552#27 | 2308.01552#29 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#29 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Swin transformer: Hierarchical vision transformer using shifted win- dows,â in Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 10 012â 10 022. [12] J. Liang, J. Cao, G. Sun, K. Zhang, L. Van Gool, and R. Timofte, â Swinir: Image restoration using swin transformer,â in Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 1833â | 2308.01552#28 | 2308.01552#30 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#30 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | 1844. [13] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, and I. Mordatch, â Decision transformer: Re- learning via sequence modeling,â Advances in Neural inforcement Information Processing systems, vol. 34, pp. 15 084â 15 097, 2021. [14] K.-H. Lee, O. Nachum, M. S. Yang, L. Lee, D. Freeman, S. Guadarrama, I. Fischer, W. Xu, E. Jang, H. Michalewski et al., â | 2308.01552#29 | 2308.01552#31 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#31 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Multi-game decision transformers,â Advances in Neural Information Processing Systems, vol. 35, pp. 27 921â 27 936, 2022. [15] K. Alexis, C. Papachristos, R. Siegwart, and A. Tzes, â Uniform cov- erage structural inspection pathâ planning for micro aerial vehicles,â in 2015 IEEE international symposium on intelligent control (ISIC). IEEE, 2015, pp. 59â 64. [16] D. S. Chaplot, D. Pathak, and J. Malik, â | 2308.01552#30 | 2308.01552#32 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#32 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Differentiable spatial planning using transformers,â in International Conference on Machine Learning. PMLR, 2021, pp. 1484â 1495. [17] K. He, X. Chen, S. Xie, Y. Li, P. Doll´ar, and R. Girshick, â Masked autoencoders are scalable vision learners,â in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 16 000â | 2308.01552#31 | 2308.01552#33 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#33 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | 16 009. [18] D. Park and S. Y. Chun, â Classification based grasp detection using spatial transformer network,â arXiv preprint arXiv:1803.01356, 2018. [19] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu et al., â Rt-1: Robotics transformer for real-world control at scale,â arXiv preprint arXiv:2212.06817, 2022. [20] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog et al., â | 2308.01552#32 | 2308.01552#34 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#34 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Do as i can, not as i say: Grounding language in robotic affordances,â arXiv preprint arXiv:2204.01691, 2022. [21] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann et al., â | 2308.01552#33 | 2308.01552#35 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#35 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Palm: Scaling language modeling with pathways,â arXiv preprint arXiv:2204.02311, 2022. [22] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar et al., â Inner monologue: Embod- ied reasoning through planning with language models,â | 2308.01552#34 | 2308.01552#36 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#36 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | arXiv preprint arXiv:2207.05608, 2022. [23] W. Huang, P. Abbeel, D. Pathak, and I. Mordatch, â Language models as zero-shot planners: Extracting actionable knowledge for embodied agents,â in International Conference on Machine Learning. PMLR, 2022, pp. 9118â 9147. [24] S. Vemprala, R. Bonatti, A. Bucker, and A. Kapoor, â | 2308.01552#35 | 2308.01552#37 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#37 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Chatgpt for robotics: Design principles and model abilities,â 2023, 2023. [25] N. Wake, A. Kanehira, K. Sasabuchi, J. Takamatsu, and K. Ikeuchi, â Chatgpt Empowered Long-Step Robot Control in Various Environ- ments: A Case Application,â arXiv preprint arXiv:2304.03893, 2023. [26] G. Lu, S. Li, G. Mai, J. Sun, D. Zhu, L. Chai, H. Sun, X. Wang, H. Dai, N. Liu et al., â AGI for agriculture,â | 2308.01552#36 | 2308.01552#38 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#38 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | arXiv preprint arXiv:2304.06136, 2023. [27] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, E. Li, X. Wang, M. Dehghani, S. Brahma et al., â Scaling instruction-finetuned language models,â arXiv preprint arXiv:2210.11416, 2022. [28] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler et al., â Emergent abilities of large language models,â arXiv preprint arXiv:2206.07682, 2022. [29] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. | 2308.01552#37 | 2308.01552#39 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#39 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Chi, Q. Le, and D. Zhou, â Chain of thought prompting elicits reasoning in large language models,â arXiv preprint arXiv:2201.11903, 2022. [30] A. Creswell and M. Shanahan, â Faithful reasoning using large language models,â arXiv preprint arXiv:2208.14271, 2022. [31] M. Nye, A. J. Andreassen, G. Gur-Ari, H. Michalewski, J. Austin, D. Bieber, D. Dohan, A. Lewkowycz, M. Bosma, D. Luan et al., â Show your work: Scratchpads for intermediate computation with language models,â arXiv preprint arXiv:2112.00114, 2021. [32] Z. Wang, S. Cai, A. Liu, X. Ma, and Y. Liang, â | 2308.01552#38 | 2308.01552#40 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#40 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents,â arXiv preprint arXiv:2302.01560, 2023. [33] V. Nair, E. Schumacher, G. Tso, and A. Kannan, â DERA: enhanc- ing large language model completions with dialog-enabled resolving agents,â arXiv preprint arXiv:2303.17071, 2023. [34] A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wiegreffe, U. Alon, N. Dziri, S. Prabhumoye, Y. Yang et al., â | 2308.01552#39 | 2308.01552#41 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#41 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Self-refine: Iterative refinement with self-feedback,â arXiv preprint arXiv:2303.17651, 2023. [35] M. Shridhar, J. Thomason, D. Gordon, Y. Bisk, W. Han, R. Mottaghi, L. Zettlemoyer, and D. Fox, â Alfred: A benchmark for interpret- ing grounded instructions for everyday tasks,â in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 10 740â | 2308.01552#40 | 2308.01552#42 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#42 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | # APPENDIX A ALFWORLD EXPERIMENT DETAILS Here's an example of the InterAct prompt for the heat task. In the prompt, we enclose the items that need to be found within double square brackets, so that they can be fed to the checker and sorter later. InterAct will search for the items based on the sorting result. Each time we arrive at a location, we ask the checker to find the required item. If it is found, we consider the things present at that location as observations and return them to the LLM. If the item is not found, we ask the checker to return the message "Object is not here." | 2308.01552#41 | 2308.01552#43 | 2308.01552 | [
"2206.07682"
]
|
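A minimal sketch of how the double-bracket convention explained above could be wired up: the controller extracts the [[object]] marker from the actor's think step and, at each visited location, feeds the checker's verdict back to the actor in place of the raw observation. Only the bracket convention and the returned message come from the paper; the function names and regex are illustrative assumptions, and the checker is passed in as a callable so the sketch stays self-contained.

```python
import re
from typing import Callable, Optional

# Hypothetical glue for the [[object]] convention described in Appendix A.

BRACKET = re.compile(r"\[\[(.+?)\]\]")

def extract_target(actor_output: str) -> Optional[str]:
    """Return the object marked with [[...]], e.g. 'egg' from 'find an [[egg]]'."""
    match = BRACKET.search(actor_output)
    return match.group(1) if match else None

def observe_location(target: str, raw_observation: str,
                     checker: Callable[[str, str], str]) -> str:
    """What the actor LLM sees after arriving at a location: the checker's verdict."""
    return checker(target, raw_observation)

# Example: extract_target("First I need to find an [[egg]].") returns "egg".
```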
2308.01552#43 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | # <heat> Prompt You are in the middle of a room. Looking quickly around you, you see a cabinet 10, a cabinet 9, a cabinet 8, a cabinet 7, a cabinet 6, a cabinet 5, a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a coffeemachine 1, a countertop 3, a countertop 2, a countertop 1, a diningtable 1, a drawer 6, a drawer 5, a drawer 4, a drawer 3, a drawer 2, a drawer 1, a fridge 1, a garbagecan 1, a microwave 1, a sinkbasin 1, a stoveburner 4, a stoveburner 3, a stoveburner 2, a stoveburner 1, and a toaster 1. Your task is to: heat some egg and put it in diningtable. > think: To solve the task, I need to find and take an egg, then heat it with microwave, then put it in diningtable. OK. > think: First I need to find an [[egg]]. OK. > sorting(egg) [a fridge 1, a countertop 1, a countertop 2, a countertop 3, a diningtable 1, a sinkbasin 1, a garbagecan 1, a microwave 1, a cabinet 1, a cabinet 2, a cabinet 3, a cabinet 4, a cabinet 5, a cabinet 6, a cabinet 7, a cabinet 8, a cabinet 9, a cabinet 10, a stoveburner 1, a stoveburner 2, a stoveburner 3, a stoveburner 4, a drawer 1, a drawer 2, a drawer 3, a drawer 4, a drawer 5, a drawer 6, a coffeemachine 1, a toaster 1] > think: | 2308.01552#42 | 2308.01552#44 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#44 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | Based on the sorting provided, I can check fridge (1), countertop (1-3), diningtable (1), sinkbasin (1), garbagecan (1), microwave (1), cabinet (1-10), stoveburner (1-4), drawer (1-6), coffeemachine (1), toaster (1) one by one to find egg, starting with fridge 1. OK. > go to fridge 1 The fridge 1 is closed. > open fridge 1 egg is not here. > go to countertop 1 egg is not here. > go to countertop 2 egg is not here. > go to countertop 3 On the countertop 3, you see a bowl 1, a butterknife 1, a egg 2, a kettle 2, a plate 1, a sink 1, and a spatula 2. > think: Now I find an egg (2). Next, I need to take it. OK. > take egg 2 from countertop 3 You pick up the egg 2 from the countertop 3. > think: Now I take an egg (2). Next, I need go to a microwave (1) and heat it. OK. > go to microwave 1 The microwave 1 is closed. > heat egg 2 with microwave 1 You heat the egg 2 using the microwave 1. > think: Now I heat an egg (2). Next, I need to put it in/on diningtable 1. OK. > go to diningtable 1 On the diningtable 1, you see a apple 2, a bread 3, a egg 1, a kettle 1, a knife 1, a mug 1, a papertowelroll 1, a peppershaker 2, a potato 1, a soapbottle 1, and a spatula 1. > put egg 2 in/on diningtable 1 You put the egg 2 in/on the diningtable 1. In the â pick2â | 2308.01552#43 | 2308.01552#45 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#45 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | task, we have incorporated new trajectory planning. After locating and placing the first object in the appropriate receptacle, we enable the model to independently generate a trajectory, ensuring that the path does not overlook areas where the second object may be located. # <picktwo> Prompt You are in the middle of a room. Looking quickly around you, you see a armchair 2, a armchair 1, a bed 1, a countertop 1, a diningtable 1, a drawer 2, a drawer 1, a dresser 1, a laundryhamper 1, and a sidetable 1. Your task is to: put two creditcard in dresser. > think: To solve the task, I need to find and take the first creditcard, then put it in dresser, then find and take the second creditcard, then put it in dresser. OK. > think: First I need to find the first [[creditcard]]. OK. > sorting(creditcard) [countertop 1, diningtable 1, sidetable 1, armchair 1, armchair 2, bed 1, drawer 1, drawer 2, dresser 1, laundryhamper 1] > think: Based on the sorting provided, I can check couter- top (1), diningtable (1), sidetable (1), armchair (1-2), bed (1), drawer (1-2), dresser (1), laundryhamper (1) one by one to find creditcard, starting with countertop 1. OK. > go to countertop 1 creditcard is not here. > go to diningtable 1 On the diningtable 1, you see a cellphone 2, a creditcard 4, a creditcard 3, a creditcard 2, a mirror 1, a pencil 2, and a pencil 1. > think: Now I find the first creditcard (2). Next, I need to take it. OK. > take creditcard 2 from diningtable 1 You pick up the creditcard 2 from the diningtable 1. > think: Now I take the first creditcard (2). Next, I need to put it in/on dresser 1. | 2308.01552#44 | 2308.01552#46 | 2308.01552 | [
"2206.07682"
]
|
2308.01552#46 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | OK. > go to dresser 1 On the dresser 1, you see a mug 1, and a television 1. > put creditcard 2 in/on dresser 1 You put the creditcard 2 in/on the dresser 1. > think: Now I put the first creditcard in dresser. Next, I need to find the second [[creditcard]]. I should go back to diningtable 1 to see if there is more creditcard. If not, I will continue to check sidetable (1), armchair (1-2), bed (1), drawer (1-2), dresser (1), laundryhamper (1) one by one to find credit- card. OK. > go to diningtable 1 On the diningtable 1, you see a cellphone 2, a creditcard 4, a creditcard 3, a mirror 1, a pencil 2, and a pencil 1. > think: Now I find the second creditcard (3). Next, I need to take it. OK. > take creditcard 3 from diningtable 1 You pick up the creditcard 3 from the diningtable 1. > think: Now I take the second creditcard (3). Next, I need to put it in/on dresser 1. OK. > go to dresser 1 On the dresser 1, you see a creditcard 2, a mug 1, and a television 1. > put creditcard 3 in/on dresser 1 You put the creditcard 3 in/on the dresser 1. | 2308.01552#45 | 2308.01552 | [
"2206.07682"
]
|
|
2308.01542#0 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | # Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents # Ziheng Huang [email protected] University of California–San Diego San Diego, CA, USA Sebastian Gutierrez [email protected] Temple University Philadelphia, PA, USA Hemanth Kamana [email protected] Temple University Philadelphia, PA, USA # Stephen MacNeil [email protected] Temple University Philadelphia, PA, USA | 2308.01542#1 | 2308.01542 | [
"2210.08750"
]
|
|
2308.01542#1 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | ABSTRACT The recent advent of large language models (LLM) has resulted in high-performing conversational agents such as chatGPT. These agents must remember key information from an ongoing conversa- tion to provide responses that are contextually relevant to the user. However, these agents have limited memory and can be distracted by irrelevant parts of the conversation. While many strategies exist to manage conversational memory, users currently lack affordances for viewing and controlling what the agent remembers, resulting in a poor mental model and conversational breakdowns. In this paper, we present Memory Sandbox, an interactive system and design probe that allows users to manage the conversational memory of LLM-powered agents. By treating memories as data objects that can be viewed, manipulated, recorded, summarized, and shared across conversations, Memory Sandbox provides interaction affordances for users to manage how the agent should â | 2308.01542#0 | 2308.01542#2 | 2308.01542 | [
"2210.08750"
]
|
2308.01542#2 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | seeâ the conversation. CCS CONCEPTS â ¢ Computing methodologies â Intelligent agents; â ¢ Human- centered computing â Interactive systems and tools; # KEYWORDS Human-AI Interaction, Large Language Models, Chatbots Multiple strategies have been introduced to manage agentsâ conversational memory. For example, the conversation can be automatically summarized [21] and refined [24] to reduce redun- dancy while maintaining key information. Some systems selectively store [12, 22] and update [1] key memories. Relevant memories can also be retrieved based on the user input [1, 15, 21]. However, these memory management strategies are hidden behind the interface, resulting in a lack of transparency. Users often do not know what strategy is being used and have limited control over it. This makes it difficult for users to repair conversational breakdowns that happen when there is a misalignment between how the agent manages the memory and how the user perceives the conversation. We present Memory sandbox, shown in Figure 1, a system that allows users to see and manage the memory of conversational agents to align with user understanding of the conversation. Mem- ory Sandbox transforms conversational memory, previously man- aged behind the user interface, into interactive memory objects within the interface. Users can manipulate the visibility and con- tent of memory objects, spatially rearrange them, and share them across conversations. We make the following contributions: 1) The conceptualization of memory objects which makes conversational memory transparent and interactive and 2) The Memory Sandbox system that offers novel interaction affordances for users to view and manipulate the conversational memory of an intelligent agent. 1 INTRODUCTION Large Language Models (LLMs) are currently capable of generating human-like responses in open-domain tasks [4]. This has led to a new generation of conversational agents, such as chatGPT, which are now being widely used across domains. To ensure that agents generate responses that are contextually relevant and coherent to an ongoing conversation, these agents must maintain a working memory of the conversational history that has occurred up to that point in the conversation. The default strategy is to use as much of the conversational history as will fit within the input size limit of the LLM. Parts of the conversations that go beyond that buffer limit are forgotten, which leads to breakdowns when users assume the model remembers past context. | 2308.01542#1 | 2308.01542#3 | 2308.01542 | [
"2210.08750"
]
|
2308.01542#3 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | Additionally, as the input buffer size increases, the performance of the LLM degrades as it struggles to retrieve relevant context and can be distracted by irrelevant context [11, 18]. This problem is compounded because users do not know how the LLM is leveraging the memory to generate responses. 2 SYSTEM OVERVIEW Memory sandbox is a system that provides users with the ability to view and manipulate the memory model of an intelligent agent, resulting in a shared representation of their ongoing conversation. Memory Sandbox introduces the concept of a memory object, an in- teractive piece of conversational history that can be moved, edited, deleted, or combined with other memory objects through sum- marization. | 2308.01542#2 | 2308.01542#4 | 2308.01542 | [
"2210.08750"
]
|
2308.01542#4 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The interface is implemented in Next.js and uses the GPT-3.5 turbo model from the OpenAI API. Below we present the features of Memory Sandbox to help end users view and manage an LLM-powered agentâ s memory model. 2.1 View and manipulate memory objects Explainable AI research seeks to help people form mental models of intelligent systems [17]. Transparency of the inner workings of the system [6, 23] and interactivity to probe and manipulate the Huang, et al. Conversation 1 Conversation 2 assistant _ hello thore! im a design process chatbot that can guide assistant _ hello there! im a systems thinking chatbot that help you > = youthrough the design process! how can | assist you seo ¢ today? © Selected Memory For Summary ® @ user im working on de quality online e ing design solutions for high think through the interconnected stakeholders! how can ees * ©â Gj assist you today? @ summary _ stakeholders in online education include students, teachers, administrators, parents, support staff, set H © © G technical teams, and regulatory bodies. summary _ stakeholders in online education include students, = og = teachers, administrators, parents, support staff, Draggable S © 6S technical teams, and regulatory bodies. emer be ae user before going into specific stakeholders, what are some 5 4 common obstacles ofall these stakeholders ? s@é user what should be my next steps ? set Figure 1: | 2308.01542#3 | 2308.01542#5 | 2308.01542 | [
"2210.08750"
]
|
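As an illustration of the memory-object idea described in the chunks above (conversational memory surfaced as data objects that users can view, edit, hide, and reorder), the sketch below shows one plausible data model and how only visible objects would be assembled into the LLM context. Memory Sandbox itself is implemented in Next.js against the GPT-3.5 turbo API; this Python sketch is an assumption-laden illustration, not the system's code.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative data model for "memory objects"; not Memory Sandbox's actual code.

@dataclass
class MemoryObject:
    role: str              # "user" or "assistant"
    content: str           # editable text of this piece of conversation history
    visible: bool = True   # toggled off to hide the memory from the agent

@dataclass
class Conversation:
    memories: List[MemoryObject] = field(default_factory=list)

    def context_for_llm(self) -> List[Dict[str, str]]:
        """What the agent 'sees': only visible memories, in their current order."""
        return [{"role": m.role, "content": m.content}
                for m in self.memories if m.visible]
```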
2308.01542#5 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | Memory Sandbox is a system that enables users to see and manage the memory of conversational agents. Memory Sandbox provides the following interaction affordances: 1) toggle memory visibility, 2) add memory, 3) edit memory, 4) delete memory, 5) summarize memory, 6) create a new conversation, and 7) share memory. system [16] have been demonstrated to help people interpret and interact with intelligent systems to achieve their goals. Memory Sandbox makes the conversational memory explicit through the use of â memory objectsâ which can be viewed and manipulated within the interface. This was inspired by prior work that â objectifiesâ tools [2, 3] and attributes [20] to enable flexibility, expressiveness, and direct manipulation. This results in a â shared representationâ [7, 8] and common ground [5]â so what users see on the front-end is what an LLM would â seeâ on the back-end. Additionally, users can view, edit, add, and delete memory objects to directly control how the agent â seesâ the conversation. 2.2 Toggle memory object visibility As a conversation grows, LLMs must increasingly rely on their memory management strategy to infer meaning from the conversa- tion. However, in longer conversations, it is unclear what parts of the conversation are stored in memory or are attended to by the model [11]. This results in a poor mental model for users and a lack of control over what context is maintained and used by the agent. Memory Sandbox enables users to selectively hide or show mem- ory objects to control what context is shared with the agent. When the userâ s intent changes or the conversational context switches, the user can toggle the visibility of memory objects to hide or show parts of the conversation. As a signifier, hidden memory objects are grayed out within the interface. supplement extra context. Additionally, the arrangement of context is shown to have a significant effect on how well LLMs are able to leverage relevant context [11]. In Memory Sandbox, all the memory objects are draggable, allowing users to experiment and refine the ordering and placement of memory objects in a conversation. 2.4 Summarize memory objects Reminiscent of how humans attend to key aspects in a conver- sation [14], abstractive summarization distills a large amount of information to provide essential elements to the agent. | 2308.01542#4 | 2308.01542#6 | 2308.01542 | [
"2210.08750"
]
|
2308.01542#6 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | Yet, what is considered as â key aspectsâ can vary for individuals, even in the same conversation [14]. Memory Sandbox enables uses to select memory objects that are summarized by the LLM. The resulting memory object represents the previous conversation and can be further refined by the user. The original conversation can be viewed by clicking on the summary. 2.5 Share memory objects across conversations Aligning with the goal of managing memory, Memory Sandbox also provides affordances for sharing memories across conversations. This offers a new way for users to engage with multiple agents outside of a single conversation thread. Unlike in conversations with people, the speaker doesnâ t need to repeat themselves in each conversation to establish a shared understanding. 2.3 Curate memory objects Discussants develop and refine their understanding as a conversa- tion unfolds [5]. Thus, Memory Sandbox provides controls for users to curate memory objects by editing an existing memory object to refine or update the context, deleting a memory object to remove completely irrelevant context, and adding a new memory object to Users can create and start multiple conversations with separate LLM-powered agents in the same 2D canvas. Memory objects can be shared and connected between conversations by dragging the memory object from one conversation to another. When dragging, memories are copied by reference to help the user identify the context source. Memory Sandbox 3 DISCUSSION Conversing is a collaborative activity where participants develop common ground through summarizing the discussion, repairing breakdowns, and emphasizing or de-emphasizing shared ideas [5]. Yet, existing chatbot interfaces do not provide affordances for under- standing how the agent â | 2308.01542#5 | 2308.01542#7 | 2308.01542 | [
"2210.08750"
]
|
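A rough sketch of the summarize and share interactions described above: selected memory objects are condensed by the LLM into a single summary object, and a memory object can be attached to another conversation by reference. It builds on the MemoryObject/Conversation sketch shown earlier; `summarize_with_llm` is a stand-in for the GPT-3.5 turbo call, and none of this is the system's actual implementation.

```python
# Hypothetical sketch of summarizing and sharing memory objects; assumes the
# MemoryObject / Conversation classes from the previous sketch are in scope.

def summarize_with_llm(text: str) -> str:
    raise NotImplementedError  # stand-in for a GPT-3.5 turbo summarization call

def summarize_memories(conversation, selected):
    """Collapse the selected memory objects into one editable summary object."""
    joined = "\n".join(f"{m.role}: {m.content}" for m in selected)
    summary = MemoryObject(role="assistant", content=summarize_with_llm(joined))
    index = conversation.memories.index(selected[0])
    for m in selected:
        m.visible = False                 # hide originals; they remain viewable
    conversation.memories.insert(index, summary)
    return summary

def share_memory(memory, target_conversation):
    """Drag-and-drop sharing: both conversations reference the same object."""
    target_conversation.memories.append(memory)
```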
2308.01542#7 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | seesâ the conversation. Additionally, users can not rely on a theory of mind. These aspects result in a poor men- tal model for users and potential misalignment in understanding where conversational breakdown can occur. Memory Sandbox transforms previously implicitly managed con- versational memory behind the interface into interactive memory objects on the interface, exposing full control over the memory model of the agent to end users. By selectively hiding, showing, and curating memory representation, we can give users more con- trol over how the agent should â | 2308.01542#6 | 2308.01542#8 | 2308.01542 | [
"2210.08750"
]
|
2308.01542#8 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | seeâ the conversation. In addition to curating memory in a single conversation, Memory Sandbox is also a design probe toward memory manipulation affordances for multi-agent interactions. By displaying multiple agents on the same screen and making memories interactive and draggable, Mem- ory Sandbox allows end users to selectively control the shared or unique memory each agent contains. Tools are beginning to emerge that focus on how users might in- teract with LLMs, including mapping UI affordances to an LLM [13], grounding human-AI collaboration in a shared artifact [9], provid- ing templates to facilitate prompt generation [10], and decomposing complex prompts to facilitate debugging [19]. In this paper, we pre- sented Memory Sandbox an interactive system that probes the design space of interaction techniques for memory management of LLMs. Our future work includes user studies to evaluate the efficacy of these techniques and potential trade-offs for implicit vs explicit memory management REFERENCES [1] Sanghwan Bae, Donghyun Kwak, Soyoung Kang, Min Young Lee, Sungdong Kim, Yuin Jeong, Hyeri Kim, Sang-Woo Lee, Woomyoung Park, and Nako Sung. 2022. Keep me updated! memory management in long-term conversations. arXiv preprint arXiv:2210.08750 (2022). [2] Benjamin B Bederson, James D Hollan, Allison Druin, Jason Stewart, David Rogers, and David Proft. 1996. Local tools: An alternative to tool palettes. In Proceedings of the 9th annual ACM symposium on User interface software and technology. 169â 170. [3] Eric A Bier, Maureen C Stone, Ken Pier, William Buxton, and Tony D DeRose. 1993. | 2308.01542#7 | 2308.01542#9 | 2308.01542 | [
"2210.08750"
]
|
2308.01542#9 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | Toolglass and magic lenses: the see-through interface. In Proceedings of the 20th annual conference on Computer graphics and interactive techniques. 73â 80. [4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877â | 2308.01542#8 | 2308.01542#10 | 2308.01542 | [
"2210.08750"
]
|
2308.01542#10 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | 1901. [5] Herbert H Clark and Edward F Schaefer. 1989. Contributing to discourse. Cogni- tive science 13, 2 (1989), 259â 294. [6] Malin Eiband, Hanna Schneider, Mark Bilandzic, Julian Fazekas-Con, Mareike Haug, and Heinrich Hussmann. 2018. Bringing transparency design into practice. [7] In 23rd international conference on intelligent user interfaces. 211â 223. Jeffrey Heer. 2019. Agency plus automation: Designing artificial intelligence into interactive systems. | 2308.01542#9 | 2308.01542#11 | 2308.01542 | [
"2210.08750"
]
|
2308.01542#11 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | Proceedings of the National Academy of Sciences 116, 6 (2019), 1844â 1850. [8] Eric Horvitz. 1999. Principles of Mixed-Initiative User Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Pittsburgh, Pennsylvania, USA) (CHI â 99). Association for Computing Machinery, New York, NY, USA, 159â 166. https://doi.org/10.1145/302979.303030 [9] Ziheng Huang, Kexin Quan, Joel Chan, and Stephen MacNeil. 2023. CausalMap- per: Challenging designers to think in systems with Causal Maps and Large Language Model. In Proceedings of the 15th Conference on Creativity and Cognition. 325â 329. [10] Ellen Jiang, Kristen Olson, Edwin Toh, Alejandra Molina, Aaron Donsbach, Michael Terry, and Carrie J Cai. 2022. | 2308.01542#10 | 2308.01542#12 | 2308.01542 | [
"2210.08750"
]
|
2308.01542#12 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | Promptmaker: Prompt-based prototyping with large language models. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1â 8. [11] Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023. Lost in the Middle: How Language Models Use Long Contexts. arXiv:2307.03172 [cs.CL] [12] Zhengyi Ma, Zhicheng Dou, Yutao Zhu, Hanxun Zhong, and Ji-Rong Wen. 2021. One chatbot per person: Creating personalized chatbots based on implicit user profiles. | 2308.01542#11 | 2308.01542#13 | 2308.01542 | [
"2210.08750"
]
|
2308.01542#13 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | In Proceedings of the 44th international ACM SIGIR conference on research and development in information retrieval. 555â 564. [13] Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, and Dan Mogil. 2023. Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances. arXiv preprint arXiv:2307.01142 (2023). John C McCarthy, Victoria C Miles, and Andrew F Monk. 1991. An experimental study of common ground in text-based communication. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 209â 215. Joon Sung Park, Joseph C Oâ Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. | 2308.01542#12 | 2308.01542#14 | 2308.01542 | [
"2210.08750"
]
|
2308.01542#14 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442 (2023). [14] [15] [16] Andrew Ross, Nina Chen, Elisa Zhao Hang, Elena L Glassman, and Finale Doshi- Velez. 2021. Evaluating the interpretability of generative models by interactive reconstruction. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1â | 2308.01542#13 | 2308.01542#15 | 2308.01542 | [
"2210.08750"
]
|
2308.01542#15 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | 15. [17] Heleen Rutjes, Martijn Willemsen, and Wijnand IJsselsteijn. 2019. Considerations on explainable AI and usersâ mental models. In CHI 2019 Workshop: Where is the Human? Bridging the Gap Between AI and HCI. Association for Computing Machinery, Inc. [18] Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H Chi, Nathanael Schärli, and Denny Zhou. 2023. | 2308.01542#14 | 2308.01542#16 | 2308.01542 | [
"2210.08750"
]
|