bibtex_url: null
proceedings: stringlengths (42 to 42)
bibtext: stringlengths (197 to 848)
abstract: stringlengths (303 to 3.45k)
title: stringlengths (10 to 159)
authors: sequencelengths (1 to 34)
id: stringclasses (44 values)
arxiv_id: stringlengths (0 to 10)
GitHub: sequencelengths (1 to 1)
paper_page: stringclasses (899 values)
n_linked_authors: int64 (-1 to 13)
upvotes: int64 (-1 to 109)
num_comments: int64 (-1 to 13)
n_authors: int64 (-1 to 92)
Models: sequencelengths (0 to 100)
Datasets: sequencelengths (0 to 19)
Spaces: sequencelengths (0 to 100)
old_Models: sequencelengths (0 to 100)
old_Datasets: sequencelengths (0 to 19)
old_Spaces: sequencelengths (0 to 100)
paper_page_exists_pre_conf: int64 (0 to 1)
type: stringclasses (2 values)
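The columns above follow Hugging Face `datasets` feature conventions (string, sequence, and int64 features with the value ranges shown). Below is a minimal sketch of how a table with this schema could be loaded and queried; the repo id and split name are placeholders (assumptions, not taken from this card).

```python
# Minimal sketch, assuming the records below are hosted on the Hugging Face Hub
# under a hypothetical repo id ("<org>/<dataset-name>" is a placeholder) with a
# "train" split matching the schema listed above.
from datasets import load_dataset

ds = load_dataset("<org>/<dataset-name>", split="train")
print(ds.features)  # should mirror the column list above

# Example query: papers that already had a paper page before the conference
# and list at least one non-empty GitHub URL.
linked = ds.filter(
    lambda row: row["paper_page_exists_pre_conf"] == 1
    and any(url.strip() for url in row["GitHub"])
)
for row in linked:
    print(row["title"], row["arxiv_id"], row["paper_page"])
```

In the records below, a value of -1 in the integer columns appears to mark papers without a linked paper page, so a filter like the one above implicitly skips them.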
null
https://openreview.net/forum?id=yQT406rH72
@inproceedings{ wu2023pairwise, title={Pairwise Proximal Policy Optimization: Harnessing Relative Feedback for {LLM} Alignment}, author={Tianhao Wu and Banghua Zhu and Ruoyu Zhang and Zhaojin Wen and Kannan Ramchandran and Jiantao Jiao}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=yQT406rH72} }
Large Language Models (LLMs) can acquire extensive world knowledge through pre-training on large corpora. However, due to exposure to low-quality data, LLMs may exhibit harmful behavior without aligning with human values. The dominant approach for steering LLMs towards beneficial behavior involves Reinforcement Learning with Human Feedback (RLHF), with Proximal Policy Optimization (PPO) serving as the default RL optimizer. Despite its effectiveness, PPO has limitations when optimizing rewards trained from comparison-based loss. Primarily, PPO is not invariant to equivalent reward functions containing identical preference information due to the need to calibrate the reward scale. Additionally, PPO's necessity for token-wise updates introduces complexity in both function approximation and algorithm design compared to trajectory-wise optimization. This paper proposes a new framework, reinforcement learning with relative feedback, and a novel trajectory-wise policy gradient algorithm, Pairwise Proximal Policy Optimization (P3O) that operates directly on comparative rewards. We show theoretically that P3O is invariant to equivalent rewards and avoids the complexity of PPO. Empirical evaluations demonstrate that P3O outperforms PPO in the KL-Reward trade-off and can align with human preferences as well as or better than prior methods. In summary, this work introduces a simpler yet effective approach for aligning LLMs to human preferences through relative feedback.
Pairwise Proximal Policy Optimization: Harnessing Relative Feedback for LLM Alignment
[ "Tianhao Wu", "Banghua Zhu", "Ruoyu Zhang", "Zhaojin Wen", "Kannan Ramchandran", "Jiantao Jiao" ]
Workshop/FMDM
2310.00212
[ "" ]
https://huggingface.co/papers/2310.00212
0
2
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=yD9JAKItJE
@inproceedings{ snyder2023target, title={Target Rate Optimization: Avoiding Iterative Error Exploitation}, author={Braham Snyder and Amy Zhang and Yuke Zhu}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=yD9JAKItJE} }
Many real-world reinforcement learning (RL) problems remain intractable. A key issue is that sample-efficient RL algorithms are unstable. Early stopping sometimes works around this. Yet early stopping in RL can be difficult, since the instability itself can result in few training steps having good policies. Standard early stopping stops all learning. Fixing the early stopping implicitly used with most target networks might be more robust. That is, in algorithms like DQN, the target update rate already early-stops DQN’s target-fitting subproblems. Currently, practitioners must either hope the default target rate performs well, or tune it with an expensive grid search over online returns. Moreover, within a run, algorithms like DQN continue to update the target even when the updates _increase_ the training error. This degrades value estimates, which degrades returns. Newer off-policy and offline RL algorithms lessen this well-known deadly triad divergence, but often require excessive pessimism to avoid it, gaining stability but at lower return. To combat these issues, we propose adding optimization of the training error w.r.t. the target update rate. Our algorithm, Target Rate Optimization, empirically prevents divergence and increases return by up to ~3× on a handful of discrete- and continuous-action RL problems.
Target Rate Optimization: Avoiding Iterative Error Exploitation
[ "Braham Snyder", "Amy Zhang", "Yuke Zhu" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=xs0fA5iSYv
@inproceedings{ shi2023unleashing, title={Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning}, author={Ruizhe Shi and Yuyao Liu and Yanjie Ze and Simon Du and Huazhe Xu}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=xs0fA5iSYv} }
Offline reinforcement learning (RL) aims to find a near-optimal policy using pre-collected datasets. Given recent advances in Large Language Models (LLMs) and their few-shot learning prowess, this paper introduces $\textbf{La}$nguage Models for $\textbf{Mo}$tion Control ($\textbf{LaMo}$), a general framework based on Decision Transformers to effectively use pre-trained Language Models (LMs) for offline RL. Our framework highlights four crucial components: (1) Initializing Decision Transformers with sequentially pre-trained LMs, (2) employing the LoRA fine-tuning method, in contrast to full-weight fine-tuning, to combine the pre-trained knowledge from LMs and in-domain knowledge effectively, (3) using the non-linear MLP transformation instead of linear projections, to generate embeddings, and (4) integrating an auxiliary language prediction loss during fine-tuning to stabilize the LMs and retain their original abilities on languages. Empirical results indicate $\textbf{LaMo}$ achieves state-of-the-art performance in sparse-reward tasks and closes the gap between value-based offline RL methods and decision transformers in dense-reward tasks. In particular, our method demonstrates superior performance in scenarios with limited data samples.
Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning
[ "Ruizhe Shi", "Yuyao Liu", "Yanjie Ze", "Simon Shaolei Du", "Huazhe Xu" ]
Workshop/FMDM
2310.20587
[ "https://github.com/srzer/LaMo-2023" ]
https://huggingface.co/papers/2310.20587
3
16
1
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=xbOtG1P723
@inproceedings{ jiang2023hgap, title={H-{GAP}: Humanoid Control with a Generalist Planner}, author={zhengyao jiang and Yingchen Xu and Nolan Wagener and Yicheng Luo and Michael Janner and Edward Grefenstette and Tim Rockt{\"a}schel and Yuandong Tian}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=xbOtG1P723} }
Humanoid control is an important research challenge offering avenues for integration into human-centric infrastructures and enabling physics-driven humanoid animations. The daunting challenges in this field stem from the difficulty of optimizing in high-dimensional action spaces and the instability introduced by the bipedal morphology of humanoids. However, the extensive collection of human motion-captured data and the derived datasets of humanoid trajectories, such as MoCapAct, pave the way to tackle these challenges. In this context, we present Humanoid Generalist Autoencoding Planner (H-GAP), a state-action trajectory generative model trained on humanoid trajectories derived from human motion-captured data, capable of adeptly handling downstream control tasks with Model Predictive Control (MPC). For a humanoid with 56 degrees of freedom, we empirically demonstrate that H-GAP learns to represent and generate a wide range of motor behaviours. Further, without any learning from online interactions, it can also flexibly transfer these behaviours to solve novel downstream control tasks via planning. Notably, H-GAP surpasses established MPC baselines with access to the ground truth model, and is superior or comparable to offline RL methods trained for individual tasks. Finally, we conduct a series of empirical studies on the scaling properties of H-GAP, showing the potential for performance gains via additional data but not additional compute.
H-GAP: Humanoid Control with a Generalist Planner
[ "zhengyao jiang", "Yingchen Xu", "Nolan Wagener", "Yicheng Luo", "Michael Janner", "Edward Grefenstette", "Tim Rocktäschel", "Yuandong Tian" ]
Workshop/FMDM
2312.02682
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=vnhgDP4Ise
@inproceedings{ logeswaran2023reasoning, title={Reasoning about Action Preconditions with Programs}, author={Lajanugen Logeswaran and Sungryull Sohn and Yiwei Lyu and Anthony Liu and Dong-Ki Kim and Dongsub Shim and Moontae Lee and Honglak Lee}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=vnhgDP4Ise} }
One of the fundamental skills required for an agent acting in an environment to complete tasks is the ability to understand what actions are plausible at any given point. This work explores a novel use of code representations to reason about action preconditions for sequential decision making tasks. Code representations offer the flexibility to model procedural activities and associated constraints as well as the ability to execute and verify constraint satisfaction. Leveraging code representations, we decompose the problem of learning an agent policy for sequential decision making tasks into the sub-problems of precondition inference and action prediction. We show that these sub-problems can be formulated as code-completion problems and exploit pre-trained code understanding models to tackle them. We demonstrate that the proposed code representation coupled with our novel precondition-aware action prediction strategy outperforms prior policy learning approaches in a few-shot learning setting across task-oriented dialog and embodied textworld benchmarks.
Reasoning about Action Preconditions with Programs
[ "Lajanugen Logeswaran", "Sungryull Sohn", "Yiwei Lyu", "Anthony Liu", "Dong-Ki Kim", "Dongsub Shim", "Moontae Lee", "Honglak Lee" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=v67CRd1Fap
@inproceedings{ schumann2023velma, title={{VELMA}: Verbalization Embodiment of {LLM} Agents for Vision and Language Navigation in Street View}, author={Raphael Schumann and Wanrong Zhu and Weixi Feng and Tsu-Jui Fu and Stefan Riezler and William Yang Wang}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=v67CRd1Fap} }
Incremental decision making in real-world environments is one of the most challenging tasks in embodied artificial intelligence. One particularly demanding scenario is Vision and Language Navigation (VLN) which requires visual and natural language understanding as well as spatial and temporal reasoning capabilities. The embodied agent needs to ground its understanding of navigation instructions in observations of a real-world environment like Street View. Despite the impressive results of LLMs in other research areas, it remains an open problem how best to connect them with an interactive visual environment. In this work, we propose VELMA, an embodied LLM agent that uses a verbalization of the trajectory and of visual environment observations as contextual prompt for the next action. Visual information is verbalized by a pipeline that extracts landmarks from the human written navigation instructions and uses CLIP to determine their visibility in the current panorama view. We show that VELMA is able to successfully follow navigation instructions in Street View with only two in-context examples. We further finetune the LLM agent on a few thousand examples and achieve 25%-30% relative improvement in task completion over the previous state-of-the-art for two datasets.
VELMA: Verbalization Embodiment of LLM Agents for Vision and Language Navigation in Street View
[ "Raphael Schumann", "Wanrong Zhu", "Weixi Feng", "Tsu-Jui Fu", "Stefan Riezler", "William Yang Wang" ]
Workshop/FMDM
2307.06082
[ "https://github.com/raphael-sch/velma" ]
https://huggingface.co/papers/2307.06082
1
0
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=tlRUbI0Yf3
@inproceedings{ li2023chain, title={Chain of Code: Reasoning with a Language Model-Augmented Code Interpreter}, author={Chengshu Li and Jacky Liang and Fei Xia and Andy Zeng and Sergey Levine and Dorsa Sadigh and Karol Hausman and Xinyun Chen and Li Fei-Fei and brian ichter}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=tlRUbI0Yf3} }
Code provides a general syntactic structure to build complex programs and perform precise computations when paired with a code interpreter – we hypothesize that language models (LMs) can leverage code-writing to improve Chain of Thought reasoning not only for logic and arithmetic tasks, but also for semantic ones (and in particular, those that are a mix of both). For example, consider prompting an LM to write code that counts the number of times it detects sarcasm in an essay: the LM may struggle to write an implementation for "detect_sarcasm(string)" that can be executed by the interpreter (handling the edge cases would be insurmountable). However, LMs may still produce a valid solution if they not only write code, but also selectively "emulate" the interpreter by generating the expected output of "detect_sarcasm(string)" and other lines of code that cannot be executed. In this work, we propose Chain of Code (CoC), a simple yet surprisingly effective extension that improves LM code-driven reasoning. The key idea is to encourage LMs to format semantic sub-tasks in a program as flexible pseudocode, so that the interpreter can explicitly catch undefined behaviors and hand them off to an LM to simulate (as an "LMulator"). Experiments demonstrate that Chain of Code outperforms Chain of Thought and other baselines across a variety of benchmarks; on BIG-Bench Hard, Chain of Code achieves 84%, a gain of 12% over Chain of Thought. CoC scales well with large and small models alike, and broadens the scope of reasoning questions that LMs can correctly answer by "thinking in code". Project website: https://chain-of-code.github.io
Chain of Code: Reasoning with a Language Model-Augmented Code Emulator
[ "Chengshu Li", "Jacky Liang", "Andy Zeng", "Xinyun Chen", "Karol Hausman", "Dorsa Sadigh", "Sergey Levine", "Li Fei-Fei", "Fei Xia", "brian ichter" ]
Workshop/FMDM
2312.04474
[ "" ]
https://huggingface.co/papers/2312.04474
6
29
3
10
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=tTZVrnr45N
@inproceedings{ yuan2023skill, title={Skill Reinforcement Learning and Planning for Open-World Long-Horizon Tasks}, author={Haoqi Yuan and Chi Zhang and Hongcheng Wang and Feiyang Xie and Penglin Cai and Hao Dong and Zongqing Lu}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=tTZVrnr45N} }
We study building multi-task agents in open-world environments. Without human demonstrations, learning to accomplish long-horizon tasks in a large open-world environment with reinforcement learning (RL) is extremely inefficient. To tackle this challenge, we convert the multi-task learning problem into learning basic skills and planning over the skills. Using the popular open-world game Minecraft as the testbed, we propose three types of fine-grained basic skills, and use RL with intrinsic rewards to acquire skills. A novel Finding-skill that performs exploration to find diverse items provides better initialization for other skills, improving the sample efficiency for skill learning. In skill planning, we leverage the prior knowledge in Large Language Models to find the relationships between skills and build a skill graph. When the agent is solving a task, our skill search algorithm walks on the skill graph and generates the proper skill plans for the agent. In experiments, our method accomplishes 40 diverse Minecraft tasks, where many tasks require sequentially executing for more than 10 skills. Our method outperforms baselines by a large margin and is the most sample-efficient demonstration-free RL method to solve Minecraft Tech Tree tasks. The project's website and code can be found at https://sites.google.com/view/plan4mc.
Skill Reinforcement Learning and Planning for Open-World Long-Horizon Tasks
[ "Haoqi Yuan", "Chi Zhang", "Hongcheng Wang", "Feiyang Xie", "Penglin Cai", "Hao Dong", "Zongqing Lu" ]
Workshop/FMDM
2303.16563
[ "" ]
https://huggingface.co/papers/2303.16563
0
0
0
7
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=sv7KZcUqu1
@inproceedings{ jarrett2023language, title={Language Agents as Digital Representatives in Collective Decision-Making}, author={Daniel Jarrett and Miruna Pislar and Michael Tessler and Michiel Bakker and Raphael Koster and Jan Balaguer and Romuald Elie and Christopher Summerfield and Andrea Tacchetti}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=sv7KZcUqu1} }
Consider the process of collective decision-making, in which a group of individuals interactively select a preferred outcome from among a universe of alternatives. In this context, "representation" is the activity of making an individual's preferences present in the process via participation by a proxy agent---i.e. their "representative". To this end, learned models of human behavior have the potential to fill this role, with practical implications for multi-agent scenario studies and mechanism design. In this work, we investigate the possibility of training *language agents* to behave in the capacity of representatives of human agents, appropriately expressing the preferences of those individuals whom they stand for. First, we formalize the setting of *collective decision-making*---as the episodic process of interaction between a group of agents and a decision mechanism. On this basis, we then formalize the problem of *digital representation*---as the simulation of an agent's behavior to yield equivalent outcomes from the mechanism. Finally, we conduct an empirical case study in the setting of *consensus-finding* among diverse humans, and demonstrate the feasibility of fine-tuning large language models to act as digital representatives.
Language Agents as Digital Representatives in Collective Decision-Making
[ "Daniel Jarrett", "Miruna Pislar", "Michiel A. Bakker", "Michael Henry Tessler", "Raphael Koster", "Jan Balaguer", "Romuald Elie", "Christopher Summerfield", "Andrea Tacchetti" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=siFopuPuCS
@inproceedings{ nottingham2023selective, title={Selective Perception: Learning Concise State Descriptions for Language Model Actors}, author={Kolby Nottingham and Yasaman Razeghi and Kyungmin Kim and JB Lanier and Pierre Baldi and Roy Fox and Sameer Singh}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=siFopuPuCS} }
It is increasingly common for large language models (LLMs) to be applied as actors in sequential decision making problems in embodied domains such as robotics and games, due to their general world knowledge and planning abilities. However, LLMs are not natively trained for embodied decision making problems, and expressing complex state spaces in text is non-trivial. Exhaustively describing high-dimensional states leads to prohibitive inference costs and impaired task performance due to distracting or irrelevant information. Previous LLM actors avoid the issue by relying on hand-engineered, task-specific protocols to determine which features to communicate about a state and which to leave out. In this work, we propose BLINDER (Brief Language INputs for DEcision-making Responses), a method for learning to select concise and helpful sets of state features for LLM actors. BLINDER learns a value function for task-conditioned state descriptions that approximates the likelihood that a state description will result in optimal actions. We evaluate BLINDER on the challenging video game NetHack and a real-world robotic manipulation task. We find that we are able to reduce the length of state descriptions by 87% and 99% on NetHack and robotic manipulation tasks respectively. BLINDER also improves task success rates by 158% and 54% on those same tasks and generalizes to LLM actors of various size and quality.
Selective Perception: Learning Concise State Descriptions for Language Model Actors
[ "Kolby Nottingham", "Yasaman Razeghi", "Kyungmin Kim", "JB Lanier", "Pierre Baldi", "Roy Fox", "Sameer Singh" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=sYFFyAILy7
@inproceedings{ ma2023laser, title={{LASER}: {LLM} Agent with State-Space Exploration for Web Navigation}, author={Kaixin Ma and Hongming Zhang and Hongwei Wang and Xiaoman Pan and Dong Yu}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=sYFFyAILy7} }
Large language models (LLMs) have been successfully adapted for interactive decision-making tasks like web navigation. While achieving decent performance, previous methods implicitly assume a forward-only execution mode for the model, where they only provide oracle trajectories as in-context examples to teach the model how to reason in the interactive environment. Consequently, the model could not handle more challenging scenarios not covered in the in-context examples, e.g., mistakes, leading to sub-optimal performance. To address this issue, we propose to model the interactive task as state space exploration, where the LLM agent transitions among a pre-defined set of states by performing actions to complete the task. This formulation enables flexible backtracking, allowing the model to easily recover from errors. We evaluate our proposed LLM Agent with State-Space ExploRation (LASER) on the WebShop task. Experimental results show that our LASER agent significantly outperforms previous methods and closes the gap with human performance on the web navigation task.
LASER: LLM Agent with State-Space Exploration for Web Navigation
[ "Kaixin Ma", "Hongming Zhang", "Hongwei Wang", "Xiaoman Pan", "Dong Yu" ]
Workshop/FMDM
2309.08172
[ "https://github.com/mayer123/laser" ]
https://huggingface.co/papers/2309.08172
2
11
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=s87s0HHHOA
@inproceedings{ zhou2023free, title={Free from Bellman Completeness: Trajectory Stitching via Model-based Return-conditioned Supervised Learning}, author={Zhaoyi Zhou and Chuning Zhu and Runlong Zhou and Qiwen Cui and Abhishek Gupta and Simon Du}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=s87s0HHHOA} }
Off-policy dynamic programming (DP) techniques such as $Q$-learning have proven to be important in sequential decision-making problems. In the presence of function approximation, however, these techniques often diverge due to the absence of Bellman-completeness in the function classes considered, a crucial condition for the success of DP-based methods. In this paper, we show how off-policy learning techniques based on return-conditioned supervised learning (RCSL) are able to circumvent these challenges of Bellman completeness, converging under significantly more relaxed assumptions inherited from supervised learning. We prove there exists a natural environment in which, if one uses a two-layer multilayer perceptron as the function approximator, the layer width needs to grow *linearly* with the state space size to satisfy Bellman-completeness, while a constant layer width is enough for RCSL. These findings take a step towards explaining the superior empirical performance of RCSL methods compared to DP-based methods in environments with near-optimal datasets. Furthermore, in order to learn from sub-optimal datasets, we propose a simple framework called MBRCSL, granting RCSL methods the ability of dynamic programming to stitch together segments from distinct trajectories. MBRCSL leverages learned dynamics models and forward sampling to accomplish trajectory stitching while avoiding the need for Bellman completeness that plagues all dynamic programming algorithms. We provide both theoretical analysis and experimental evaluation to back these claims, outperforming state-of-the-art model-free and model-based offline RL algorithms across several simulated robotics problems.
Free from Bellman Completeness: Trajectory Stitching via Model-based Return-conditioned Supervised Learning
[ "Zhaoyi Zhou", "Chuning Zhu", "Runlong Zhou", "Qiwen Cui", "Abhishek Gupta", "Simon Shaolei Du" ]
Workshop/FMDM
2310.19308
[ "https://github.com/zhaoyizhou1123/mbrcsl" ]
https://huggingface.co/papers/2310.19308
0
0
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=rngOtn5p7t
@inproceedings{ chen2023towards, title={Towards End-to-End Embodied Decision Making with Multi-modal Large Language Model}, author={Liang Chen and Yichi Zhang and Shuhuai Ren and Haozhe Zhao and Zefan Cai and Yuchi Wang and Tianyu Liu and Baobao Chang}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=rngOtn5p7t} }
In this study, we explore the potential of Multimodal Large Language Models (MLLMs) in improving embodied decision-making processes for agents. While Large Language Models (LLMs) have been widely used due to their advanced reasoning skills and vast world knowledge, MLLMs like GPT4-Vision offer enhanced visual understanding and reasoning capabilities. We investigate whether state-of-the-art MLLMs can handle embodied decision-making in an end-to-end manner and whether collaborations between LLMs and MLLMs can enhance decision-making. To address these questions, we introduce a new benchmark called PCA-EVAL, which evaluates embodied decision-making from the perspectives of Perception, Cognition, and Action. Additionally, we propose HOLMES, a multi-agent cooperation framework that allows LLMs to leverage MLLMs and APIs to gather multimodal information for informed decision-making. We compare end-to-end embodied decision-making and HOLMES on our benchmark and find that the GPT4-Vision model demonstrates strong end-to-end embodied decision-making abilities, outperforming GPT4-HOLMES in terms of average decision accuracy (+3%). However, this performance is exclusive to the latest GPT4-Vision model, surpassing the open-source state-of-the-art MLLM by 26%. Our results indicate that powerful MLLMs like GPT4-Vision hold promise for decision-making in embodied agents, offering new avenues for MLLM research. Code and data are open at https://github.com/pkunlp-icler/PCA-EVAL/
Towards End-to-End Embodied Decision Making with Multi-modal Large Language Model: Explorations with GPT4-Vision and Beyond
[ "Liang Chen", "Yichi Zhang", "Shuhuai Ren", "Haozhe Zhao", "Zefan Cai", "Yuchi Wang", "Tianyu Liu", "Baobao Chang" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=pz3qgH8vE0
@inproceedings{ hong2023zeroshot, title={Zero-Shot Goal-Directed Dialogue via {RL} on Imagined Conversations}, author={Joey Hong and Sergey Levine and Anca Dragan}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=pz3qgH8vE0} }
Large language models (LLMs) have emerged as powerful and general solutions to many natural language tasks. However, many of the most important applications of language generation are interactive, where an agent has to talk to a person to reach a desired outcome. For example, a teacher might try to understand their student's current comprehension level to tailor their instruction accordingly, and a travel agent might ask questions of their customer to understand their preferences in order to recommend activities they might enjoy. LLMs trained with supervised fine-tuning or ``single-step'' RL, as with standard RLHF, might struggle with tasks that require such goal-directed behavior, since they are not trained to optimize for overall conversational outcomes after multiple turns of interaction. In this work, we explore a new method for adapting LLMs with RL for such goal-directed dialogue. Our key insight is that, though LLMs might not effectively solve goal-directed dialogue tasks out of the box, they can provide useful data for solving such tasks by simulating suboptimal but human-like behaviors. Given a textual description of a goal-directed dialogue task, we leverage LLMs to sample diverse synthetic rollouts of hypothetical in-domain human-human interactions. Our algorithm then utilizes this dataset with offline reinforcement learning to train an interactive conversational agent that can optimize goal-directed objectives over multiple turns. In effect, the LLM produces examples of possible interactions, and RL then processes these examples to learn to perform more optimal interactions. Empirically, we show that our proposed approach achieves state-of-the-art performance in various goal-directed dialogue tasks that include teaching and preference elicitation.
Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations
[ "Joey Hong", "Sergey Levine", "Anca Dragan" ]
Workshop/FMDM
2311.05584
[ "" ]
https://huggingface.co/papers/2311.05584
0
1
0
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=pxK9MWuFF8
@inproceedings{ boige2023pasta, title={{PASTA}: Pretrained Action-State Transformer Agents}, author={Raphael Boige and Yannis Flet-Berliac and Arthur Flajolet and Guillaume Richard and Thomas PIERROT}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=pxK9MWuFF8} }
Self-supervised learning has brought about a revolutionary paradigm shift in various computing domains, including NLP, vision, and biology. Recent approaches involve pre-training transformer models on vast amounts of unlabeled data, serving as a starting point for efficiently solving downstream tasks. In reinforcement learning, researchers have recently adapted these approaches, developing models pre-trained on expert trajectories. This advancement enables the models to tackle a broad spectrum of tasks, ranging from robotics to recommendation systems. However, existing methods mostly rely on intricate pre-training objectives tailored to specific downstream applications. This paper conducts a comprehensive investigation of models, referred to as pre-trained action-state transformer agents (PASTA). Our study covers a unified methodology and covers an extensive set of general downstream tasks including behavioral cloning, offline RL, sensor failure robustness, and dynamics change adaptation. Our objective is to systematically compare various design choices and offer valuable insights that will aid practitioners in developing robust models. Key highlights of our study include tokenization at the component level for actions and states, the use of fundamental pre-training objectives such as next token prediction or masked language modeling, simultaneous training of models across multiple domains, and the application of various fine-tuning strategies. In this study, the developed models contain fewer than 7 million parameters allowing a broad community to use these models and reproduce our experiments. We hope that this study will encourage further research into the use of transformers with first principle design choices to represent RL trajectories and contribute to robust policy learning.
PASTA: Pretrained Action-State Transformer Agents
[ "Raphael Boige", "Yannis Flet-Berliac", "Arthur Flajolet", "Guillaume Richard", "Thomas PIERROT" ]
Workshop/FMDM
2307.10936
[ "" ]
https://huggingface.co/papers/2307.10936
4
10
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=pI6ylnkPAD
@inproceedings{ zheng2023synapse, title={Synapse: Trajectory-as-Exemplar Prompting with Memory for Computer Control}, author={Longtao Zheng and Rundong Wang and Xinrun Wang and Bo An}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=pI6ylnkPAD} }
Building agents with large language models (LLMs) for computer control is a burgeoning research area, where the agent receives computer states and performs actions to complete complex tasks. Previous computer agents have demonstrated the benefits of in-context learning (ICL); however, their performance is hindered by several issues. First, the limited context length of LLMs and complex computer states restrict the number of exemplars, as a single webpage can consume the entire context. Second, the exemplars in current methods, such as high-level plans and multi-choice questions, cannot represent complete trajectories, leading to suboptimal performance in long-horizon tasks. Third, existing computer agents rely on task-specific exemplars and overlook the similarity among tasks, resulting in poor generalization to novel tasks. To address these challenges, we introduce Synapse, a computer agent featuring three key components: i) state abstraction, which filters out task-irrelevant information from raw states, allowing more exemplars within the limited context, ii) trajectory-as-exemplar prompting, which prompts the LLM with complete trajectories of the abstracted states and actions for improved multi-step decision-making, and iii) exemplar memory, which stores the embeddings of exemplars and retrieves them via similarity search for generalization to novel tasks. We evaluate Synapse on MiniWoB++, a standard task suite, and Mind2Web, a real-world website benchmark. In MiniWoB++, Synapse achieves a 99.2% average success rate (a 10% relative improvement) across 64 tasks using demonstrations from only 48 tasks. Notably, Synapse is the first ICL method to solve the book-flight task in MiniWoB++. Synapse also exhibits a 56% relative improvement in average step success rate over the previous state-of-the-art prompting scheme in Mind2Web.
Synapse: Trajectory-as-Exemplar Prompting with Memory for Computer Control
[ "Longtao Zheng", "Rundong Wang", "Xinrun Wang", "Bo An" ]
Workshop/FMDM
2306.07863
[ "https://github.com/ltzheng/synapse" ]
https://huggingface.co/papers/2306.07863
1
0
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=oBQVCTpKXW
@inproceedings{ zhang2023building, title={Building Cooperative Embodied Agents Modularly with Large Language Models}, author={Hongxin Zhang and Weihua Du and Jiaming Shan and Qinhong Zhou and Yilun Du and Joshua Tenenbaum and Tianmin Shu and Chuang Gan}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=oBQVCTpKXW} }
In this work, we address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments. While previous research either presupposes a cost-free communication channel or relies on a centralized controller with shared observations, we harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework that integrates perception, memory, and execution, thus building a Cooperative Embodied Language Agent, CoELA, that can plan, communicate, and cooperate with others to accomplish long-horizon tasks efficiently. Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication. Though current Open LMs like LLAMA-2 still underperform, we fine-tune a CoLLAMA with data collected with our agents and show how it can achieve promising performance. We also conducted a user study for human-agent interaction and discovered that CoELA communicating in natural language can earn more trust and cooperate more effectively with humans. Our research underscores the potential of LLMs for future research in multi-agent cooperation. Videos can be found on the project website https://llm-co.github.io/CoELA/.
Building Cooperative Embodied Agents Modularly with Large Language Models
[ "Hongxin Zhang", "Weihua Du", "Jiaming Shan", "Qinhong Zhou", "Yilun Du", "Joshua Tenenbaum", "Tianmin Shu", "Chuang Gan" ]
Workshop/FMDM
2307.02485
[ "" ]
https://huggingface.co/papers/2307.02485
3
11
0
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=nQp1FURyua
@inproceedings{ lubana2023fomo, title={FoMo rewards: Casting foundation models as generic reward functions}, author={Ekdeep Singh Lubana and Pim De Haan and Taco Cohen and Johann Brehmer}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=nQp1FURyua} }
We explore the viability of casting foundation models as generic reward functions for reinforcement learning. To this end, we propose a simple pipeline that interfaces an off-the-shelf vision model with a large language model. Specifically, given a trajectory of observations, we infer the likelihood of an instruction describing the task that the user wants an agent to perform. We show that this generic likelihood function exhibits the characteristics ideally expected from a reward function: it associates high values with the desired behaviour and lower values for several similar, but incorrect policies. Overall, our work opens the possibility of designing open-ended agents for interactive tasks via foundation models.
FoMo rewards: Can we cast foundation models as reward functions?
[ "Ekdeep Singh Lubana", "Johann Brehmer", "Pim De Haan", "Taco Cohen" ]
Workshop/FMDM
2312.03881
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=lw5GlytIY5
@inproceedings{ cui2023a, title={A Universal World Model Learned from Large Scale and Diverse Videos}, author={Hanchen Cui and Yang Gao}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=lw5GlytIY5} }
World models play a crucial role in model-based reinforcement learning (RL) by providing predictive representations of an agent in an environment and enabling the agent to reason about the future and make more informed decisions. However, there are still two main problems limiting the applications of world models. First, current methods typically train the world models using only massive domain-specific data, making it challenging to generalize to unseen scenarios or adapt to changes in the environments. Second, it is difficult to define the actions when world models are trained on in-the-wild videos. In this work, we tackle these two problems by learning a general-purpose world model from a diverse and large-scale real-world video dataset with extracted latent actions. Specifically, our approach leverages a pre-trained vision encoder to project the images of two adjacent frames into states; then, it extracts the latent actions into a low-dimensional space based on vector quantization; finally, a dynamics function is learned using the latent actions. Results show that the proposed generic world model can successfully extract latent actions of arbitrary neighboring frames when tested on the in-the-wild video dataset. Furthermore, fine-tuning on only a small amount of in-domain data can significantly improve the accuracy of the generic world model when adapting to unseen environments.
A Universal World Model Learned from Large Scale and Diverse Videos
[ "Hanchen Cui", "Yang Gao" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ltUrSryS0K
@inproceedings{ light2023from, title={From Text to Tactic: Evaluating {LLM}s Playing the Game of Avalon}, author={Jonathan Light and Min Cai and Sheng Shen and Ziniu Hu}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=ltUrSryS0K} }
In this paper, we explore the potential of Large Language Model (LLM) Agents in playing the strategic social deduction game, \textbf{Resistance Avalon}. Players in Avalon are challenged not only to make informed decisions based on dynamically evolving game phases, but also to engage in discussions where they must deceive, deduce, and negotiate with other players. These characteristics make Avalon a compelling test-bed to study the decision-making and language-processing capabilities of LLM Agents. To facilitate research in this line, we introduce \textsc{AvalonBench} - a comprehensive game environment tailored for evaluating multi-agent LLM Agents. This benchmark incorporates: (1) a game environment for Avalon, (2) rule-based bots as baseline opponents, and (3) ReAct-style LLM agents with tailored prompts for each role. Notably, our evaluations based on \textsc{AvalonBench} highlight a clear capability gap. For instance, models like ChatGPT playing the good role achieve a win rate of 22.2\% against rule-based bots playing evil, while a rule-based bot playing the good role achieves a 38.2\% win rate in the same setting. We envision \textsc{AvalonBench} could be a good test-bed for developing more advanced LLMs (with self-play) and agent frameworks that can effectively model the layered complexities of such game environments.
AvalonBench: Evaluating LLMs Playing the Game of Avalon
[ "Jonathan Light", "Min Cai", "Sheng Shen", "Ziniu Hu" ]
Workshop/FMDM
2310.05036
[ "https://github.com/jonathanmli/avalon-llm" ]
https://huggingface.co/papers/2310.05036
1
1
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=lodGl8nzyW
@inproceedings{ ni2023when, title={When Do Transformers Shine in {RL}? Decoupling Memory from Credit Assignment}, author={Tianwei Ni and Michel Ma and Benjamin Eysenbach and Pierre-Luc Bacon}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=lodGl8nzyW} }
Reinforcement learning (RL) algorithms face two distinct challenges: learning effective representations of past and present observations, and determining how actions influence future returns. Both challenges involve modeling long-term dependencies. The Transformer architecture has been very successful to solve problems that involve long-term dependencies, including in the RL domain. However, the underlying reason for the strong performance of Transformer-based RL methods remains unclear: is it because they learn effective memory, or because they perform effective credit assignment? After introducing formal definitions of memory length and credit assignment length, we design simple configurable tasks to measure these distinct quantities. Our empirical results reveal that Transformers can enhance the memory capability of RL algorithms, scaling up to tasks that require memorizing observations $1500$ steps ago. However, Transformers do not improve long-term credit assignment. In summary, our results provide an explanation for the success of Transformers in RL, while also highlighting an important area for future research and benchmark design.
When Do Transformers Shine in RL? Decoupling Memory from Credit Assignment
[ "Tianwei Ni", "Michel Ma", "Benjamin Eysenbach", "Pierre-Luc Bacon" ]
Workshop/FMDM
2307.03864
[ "https://github.com/twni2016/pomdp-baselines" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=khL3G4iwd0
@inproceedings{ ramji2023selfselect, title={Self-Select: Optimizing Instruction Selection for Large Language Models}, author={Keshav Ramji and Alexander Kyimpopkin}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=khL3G4iwd0} }
The same question can often be presented in different ways, depending on the audience and the intent with which it is being posed. To determine whether large language models (LLMs) demonstrate preferences for one phrasing over another regardless of semantic content, we introduce _Self-Select_, a method for selection of a preferred instruction template, and generation of high-quality synthetic data samples. This algorithm makes use of a _meta-prompt_ to decide on an instruction template, given a task and candidate templates, and then generates $n$ new samples using the chosen template. We evaluate _Self-Select_ on numerical reasoning and sentiment classification tasks, using a variety of instruction-tuned and base models, providing insights into their abilities and biases in performing instruction selection. We find that permuting the instruction template ordering in the prompt leads to vastly different choice distributions, suggesting that decisions may be influenced more by inductive biases than by semantic understanding, even after instruction tuning.
Self-Select: Optimizing Instruction Selection for Large Language Models
[ "Keshav Ramji", "Alexander Kyimpopkin" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=kXlTY0BmK3
@inproceedings{ huang2023benchmarking, title={Benchmarking Large Language Models as {AI} Research Agents}, author={Qian Huang and Jian Vora and Percy Liang and Jure Leskovec}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=kXlTY0BmK3} }
Scientific experimentation involves an iterative process of creating hypotheses, designing experiments, running experiments, and analyzing the results. Can we build AI research agents to perform these long-horizon tasks? To take a step towards building and evaluating research agents on such open-ended decision-making tasks, we focus on the problem of machine learning engineering: given a task description and a dataset, build a high-performing model. In this paper, we propose MLAgentBench, a suite of ML tasks for benchmarking AI research agents. Agents can perform actions like reading/writing files, executing code, and inspecting outputs. With these actions, agents could run experiments, analyze the results, and modify the code of entire machine learning pipelines, such as data processing, architecture, training processes, etc. The benchmark then automatically evaluates the agent’s performance objectively over various metrics related to performance and efficiency. We also design an LLM-based research agent to automatically perform experimentation loops in such an environment. Empirically, we find that a GPT-4-based research agent can feasibly build compelling ML models over many tasks in MLAgentBench, displaying highly interpretable plans and actions. However, the success rates vary considerably; they span from almost 90% on well-established older datasets to as low as 10% on recent Kaggle Challenges – unavailable during the LLM model’s pretraining – and even 0% on newer research challenges like BabyLM. Finally, we identify several key challenges for LLM-based research agents such as long-term planning and hallucination. Our code is released at https://anonymous.4open.science/r/MLAgentBench/.
Benchmarking Large Language Models as AI Research Agents
[ "Qian Huang", "Jian Vora", "Percy Liang", "Jure Leskovec" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=jt3il4fC5B
@inproceedings{ assouel2023the, title={The Unsolved Challenges of {LLM}s in Open-Ended Web Tasks: A Case Study}, author={Rim Assouel and Tom Marty and Massimo Caccia and Issam Laradji and Alexandre Drouin and Sai Rajeswar and Hector Palacios and Quentin Cappart and David Vazquez and Nicolas Chapados and Maxime Gasse and Alexandre Lacoste}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=jt3il4fC5B} }
In this work, we investigate the challenges associated with developing goal-driven AI agents capable of performing novel tasks in a web environment using zero-shot learning. Our primary focus is on harnessing the capabilities of large language models (LLMs) as generalist web agents interacting with HTML-based user interfaces (UIs). We evaluate the MiniWoB benchmark and show that it is a suitable yet challenging platform for assessing an agent's ability to comprehend and solve tasks without prior human demonstrations. Our main contribution encompasses a set of extensive experiments where we compare and contrast various agent design considerations, such as action space, observation space, and the choice of LLM, with the aim of shedding light on the bottlenecks and limitations of LLM-based zero-shot learning in this domain, in order to foster research endeavours in this area. In our empirical analysis, we find that: (1) the effectiveness of the different action spaces is notably dependent on the specific LLM used; (2) open-source LLMs hold their own as competitive generalist web agents when compared to their proprietary counterparts; and (3) using an accessibility-based representation for web pages, despite resulting in some performance loss, emerges as a cost-effective strategy, particularly as web page sizes increase.
The Unsolved Challenges of LLMs as Generalist Web Agents: A Case Study
[ "Rim Assouel", "Tom Marty", "Massimo Caccia", "Issam H. Laradji", "Alexandre Drouin", "Sai Rajeswar", "Hector Palacios", "Quentin Cappart", "David Vazquez", "Nicolas Chapados", "Maxime Gasse", "Alexandre Lacoste" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=jnzjnYTSlO
@inproceedings{ liu2023tail, title={{TAIL}: Task-specific Adapters for Imitation Learning with Large Pretrained Models}, author={Zuxin Liu and Jesse Zhang and Kavosh Asadi and Yao Liu and Ding Zhao and Shoham Sabach and Rasool Fakoor}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=jnzjnYTSlO} }
The full potential of large pretrained models remains largely untapped in control domains like robotics. This is mainly because of the scarcity of data and the computational challenges associated with training or fine-tuning these large models for such applications. Prior work mainly emphasizes effective \emph{pretraining} of large models for decision-making, with little exploration into how to perform data-efficient continual \emph{adaptation} of these models for new tasks. Recognizing these constraints, we introduce TAIL (Task-specific Adapters for Imitation Learning), a framework for efficient adaptation to new control tasks. Inspired by recent advancements in parameter-efficient fine-tuning in language domains, we explore efficient fine-tuning techniques---e.g., Bottleneck Adapters, P-Tuning, and Low-Rank Adaptation (LoRA)---in TAIL to adapt large pretrained models for new tasks with limited demonstration data. Our extensive experiments comparing prevalent parameter-efficient fine-tuning techniques and adaptation baselines suggest that TAIL with LoRA can achieve the best post-adaptation performance with only 1% of the trainable parameters of full fine-tuning, while avoiding catastrophic forgetting and preserving adaptation plasticity in continual learning settings.
TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models
[ "Zuxin Liu", "Jesse Zhang", "Kavosh Asadi", "Yao Liu", "Ding Zhao", "Shoham Sabach", "Rasool Fakoor" ]
Workshop/FMDM
2310.05905
[ "" ]
https://huggingface.co/papers/2310.05905
2
2
0
7
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=jbLM1yvxaL
@inproceedings{ jin2023mmtomqa, title={{MMT}oM-{QA}: Multimodal Theory of Mind Question Answering}, author={Chuanyang Jin and Yutong Wu and Jing Cao and Jiannan Xiang and Yen-Ling Kuo and Zhiting Hu and Tomer Ullman and Antonio Torralba and Joshua Tenenbaum and Tianmin Shu}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=jbLM1yvxaL} }
Theory of Mind (ToM), the cognitive ability to understand people's minds, is an essential ingredient for developing machines with human-level social intelligence. Recent machine learning models, particularly large language models, seem to show some aspects of ToM understanding. However, existing ToM benchmarks use unimodal datasets -- either video or text. Human ToM, on the other hand, is more than video or text understanding. People can flexibly reason about another person's mind based on conceptual representations (e.g., goals, beliefs, plans) extracted from any available data, which can include visual cues, linguistic narratives, or both. To address this, we introduce a multimodal Theory of Mind question answering (MMToM-QA) benchmark. MMToM-QA comprehensively evaluates machine ToM both on multimodal data and on different kinds of unimodal data about a person's activity in a household environment. To engineer multimodal ToM capacity, we propose a novel method, BIP-ALM (Bayesian Inverse Planning Accelerated by Language Models). BIP-ALM extracts unified representations from multimodal data and utilizes language models for scalable Bayesian inverse planning. We conducted a systematic comparison of human performance, BIP-ALM, and state-of-the-art models, including GPT-4. The experiments demonstrate that large language models and large multimodal models still lack robust ToM capacity. BIP-ALM, on the other hand, shows promising results, by leveraging the power of both model-based mental inference and language models.
MMToM-QA: Multimodal Theory of Mind Question Answering
[ "Chuanyang Jin", "Yutong Wu", "Jing Cao", "Jiannan Xiang", "Yen-Ling Kuo", "Zhiting Hu", "Tomer Ullman", "Antonio Torralba", "Joshua B. Tenenbaum", "Tianmin Shu" ]
Workshop/FMDM
2401.08743
[ "https://github.com/chuanyangjin/MMToM-QA" ]
https://huggingface.co/papers/2401.08743
1
1
0
10
[]
[ "Chuanyang-Jin/MMToM-QA" ]
[ "LLM360/de-arena", "tsteffek/de-arena" ]
[]
[ "Chuanyang-Jin/MMToM-QA" ]
[ "LLM360/de-arena", "tsteffek/de-arena" ]
1
poster
null
https://openreview.net/forum?id=iq04KvxmJA
@inproceedings{ ajay2023compositional, title={Compositional Foundation Models for Hierarchical Planning}, author={Anurag Ajay and Seungwook Han and Yilun Du and Shuang Li and Abhi Gupta and Tommi Jaakkola and Joshua Tenenbaum and Leslie Kaelbling and Akash Srivastava and Pulkit Agrawal}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=iq04KvxmJA} }
To make effective decisions in novel environments with long-horizon goals, it is crucial to engage in hierarchical reasoning across spatial and temporal scales. This entails planning abstract subgoal sequences, visually reasoning about the underlying plans, and executing actions in accordance with the devised plan through visual-motor control. We propose *Compositional Foundation Models for Hierarchical Planning* (HiP), a foundation model which leverages multiple *expert* foundation models, trained *individually* on language, vision and action data, jointly to solve long-horizon tasks. We use a large language model to construct symbolic plans that are grounded in the environment through a large video diffusion model. Generated video plans are then grounded to visual-motor control, through an inverse dynamics model that infers actions from generated videos. To enable effective reasoning within this hierarchy, we enforce consistency between the models via *iterative refinement*. We illustrate the efficacy and adaptability of our approach in three different long-horizon table-top manipulation tasks.
Compositional Foundation Models for Hierarchical Planning
[ "Anurag Ajay", "Seungwook Han", "Yilun Du", "Shuang Li", "Abhi Gupta", "Tommi Jaakkola", "Joshua Tenenbaum", "Leslie Kaelbling", "Akash Srivastava", "Pulkit Agrawal" ]
Workshop/FMDM
2309.08587
[ "" ]
https://huggingface.co/papers/2309.08587
3
9
1
10
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=ilSesSBQDh
@inproceedings{ yang2023multimodal, title={Multimodal Pretrained Models for Verifiable Sequential Decision-Making: Planning, Grounding, and Perception}, author={Yunhao Yang and Cyrus Neary and ufuk topcu}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=ilSesSBQDh} }
Recently developed multimodal pretrained models can encode rich world knowledge expressed in multiple modalities, such as text and images. However, the outputs of these models cannot be integrated into algorithms to solve sequential decision-making tasks. We develop an algorithm that utilizes the knowledge from the pretrained models to construct and verify controllers for sequential decision-making tasks and to ground these controllers to task environments through visual observations. In particular, the algorithm constructs an automaton-based controller that encodes the task-relevant knowledge extracted from the pretrained model. It then verifies whether the knowledge encoded in the controller is consistent with other independently available knowledge, which may include abstract information on the environment or user-provided specifications. If this verification step discovers any inconsistency, the algorithm automatically refines the controller to resolve the inconsistency. Next, the algorithm leverages the vision and language capabilities of pretrained models to ground the controller to the task environment. We demonstrate the algorithm's ability to construct, verify, and ground automaton-based controllers through a suite of real-world tasks.
Multimodal Pretrained Models for Verifiable Sequential Decision-Making: Planning, Grounding, and Perception
[ "Yunhao Yang", "Cyrus Neary", "ufuk topcu" ]
Workshop/FMDM
2308.05295
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
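The record above describes constructing an automaton-based controller grounded through perception. The toy sketch below is only a generic finite-state controller, not the paper's construction or verification procedure; every state, symbol, and action name is made up for illustration.

```python
class AutomatonController:
    """Minimal finite-state controller: transitions fire on symbols produced by
    a perception module; each state is labeled with the action to execute."""
    def __init__(self, transitions, actions, initial):
        self.transitions = transitions    # {(state, symbol): next_state}
        self.actions = actions            # {state: action}
        self.state = initial

    def step(self, symbol):
        self.state = self.transitions.get((self.state, symbol), self.state)
        return self.actions[self.state]

# Toy example: search for a fridge, approach it, then open it.
ctrl = AutomatonController(
    transitions={("search", "fridge_visible"): "approach",
                 ("approach", "at_fridge"): "open"},
    actions={"search": "explore", "approach": "navigate_to_fridge", "open": "open_fridge"},
    initial="search",
)
print(ctrl.step("fridge_visible"))   # navigate_to_fridge
```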
null
https://openreview.net/forum?id=hQERBmmlYm
@inproceedings{ hu2023rfpolicy, title={{RF}-{POLICY}: Rectified Flows are Computation-Adaptive Decision Makers}, author={Xixi Hu and Bo Liu and Xingchao Liu and qiang liu}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=hQERBmmlYm} }
Diffusion-based imitation learning improves Behavioral Cloning (BC) on multi-modal decision-making but comes at the cost of significantly slower inference due to the recursion in the diffusion process. However, in real-world scenarios, states that require multi-modal decision-making are rare, and the heavy computational cost of diffusion models is not necessary for most cases. This inspires us to design efficient policy generators that can wisely allocate computation for different contexts. To address this challenge, we propose RF-POLICY (Rectified Flow-Policy), an imitation learning algorithm based on Rectified Flow, a recent advancement in flow-based generative modeling~\citep{liu2022flow}. RF-POLICY adopts probability flow ordinary differential equations (ODEs) for diverse policy generation, with the learning principle of following straight trajectories as much as possible. We uncover and leverage a surprisingly intriguing advantage of these flow-based models over previous diffusion models: their training objective indicates the uncertainty of a certain state, and when the state is uni-modal, they automatically reduce to one-step generators since the probability flows admit straight lines. Therefore, RF-POLICY is naturally an adaptive decision maker, offering rapid inference without sacrificing diversity. Our comprehensive empirical evaluation shows that RF-POLICY, to the best of our knowledge, is the first algorithm to achieve high performance across all dimensions, including success rate, behavioral diversity, and inference speed.
RF-POLICY: Rectified Flows are Computation-Adaptive Decision Makers
[ "Xixi Hu", "Bo Liu", "Xingchao Liu", "qiang liu" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
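For context on the Rectified Flow objective that RF-POLICY (record above) builds on, here is a minimal sketch of the straight-path regression loss: sample a point on the line between a base sample and an expert action and regress the velocity field onto their difference. The shapes, the toy linear velocity model, and the function names are assumptions for illustration, not the paper's code.

```python
# Minimal sketch of the rectified-flow training objective (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def rectified_flow_loss(v_fn, x0, x1):
    """x0: base/noise samples, x1: expert actions; both of shape (batch, dim)."""
    batch = x0.shape[0]
    t = rng.uniform(size=(batch, 1))          # random time in [0, 1]
    xt = t * x1 + (1.0 - t) * x0              # point on the straight path
    target = x1 - x0                          # constant velocity of that path
    pred = v_fn(xt, t)
    return np.mean(np.sum((pred - target) ** 2, axis=-1))

# Toy velocity "network": a fixed linear map, just to make the sketch runnable.
W = rng.normal(size=(3, 2))
v_fn = lambda x, t: np.concatenate([x, t], axis=-1) @ W

x0 = rng.normal(size=(16, 2))                 # base noise
x1 = rng.normal(size=(16, 2)) + 2.0           # stand-in for expert actions
print(rectified_flow_loss(v_fn, x0, x1))
```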
null
https://openreview.net/forum?id=ghGXVeXqEt
@inproceedings{ xu2023creative, title={Creative Robot Tool Use with Large Language Models}, author={Mengdi Xu and Wenhao Yu and Peide Huang and Shiqi Liu and Xilun Zhang and Yaru Niu and Tingnan Zhang and Fei Xia and Jie Tan and Ding Zhao}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=ghGXVeXqEt} }
Tool use is a hallmark of advanced intelligence, exemplified in both animal behavior and robotic capabilities. This paper investigates the feasibility of imbuing robots with the ability to creatively use tools in tasks that involve implicit physical constraints and long-term planning. Leveraging Large Language Models (LLMs), we develop RoboTool, a system that accepts natural language instructions and outputs executable code for controlling robots in both simulated and real-world environments. RoboTool incorporates four pivotal components: (i) an “Analyzer” that interprets natural language to discern key task-related concepts, (ii) a “Planner” that generates comprehensive strategies based on the language input and key concepts, (iii) a “Calculator” that computes parameters for each skill, and (iv) a “Coder” that translates these plans into executable Python code. Our results show that RoboTool can not only comprehend implicit physical constraints and environmental factors but also demonstrate creative tool use. Unlike traditional Task and Motion Planning (TAMP) methods that rely on explicit optimization and are confined to formal logic, our LLM-based system offers a more flexible, efficient, and user-friendly solution for complex robotics tasks. Through extensive experiments, we validate that RoboTool is proficient in handling tasks that would otherwise be infeasible without the creative use of tools, thereby expanding the capabilities of robotic systems.
Creative Robot Tool Use with Large Language Models
[ "Mengdi Xu", "Wenhao Yu", "Peide Huang", "Shiqi Liu", "Xilun Zhang", "Yaru Niu", "Tingnan Zhang", "Fei Xia", "Jie Tan", "Ding Zhao" ]
Workshop/FMDM
2310.13065
[ "" ]
https://huggingface.co/papers/2310.13065
4
8
1
10
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=gGQfkyb0KL
@inproceedings{ valmeekam2023investigating, title={Investigating the Effectiveness of Self-critiquing in {LLM}s solving Planning Tasks}, author={Karthik Valmeekam and Matthew Marquez and Subbarao Kambhampati}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=gGQfkyb0KL} }
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that LLMs, when used as verifiers, produce a notable number of false positives, compromising system reliability. Additionally, self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers. The nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs as verifiers in an iterative, self-critiquing framework for planning tasks.
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
[ "Karthik Valmeekam", "Matthew Marquez", "Subbarao Kambhampati" ]
Workshop/FMDM
2310.08118
[ "" ]
https://huggingface.co/papers/2310.08118
1
1
0
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=fkz5VfJCwo
@inproceedings{ laradji2023capture, title={Capture the Flag: Uncovering Data Insights with Large Language Models}, author={Issam Laradji and Perouz Taslakian and Sai Rajeswar and Valentina Zantedeschi and Alexandre Lacoste and Nicolas Chapados and David Vazquez and Christopher Pal and Alexandre Drouin}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=fkz5VfJCwo} }
The extraction of a small number of relevant insights from vast amounts of data is a crucial component of data-driven decision-making. However, accomplishing this task requires considerable technical skills, domain expertise, and human labor. This study explores the potential of using Large Language Models (LLMs) to automate the discovery of insights in data, leveraging recent advances in reasoning and code generation techniques. We propose a new evaluation methodology based on a ``capture the flag'' principle, measuring the ability of such models to recognize meaningful and pertinent information (flags) in a dataset. We further propose two proof-of-concept agents, with different inner workings, and compare their ability to capture such flags in a real-world sales dataset. While the work reported here is preliminary, our results are sufficiently interesting to mandate future exploration by the community.
Capture the Flag: Uncovering Data Insights with Large Language Models
[ "Issam H. Laradji", "Perouz Taslakian", "Sai Rajeswar", "Valentina Zantedeschi", "Alexandre Lacoste", "Nicolas Chapados", "David Vazquez", "Christopher Pal", "Alexandre Drouin" ]
Workshop/FMDM
2312.13876
[ "" ]
https://huggingface.co/papers/2312.13876
3
1
1
9
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=fksglLN3ew
@inproceedings{ messaoud2023sac, title={\$S{\textasciicircum}2{AC}\$: {ENERGY}-{BASED} {REINFORCEMENT} {LEARNING} {WITH} {STEIN} {SOFT} {ACTOR} {CRITIC}}, author={Safa Messaoud and Billel Mokeddem and Zhenghai Xue and Bo An and Haipeng Chen and Sanjay Chawla}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=fksglLN3ew} }
Learning expressive stochastic policies instead of deterministic ones has been proposed to achieve better stability, sample complexity and robustness. Notably, in Maximum Entropy Reinforcement Learning (MaxEnt RL), the policy is modeled as an expressive Energy-Based Model (EBM) over the Q-values. However, this formulation requires the estimation of the entropy of such EBMs, which is an open problem. To address this, previous MaxEnt RL methods either implicitly estimate the entropy, resulting in high computational complexity and variance (SQL), or follow a variational inference procedure that fits simplified actor distributions (e.g., Gaussian) for tractability (SAC). We propose Stein Soft Actor-Critic ($S^2AC$), a MaxEnt RL algorithm that learns expressive policies without compromising efficiency. Specifically, $S^2AC$ uses parameterized Stein Variational Gradient Descent (SVGD) as the underlying policy. We derive a closed-form expression of the entropy of such policies. Our formula is computationally efficient and only depends on first-order derivatives and vector products. Empirical results show that $S^2AC$ yields more optimal solutions to the MaxEnt objective than SQL and SAC in the multi-goal environment, and outperforms SAC and SQL on the MuJoCo benchmark. Our code is available at: \url{https://anonymous.4open.science/r/Stein-Soft-Actor-Critic/}
S^2AC: ENERGY-BASED REINFORCEMENT LEARNING WITH STEIN SOFT ACTOR CRITIC
[ "Safa Messaoud", "Billel Mokeddem", "Zhenghai Xue", "Linsey Pang", "Bo An", "Haipeng Chen", "Sanjay Chawla" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
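The closed-form entropy mentioned in the $S^2AC$ record above rests on tracking how an invertible particle update transforms a base density. The identity below is only the generic change-of-variables fact such a derivation would start from; the notation is mine, and the paper's actual formula (which avoids the full determinant and uses only first-order derivatives and vector products) is not reproduced here.

```latex
% Change of variables for an invertible SVGD-style update x_{l+1} = x_l + \epsilon\,\phi(x_l):
\log q_{l+1}(x_{l+1}) \;=\; \log q_{l}(x_{l})
\;-\; \log\bigl|\det\bigl(I + \epsilon\,\nabla_{x_l}\phi(x_l)\bigr)\bigr|,
\qquad
\mathcal{H}(q_{L}) \;=\; -\,\mathbb{E}_{x_0\sim q_0}\bigl[\log q_{L}(x_{L})\bigr].
```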
null
https://openreview.net/forum?id=fXugVDtCQO
@inproceedings{ liu2023ring, title={Ring Attention with Blockwise Transformers for Near-Infinite Context}, author={Hao Liu and Matei Zaharia and Pieter Abbeel}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=fXugVDtCQO} }
Transformers have emerged as the architecture of choice for many state-of-the-art AI models, showcasing exceptional performance across a wide range of AI applications. However, the memory demands imposed by Transformers limit their ability to handle long sequences, thereby creating challenges for tasks involving extended sequences or long-term dependencies. We present a distinct approach, Ring Attention, which leverages blockwise computation of self-attention to distribute long sequences across multiple devices while concurrently overlapping the communication of key-value blocks with the computation of blockwise attention. By processing longer input sequences while maintaining memory efficiency, Ring Attention enables training and inference of sequences that are device count times longer than those of prior memory-efficient Transformers, effectively eliminating the memory constraints imposed by individual devices. Extensive experiments on language modeling tasks demonstrate the effectiveness of Ring Attention in allowing large sequence input size and improving performance.
Ring Attention with Blockwise Transformers for Near-Infinite Context
[ "Hao Liu", "Matei Zaharia", "Pieter Abbeel" ]
Workshop/FMDM
2310.01889
[ "https://github.com/forhaoliu/ringattention" ]
https://huggingface.co/papers/2310.01889
0
10
3
3
[]
[]
[]
[]
[]
[]
1
poster
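Ring Attention (record above) distributes key/value blocks across devices; the single-device sketch below only illustrates the blockwise, numerically stable accumulation (running max, normalizer, and weighted sum) that makes such streaming possible. It is a simplification under my own assumptions: no heads, no causal mask, no device communication.

```python
import numpy as np

def blockwise_attention(q, k, v, block=128):
    """Exact softmax attention computed one key/value block at a time,
    keeping only running statistics instead of the full attention matrix."""
    n, d = q.shape
    m = np.full((n, 1), -np.inf)     # running max of logits
    denom = np.zeros((n, 1))         # running softmax normalizer
    acc = np.zeros((n, d))           # running unnormalized output
    scale = 1.0 / np.sqrt(d)
    for start in range(0, k.shape[0], block):
        kb, vb = k[start:start + block], v[start:start + block]
        s = (q @ kb.T) * scale
        m_new = np.maximum(m, s.max(axis=-1, keepdims=True))
        corr = np.exp(m - m_new)                 # rescale old statistics
        p = np.exp(s - m_new)
        denom = denom * corr + p.sum(axis=-1, keepdims=True)
        acc = acc * corr + p @ vb
        m = m_new
    return acc / denom

# Sanity check against the naive computation.
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(4, 8)), rng.normal(size=(300, 8)), rng.normal(size=(300, 8))
logits = (q @ k.T) / np.sqrt(8)
weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
assert np.allclose(blockwise_attention(q, k, v, block=64), weights @ v)
```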
null
https://openreview.net/forum?id=egKxRC5gf8
@inproceedings{ roberts2023gptgeo, title={{GPT}4{GEO}: How a Language Model Sees the World{\textquoteright}s Geography}, author={Jonathan Roberts and Timo L{\"u}ddecke and Sowmen Das and Kai Han and Samuel Albanie}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=egKxRC5gf8} }
Large language models (LLMs) have shown remarkable capabilities across a broad range of tasks involving question answering and the generation of coherent text and code. Comprehensively understanding the strengths and weaknesses of LLMs is beneficial for safety, downstream applications and improving performance. In this work, we investigate the degree to which GPT-4 has acquired factual geographic knowledge and is capable of using this knowledge for interpretative reasoning, which is especially important for applications that involve geographic data, such as geospatial analysis, supply chain management, and disaster response. To this end, we design and conduct a series of diverse experiments, starting from factual tasks such as location, distance and elevation estimation to more complex questions such as generating country outlines and travel networks, route finding under constraints and supply chain analysis. We provide a broad characterisation of what GPT-4 knows about the world, highlighting promising and potentially surprising capabilities but also limitations.
GPT4GEO: How a Language Model Sees the World’s Geography
[ "Jonathan Roberts", "Timo Lüddecke", "Sowmen Das", "Kai Han", "Samuel Albanie" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
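One way to obtain ground truth for the distance-estimation probes described in the GPT4GEO record above is the great-circle (haversine) distance. The snippet below is a generic helper under my own naming, not part of the paper's evaluation code.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in km; could serve as ground truth when checking
    a model's pairwise city-distance estimates."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

print(round(haversine_km(51.5074, -0.1278, 48.8566, 2.3522)))  # London–Paris ≈ 344 km
```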
null
https://openreview.net/forum?id=eLlzf6lkGP
@inproceedings{ xu2023visionandlanguage, title={Vision-and-Language Navigation in Real World using Foundation Models}, author={Chengguang Xu and Hieu Trung Nguyen and Christopher Amato and Lawson Wong}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=eLlzf6lkGP} }
When mobile robots become ubiquitous, they occasionally encounter unseen environments. Enhancing mobile robots with the ability to follow language instructions will improve decision-making efficiency in previously unseen scenarios. However, state-of-the-art (SOTA) vision-and-language navigation (VLN) methods are mainly evaluated in simulation, neglecting the complex real world. Directly transferring SOTA navigation policies learned in simulation to the real world is challenging due to the visual domain gap and the absence of prior knowledge about unseen environments. In this work, we propose a novel navigation framework to address the VLN task in the real world, utilizing powerful foundation models. Specifically, the proposed framework includes four key components: (1) a large language model (LLM)-based instruction parser that converts a language instruction into a sequence of pre-defined macro-action descriptions, (2) an online visual-language mapper that builds a spatial and semantic map of the unseen environment using large visual-language models (VLMs), (3) a language indexing-based localizer that grounds each macro-action description to a waypoint location on the map, and (4) a pre-trained DD-PPO-based local controller that predicts the action. Evaluated on an Interbotix LoCoBot WX250 in an unseen lab environment, without any fine-tuning, our framework significantly outperforms the SOTA VLN baseline in the real world.
Vision-and-Language Navigation in Real World using Foundation Models
[ "Chengguang Xu", "Hieu Trung Nguyen", "Christopher Amato", "Lawson L.S. Wong" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=dcMQ6AYURP
@inproceedings{ wang2023dfields, title={D\${\textasciicircum}3\$Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Robotic Manipulation}, author={Yixuan Wang and Zhuoran Li and Mingtong Zhang and Katherine Driggs-Campbell and Jiajun Wu and Li Fei-Fei and Yunzhu Li}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=dcMQ6AYURP} }
Scene representation has been a crucial design choice in robotic manipulation systems. An ideal representation should be 3D, dynamic, and semantic to meet the demands of diverse manipulation tasks. However, previous works often lack all three properties simultaneously. In this work, we introduce D$^3$Fields — dynamic 3D descriptor fields. These fields capture the dynamics of the underlying 3D environment and encode both semantic features and instance masks. Specifically, we project arbitrary 3D points in the workspace onto multi-view 2D visual observations and interpolate features derived from foundational models. The resulting fused descriptor fields allow for flexible goal specifications using 2D images with varied contexts, styles, and instances. To evaluate the effectiveness of these descriptor fields, we apply our representation to a wide range of robotic manipulation tasks in a zero-shot manner. Through extensive evaluation in both real-world scenarios and simulations, we demonstrate that D$^3$Fields are both generalizable and effective for zero-shot robotic manipulation tasks. In quantitative comparisons with state-of-the-art dense descriptors, such as Dense Object Nets and DINO, D$^3$Fields exhibit significantly better generalization abilities and manipulation accuracy.
D^3Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Robotic Manipulation
[ "Yixuan Wang", "Zhuoran Li", "Mingtong Zhang", "Katherine Rose Driggs-Campbell", "Jiajun Wu", "Li Fei-Fei", "Yunzhu Li" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=dXuF8gczZV
@inproceedings{ black2023zeroshot, title={Zero-Shot Robotic Manipulation with Pre-Trained Image-Editing Diffusion Models}, author={Kevin Black and Mitsuhiko Nakamoto and Pranav Atreya and Homer Walke and Chelsea Finn and Aviral Kumar and Sergey Levine}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=dXuF8gczZV} }
If generalist robots are to operate in truly unstructured environments, they need to be able to recognize and reason about novel objects and scenarios. Such objects and scenarios might not be present in the robot's own training data. We propose SuSIE, a method that leverages an image editing diffusion model to act as a high-level planner by proposing intermediate subgoals that a low-level controller attains. Specifically, we fine-tune InstructPix2Pix on robot data such that it outputs a hypothetical future observation given the robot's current observation and a language command. We then use the same robot data to train a low-level goal-conditioned policy to reach a given image observation. We find that when these components are combined, the resulting system exhibits robust generalization capabilities. The high-level planner utilizes its Internet-scale pre-training and visual understanding to guide the low-level goal-conditioned policy, achieving significantly better generalization than conventional language-conditioned policies. We demonstrate that this approach solves real robot control tasks involving novel objects, distractors, and even environments, both in the real world and in simulation. The project website can be found at https://subgoal-image-editing.github.io
Zero-Shot Robotic Manipulation with Pre-Trained Image-Editing Diffusion Models
[ "Kevin Black", "Mitsuhiko Nakamoto", "Pranav Atreya", "Homer Walke", "Chelsea Finn", "Aviral Kumar", "Sergey Levine" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=d5ogyvdl1X
@inproceedings{ xu2023on, title={On the Tool Manipulation Capability of Open-sourced Large Language Models}, author={Qiantong Xu and Fenglu Hong and Bo Li and Changran Hu and Zhengyu Chen and Jian Zhang}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=d5ogyvdl1X} }
Recent studies on software tool manipulation with large language models (LLMs) mostly rely on closed model APIs. The industrial adoption of these models is substantially constrained due to the security and robustness risks in exposing information to closed LLM API services. In this paper, we ask: can we enhance open-source LLMs to be competitive with leading closed LLM APIs in tool manipulation, with a practical amount of human supervision? By analyzing common tool manipulation failures, we first demonstrate that open-source LLMs may require training with usage examples, in-context demonstration and generation style regulation to resolve failures. These insights motivate us to revisit classical methods in the LLM literature, and demonstrate that we can adapt them as model alignment with programmatic data generation, system prompts and in-context demonstration retrievers to enhance open-source LLMs for tool manipulation. To evaluate these techniques, we create ToolBench, a tool manipulation benchmark consisting of diverse software tools for real-world tasks. We demonstrate that our techniques can boost leading open-source LLMs by up to 94% success rate, showing capabilities competitive to OpenAI GPT-4 in 4 out of 8 ToolBench tasks. We show that such enhancement typically requires about one developer day to curate data for each tool, rendering a recipe with a practical amount of human supervision.
On the Tool Manipulation Capability of Open-sourced Large Language Models
[ "Qiantong Xu", "Fenglu Hong", "Bo Li", "Changran Hu", "Zhengyu Chen", "Jian Zhang" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=d0A2pc2kFp
@inproceedings{ bairi2023codeplan, title={CodePlan: Repository-level Coding using {LLM}s and Planning}, author={Ramakrishna Bairi and Atharv Sonwane and Aditya Kanade and Vageesh D C and Arun Iyer and Suresh Parthasarathy and Sriram Rajamani and B. Ashok and Shashank Shet}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=d0A2pc2kFp} }
Software engineering activities such as package migration, fixing error reports from static analysis or testing, and adding type annotations or other specifications to a codebase, involve pervasively editing the entire repository of code. While Large Language Models (LLMs) have shown impressive abilities in localized coding tasks, performing interdependent edits across a repository requires multi-step reasoning and planning abilities. We frame repository-level coding as a planning problem and present a task-agnostic, neuro-symbolic framework called CodePlan. Our framework leverages static analysis techniques to discover dependencies throughout the repository, which are utilised in providing sufficient context to the LLM along with determining the sequence of edits required to solve the repository-level task. We evaluate the effectiveness of CodePlan on two repository-level tasks: package migration (C\#) and temporal code edits (Python) across multiple repositories. Our results demonstrate CodePlan consistently beats baselines across tasks. Further qualitative analysis is performed to highlight how different components of the approach contribute in guiding the LLM towards the correct edits as well as maintaining the consistency of the repository.
CodePlan: Repository-level Coding using LLMs and Planning
[ "Ramakrishna Bairi", "Atharv Sonwane", "Aditya Kanade", "Vageesh D C", "Arun Iyer", "Suresh Parthasarathy", "Sriram Rajamani", "B. Ashok", "Shashank Shet" ]
Workshop/FMDM
2309.12499
[ "" ]
https://huggingface.co/papers/2309.12499
7
73
13
9
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=csvGpPDxgT
@inproceedings{ liu2023exploration, title={Exploration with Principles for Diverse {AI} Supervision}, author={Hao Liu and Matei Zaharia and Pieter Abbeel}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=csvGpPDxgT} }
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI. While this generative AI approach has produced impressive results, it heavily leans on human supervision. Even state-of-the-art AI models like ChatGPT depend on fine-tuning through human demonstrations, demanding extensive human input and domain expertise. This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation. To address this limitation, we propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data. Drawing inspiration from the principles of unsupervised reinforcement learning (RL) pretraining, EAI achieves exploration within the natural language space. We accomplish this by harnessing large language models to assess the novelty of generated content. Our approach employs two key components: an actor that generates novel content and a critic that evaluates the generated content, offering critiques to guide the actor. Empirical evaluations demonstrate that EAI significantly boosts model performance on complex reasoning tasks, addressing the limitations of human-intensive supervision.
Exploration with Principles for Diverse AI Supervision
[ "Hao Liu", "Matei Zaharia", "Pieter Abbeel" ]
Workshop/FMDM
2310.08899
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=bisrdS6We9
@inproceedings{ liu2023reason, title={Reason for Future, Act for Now: A Principled Architecture for Autonomous {LLM} Agents}, author={Zhihan Liu and Hao Hu and Shenao Zhang and Hongyi Guo and Shuqi Ke and Boyi Liu and Zhaoran Wang}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=bisrdS6We9} }
Large language models (LLMs) demonstrate impressive reasoning abilities, but translating reasoning into actions in the real world remains challenging. In particular, it remains unclear how to complete a given task provably within a minimum number of interactions with the external environment, e.g., through an internal mechanism of reasoning. To this end, we propose a principled framework with provable regret guarantees to orchestrate reasoning and acting, which we call "reason for future, act for now" (RAFA). Specifically, we design a prompt template for reasoning that learns from the memory buffer and plans a future trajectory over a long horizon ("reason for future"). At each step, the LLM agent takes the initial action of the planned trajectory ("act for now"), stores the collected feedback in the memory buffer, and reinvokes the reasoning routine to replan the future trajectory from the new state. The key idea is to cast reasoning in LLMs as learning and planning in Bayesian adaptive Markov decision processes (MDPs). Correspondingly, we prompt LLMs to form an updated posterior of the unknown environment from the memory buffer (learning) and generate an optimal trajectory for multiple future steps that maximizes a value function (planning). The learning and planning subroutines are performed in an "in-context" manner to emulate the actor-critic update for MDPs. Our theoretical analysis proves that the novel combination of long-term reasoning and short-term acting achieves a $\sqrt T$ regret. In particular, the regret bound highlights an intriguing interplay between the prior knowledge obtained through pretraining and the uncertainty reduction achieved by reasoning and acting. Our empirical validation shows that it outperforms various existing frameworks and achieves nearly perfect scores on a few benchmarks. By incorporating "classical" MDP techniques, RAFA introduces the first autonomous LLM agent with provable regret guarantees.
Reason for Future, Act for Now: A Principled Architecture for Autonomous LLM Agents
[ "Zhihan Liu", "Hao Hu", "Shenao Zhang", "Hongyi Guo", "Shuqi Ke", "Boyi Liu", "Zhaoran Wang" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
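The "reason for future, act for now" recipe in the RAFA record above is, at its core, a receding-horizon loop: replan a full trajectory from memory at every step but execute only its first action. The sketch below captures just that control flow; `planner`, `env_step`, and the tuple layout are placeholders I introduce for illustration, not the paper's interface.

```python
def rafa_loop(planner, env_step, state, horizon_steps=20):
    """Receding-horizon sketch: replan a full future trajectory from memory at
    every step ("reason for future"), but only execute its first action
    ("act for now")."""
    memory = []
    for _ in range(horizon_steps):
        plan = planner(memory, state)          # full planned trajectory
        action = plan[0]                       # execute only the first action
        state, feedback, done = env_step(action)
        memory.append((state, action, feedback))
        if done:
            break
    return memory

# Toy usage: a 1-D "reach 3" environment and a myopic planner.
def make_toy_env(goal=3):
    env_state = {"x": 0}
    def env_step(action):
        env_state["x"] += action
        done = env_state["x"] >= goal
        feedback = "reached goal" if done else "keep going"
        return env_state["x"], feedback, done
    return env_step

memory = rafa_loop(planner=lambda mem, s: [1, 1, 1],
                   env_step=make_toy_env(),
                   state=0)
print(memory[-1])   # (3, 1, 'reached goal') after three steps
```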
null
https://openreview.net/forum?id=agA4vmhePk
@inproceedings{ sun2023adaplanner, title={AdaPlanner: Adaptive Planning from Feedback with Language Models}, author={Haotian Sun and Yuchen Zhuang and Lingkai Kong and Bo Dai and Chao Zhang}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=agA4vmhePk} }
Large language models (LLMs) have recently demonstrated the potential to act as autonomous agents for sequential decision-making tasks. However, most existing methods either take actions greedily without planning or rely on static plans that are not adaptable to environmental feedback. Consequently, the sequential decision-making performance of LLM agents degenerates as problem complexity and plan horizon increase. We propose a closed-loop approach, AdaPlanner, which allows the LLM agent to refine its self-generated plan adaptively in response to environmental feedback. In AdaPlanner, the LLM agent adaptively refines its plan from feedback with both in-plan and out-of-plan refinement strategies. To mitigate hallucination, we develop a code-style LLM prompt structure that facilitates plan generation across a variety of tasks, environments, and agent capabilities. Furthermore, we propose a skill discovery mechanism that leverages successful plans as few-shot exemplars, enabling the agent to plan and refine with fewer task demonstrations. Our experiments in the ALFWorld and MiniWoB++ environments demonstrate that AdaPlanner outperforms state-of-the-art baselines by 3.73% and 4.11% while utilizing 2x and 600x fewer samples, respectively.
AdaPlanner: Adaptive Planning from Feedback with Language Models
[ "Haotian Sun", "Yuchen Zhuang", "Lingkai Kong", "Bo Dai", "Chao Zhang" ]
Workshop/FMDM
2305.16653
[ "https://github.com/haotiansun14/adaplanner" ]
https://huggingface.co/papers/2305.16653
4
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=acgxF5AVLq
@inproceedings{ furuta2023language, title={Language Model Agents Suffer from Compositional Decision Making}, author={Hiroki Furuta and Yutaka Matsuo and Aleksandra Faust and Izzeddin Gur}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=acgxF5AVLq} }
Language model agents (LMA) recently emerged as a promising paradigm for multi-step decision-making tasks, often outperforming humans and other reinforcement learning agents. Despite the promise, their performance on real-world applications that often involve combinations of tasks is still underexplored. In this work, we introduce a new benchmark, called CompWoB -- 50 new compositional web automation tasks reflecting more realistic assumptions. We show that while existing prompted LMAs (gpt-3.5-turbo or gpt-4) achieve 94.0% average success rate on base tasks, their performance degrades to 24.9% success rate on compositional tasks. On the other hand, transferred LMAs (finetuned only on base tasks) show a smaller generalization gap, dropping from 85.4% to 54.8%. By balancing data distribution across tasks, we train a new model, HTML-T5++, that surpasses human-level performance (95.2%) on MiniWoB, and achieves the best zero-shot performance on CompWoB (61.0%). While these highlight the promise of small-scale finetuned and transferred models for compositional generalization, their performance further degrades when the order of instruction composition changes. In contrast to the recent remarkable success of LMAs, our benchmark and detailed analysis emphasize the necessity of building LMAs that are robust and generalizable to task compositionality for real-world deployment.
Language Model Agents Suffer from Compositional Generalization in Web Automation
[ "Hiroki Furuta", "Yutaka Matsuo", "Aleksandra Faust", "Izzeddin Gur" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=YVZ03XsMfg
@inproceedings{ remy2023semanticallydriven, title={Semantically-Driven Object Search Using Partially Observed 3D Scene Graphs}, author={Isaac Remy and Abhishek Gupta and Karen Leung}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=YVZ03XsMfg} }
Object search is a fundamental task for service robots aiding humans in their daily lives. For example, a robot must locate a cup before pouring coffee, or locate a sponge before cleaning up a spill. As such, robots performing object search across many different and potentially unseen environments must reason about uncertainty in both environment layout and object location. In this work, we frame object search as a Partially Observable Markov Decision Process (POMDP), and propose a generalizable planner that combines the structured representations afforded by 3D scene graphs with the semantic knowledge of language models. Specifically, we introduce (i) 3DSG-POMDPs, which are POMDPs defined over 3D scene graphs that reduce the dimensionality of object search, and (ii) PROPHE-C, a sampling-based planner for solving 3DSG-POMDPS. We demonstrate the efficacy of PROPHE-C in a partially observable household environment, revealing that additional online inference leads to more efficient and exploratory search plans, compared to solely relying on language models for decision-making.
Semantically-Driven Object Search Using Partially Observed 3D Scene Graphs
[ "Isaac Remy", "Abhishek Gupta", "Karen Leung" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=YSYbTPbCPD
@inproceedings{ kim2023prospector, title={Prospector: Improving {LLM} Agents with Self-Asking and Trajectory Ranking}, author={Byoungjip Kim and Youngsoo Jang and Lajanugen Logeswaran and Geon-Hyeong Kim and Yu Jin Kim and Honglak Lee and Moontae Lee}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=YSYbTPbCPD} }
Large language models (LLMs) have shown the ability to solve complex decision-making tasks beyond natural language processing tasks. Current LLM agents such as ReAct can solve interactive decision-making tasks by imitating the few-shot demonstrations given in the prompt. LLM agents based on few-shot in-context learning (ICL) achieve surprisingly high performance without training. Despite the simplicity and generalizability, ICL-based approaches do not optimize trajectories based on the reward from an environment. In this paper, we introduce Prospector, an LLM agent that consists of two complementary LLMs: the LLM Actor and the LLM Critic. To elicit better actions from the LLM Actor, we provide AskAct prompting that interleaves additional self-asking steps in the few-shot demonstrations. Furthermore, to take advantage of the stochasticity of LLMs, we provide Trajectory Ranking, in which the LLM Actor generates diverse (creative) trajectories at high temperature and the LLM Critic selects the most rewarding trajectory by predicting the expected total reward of each trajectory. On representative decision-making benchmark environments such as ALFWorld and WebShop, we empirically demonstrate that Prospector can considerably increase the success rate of given tasks, while outperforming recent advancements such as ReAct and Reflexion.
Prospector: Improving LLM Agents with Self-Asking and Trajectory Ranking
[ "Byoungjip Kim", "Youngsoo Jang", "Lajanugen Logeswaran", "Geon-Hyeong Kim", "Yu Jin Kim", "Honglak Lee", "Moontae Lee" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
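The Trajectory Ranking step described in the Prospector record above can be summarized as best-of-k sampling scored by a critic. The sketch below shows only that selection loop; `actor`, `critic`, and `env_reset` are placeholder callables I introduce, not the paper's actual LLM interfaces.

```python
def trajectory_ranking(actor, critic, env_reset, k=8, temperature=1.0):
    """Sample k trajectories from a stochastic actor and keep the one whose
    predicted total reward, according to the critic, is highest."""
    best_traj, best_score = None, float("-inf")
    for _ in range(k):
        state = env_reset()
        trajectory = actor(state, temperature=temperature)   # full rollout
        score = critic(trajectory)                            # predicted return
        if score > best_score:
            best_traj, best_score = trajectory, score
    return best_traj
```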
null
https://openreview.net/forum?id=XT1phTGH76
@inproceedings{ zheng2023textttpremiertaco, title={\${\textbackslash}texttt\{{PREMIER}-{TACO}\}\$ is a Few-Shot Policy Learner: Pretraining Multitask Representation via Temporal Action-Driven Contrastive Loss}, author={Ruijie Zheng and Yongyuan Liang and Xiyao Wang and Shuang Ma and Hal Daum{\'e} III and Huazhe Xu and John Langford and Praveen Palanisamy and Kalyan Basu and Furong Huang}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=XT1phTGH76} }
We introduce $\texttt{Premier-TACO}$, a novel multitask feature representation learning methodology aiming to enhance the efficiency of few-shot policy learning in sequential decision-making tasks. $\texttt{Premier-TACO}$ pretrains a general feature representation using a small subset of relevant multitask offline datasets, capturing essential environmental dynamics. This representation can then be fine-tuned to specific tasks with few expert demonstrations. Building upon the recent temporal action contrastive learning (TACO) objective, which obtains state-of-the-art performance in visual control tasks, $\texttt{Premier-TACO}$ additionally employs a simple yet effective negative example sampling strategy. This key modification ensures computational efficiency and scalability for large-scale multitask offline pretraining. Experimental results from both the DeepMind Control Suite and MetaWorld domains underscore the effectiveness of $\texttt{Premier-TACO}$ for pretraining visual representations, facilitating efficient few-shot imitation learning of unseen tasks. On the DeepMind Control Suite, $\texttt{Premier-TACO}$ achieves an average improvement of 101% in comparison to a carefully implemented Learn-from-scratch baseline, and a 24% improvement compared with the most effective baseline pretraining method. Similarly, on MetaWorld, $\texttt{Premier-TACO}$ obtains an average advancement of 74% against Learn-from-scratch and a 40% increase in comparison to the best baseline pretraining method.
Premier-TACO is a Few-Shot Policy Learner: Pretraining Multitask Representation via Temporal Action-Driven Contrastive Loss
[ "Ruijie Zheng", "Yongyuan Liang", "Xiyao Wang", "Shuang Ma", "Hal Daumé III", "Huazhe Xu", "John Langford", "Praveen Palanisamy", "Kalyan Basu", "Furong Huang" ]
Workshop/FMDM
[ "https://github.com/premiertaco/premier-taco" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VIxIiabqWj
@inproceedings{ dalal2023planseqlearn, title={Plan-Seq-Learn: Language Model Guided {RL} for Solving Long Horizon Robotics Tasks}, author={Murtaza Dalal and Tarun Chiruvolu and Devendra Singh Chaplot and Ruslan Salakhutdinov}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=VIxIiabqWj} }
Large Language Models (LLMs) have been shown to be capable of performing high-level planning for long-horizon robotics tasks, yet existing methods require access to a pre-defined skill library (_e.g._ picking, placing, pulling, pushing, navigating). However, LLM planning does not address how to design or learn those behaviors, which remains challenging particularly in long-horizon settings. Furthermore, for many tasks of interest, the robot needs to be able to adjust its behavior in a fine-grained manner, requiring the agent to be capable of modifying _low-level_ control actions. Can we instead use the internet-scale knowledge from LLMs for high-level policies, guiding reinforcement learning (RL) policies to efficiently solve robotic control tasks online without requiring a pre-determined set of skills? In this paper, we propose *Plan-Seq-Learn* (PSL): a modular approach that uses motion planning to bridge the gap between abstract language and learned low-level control for solving long-horizon robotics tasks from scratch. We demonstrate that PSL is capable of solving 25+ challenging single and multi-stage robotics tasks on four benchmarks at success rates of over 85\% from raw visual input, out-performing language-based, classical, and end-to-end approaches. Video results and code at https://mihdalal.github.io/planseqlearn/
Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks
[ "Murtaza Dalal", "Tarun Chiruvolu", "Devendra Singh Chaplot", "Ruslan Salakhutdinov" ]
Workshop/FMDM
2405.01534
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UtcuS52dwJ
@inproceedings{ park2023metra, title={{METRA}: Scalable Unsupervised {RL} with Metric-Aware Abstraction}, author={Seohong Park and Oleh Rybkin and Sergey Levine}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=UtcuS52dwJ} }
Unsupervised pre-training strategies have proven to be highly effective in natural language processing and computer vision. Likewise, unsupervised reinforcement learning (RL) holds the promise of discovering a variety of potentially useful behaviors that can accelerate the learning of a wide array of downstream tasks. Previous unsupervised RL approaches have mainly focused on pure exploration and mutual information skill learning. However, despite the previous attempts, making unsupervised RL truly scalable still remains a major open challenge: pure exploration approaches might struggle in complex environments with large state spaces, where covering every possible transition is infeasible, and mutual information skill learning approaches might completely fail to explore the environment due to the lack of incentives. To make unsupervised RL scalable to complex, high-dimensional environments, we propose a novel unsupervised RL objective, which we call **Metric-Aware Abstraction (METRA)**. Our main idea is, instead of directly covering the state space, to only cover a compact latent space $\mathcal{Z}$ that is *metrically* connected to the state space $\mathcal{S}$ by temporal distances. By learning to move in every direction in the latent space, METRA obtains a tractable set of diverse behaviors that approximately cover the state space, being scalable to high-dimensional environments. Through our experiments in five locomotion and manipulation environments, we demonstrate that METRA can discover a variety of useful behaviors even in complex, pixel-based environments, being the *first* unsupervised RL method that discovers diverse locomotion behaviors in pixel-based Quadruped and Humanoid. Our code and video are available at https://sites.google.com/view/metra0
METRA: Scalable Unsupervised RL with Metric-Aware Abstraction
[ "Seohong Park", "Oleh Rybkin", "Sergey Levine" ]
Workshop/FMDM
2310.08887
[ "https://github.com/seohongpark/metra" ]
https://huggingface.co/papers/2310.08887
1
0
0
3
[]
[]
[]
[]
[]
[]
1
poster
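Read literally, the metric-aware abstraction described in the METRA record above amounts to the constrained objective below: maximize latent displacement along a skill vector $z$ while adjacent states may move at most unit distance in the latent space (the temporal-distance constraint). The notation is mine and should be checked against the paper.

```latex
\max_{\pi,\,\phi}\;\; \mathbb{E}\Bigl[\bigl(\phi(s_{t+1})-\phi(s_t)\bigr)^{\top} z\Bigr]
\quad \text{s.t.} \quad
\bigl\lVert \phi(s)-\phi(s')\bigr\rVert_2 \,\le\, 1
\;\;\text{for all adjacent states } (s, s').
```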
null
https://openreview.net/forum?id=UG384tFfKY
@inproceedings{ kim2023decision, title={Decision ConvFormer: Local Filtering in MetaFormer is Sufficient for Decision Making}, author={Jeonghye Kim and Suyoung Lee and Woojun Kim and Youngchul Sung}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=UG384tFfKY} }
The recent success of Transformer in natural language processing has sparked its use in various domains. In offline reinforcement learning (RL), Decision Transformer (DT) is emerging as a promising model based on Transformer. However, we discovered that the attention module of DT is not appropriate to capture the inherent local dependence pattern in trajectories of RL modeled as a Markov decision process. To overcome the limitations of DT, we propose a novel action sequence predictor, named Decision ConvFormer (DC), based on the architecture of MetaFormer, which is a general structure to process multiple entities in parallel and understand the interrelationship among the multiple entities. DC employs local convolution filtering as the token mixer and can effectively capture the inherent local associations of the RL dataset. In extensive experiments, DC achieved state-of-the-art performance across various standard RL benchmarks while requiring fewer resources. Furthermore, we show that DC better understands the underlying meaning in data and exhibits enhanced generalization capability.
Decision ConvFormer: Local Filtering in MetaFormer is Sufficient for Decision Making
[ "Jeonghye Kim", "Suyoung Lee", "Woojun Kim", "Youngchul Sung" ]
Workshop/FMDM
2310.03022
[ "" ]
https://huggingface.co/papers/2310.03022
0
0
0
4
[]
[]
[]
[]
[]
[]
1
poster
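The local filtering idea in the Decision ConvFormer record above replaces attention with a causal, per-channel convolution over a short window of past tokens. The numpy sketch below illustrates that token mixer in isolation; the window size, shapes, and names are my own assumptions, and the actual model also has embeddings, residual connections, and MLP blocks.

```python
import numpy as np

def causal_depthwise_conv_mixer(x, kernel):
    """x: (seq_len, dim) token embeddings; kernel: (window, dim) per-channel
    filter. Each output token mixes only itself and its `window - 1`
    predecessors, matching the local, causal dependence of MDP trajectories."""
    window, dim = kernel.shape
    seq_len = x.shape[0]
    padded = np.concatenate([np.zeros((window - 1, dim)), x], axis=0)
    out = np.zeros_like(x)
    for t in range(seq_len):
        out[t] = (padded[t:t + window] * kernel).sum(axis=0)
    return out

x = np.arange(12, dtype=float).reshape(6, 2)      # 6 tokens, dim 2
kernel = np.full((3, 2), 1 / 3)                   # simple 3-step moving average
print(causal_depthwise_conv_mixer(x, kernel)[0])  # first token sees only itself
```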
null
https://openreview.net/forum?id=TXFqUx8aA8
@inproceedings{ coste2023reward, title={Reward Model Ensembles Help Mitigate Overoptimization}, author={Thomas Coste and Usman Anwar and Robert Kirk and David Krueger}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=TXFqUx8aA8} }
Reinforcement learning from human feedback (RLHF) is a standard approach for fine-tuning large language models to follow instructions. As part of this process, learned reward models are used to approximately model human preferences. However, as imperfect representations of the "true" reward, these learned reward models are susceptible to overoptimization. Gao et al. studied this phenomenon in a synthetic human feedback setup with a significantly larger "gold" reward model acting as the true reward (instead of humans) and showed that overoptimization remains a persistent problem regardless of the size of the proxy reward model and training data used. Using a similar setup, we conduct a systematic study to evaluate the efficacy of using ensemble-based conservative optimization objectives, specifically worst-case optimization (WCO) and uncertainty-weighted optimization (UWO), for mitigating reward model overoptimization when using two optimization methods: (a) best-of-n sampling (BoN) (b) proximal policy optimization (PPO). We additionally extend the setup of Gao et al. to include 25% label noise to better mirror real-world conditions. Both with and without label noise, we find that conservative optimization practically eliminates overoptimization and improves performance by up to 70% for BoN sampling. For PPO, ensemble-based conservative optimization always reduces overoptimization and outperforms single reward model optimization. Moreover, combining it with a small KL penalty successfully prevents overoptimization at no performance cost. Overall, our results demonstrate that ensemble-based conservative optimization can effectively counter overoptimization.
Reward Model Ensembles Help Mitigate Overoptimization
[ "Thomas Coste", "Usman Anwar", "Robert Kirk", "David Krueger" ]
Workshop/FMDM
2310.02743
[ "https://github.com/tlc4418/llm_optimization" ]
https://huggingface.co/papers/2310.02743
2
1
0
4
[ "tlc4418/pythia_1.4b_sft_policy" ]
[ "tlc4418/1.4b-policy_preference_data_gold_labelled", "tlc4418/gold_labelled_gens", "SJTUwanyi/rm_pref" ]
[]
[ "tlc4418/pythia_1.4b_sft_policy" ]
[ "tlc4418/1.4b-policy_preference_data_gold_labelled", "tlc4418/gold_labelled_gens", "SJTUwanyi/rm_pref" ]
[]
1
poster
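My reading of the two conservative objectives named in the record above: worst-case optimization (WCO) takes the minimum reward across the ensemble, and uncertainty-weighted optimization (UWO) penalizes the ensemble mean by its intra-ensemble variance. The sketch below encodes that reading; the `coef` weight and the function name are assumptions, and the exact formulations should be checked against the paper.

```python
import numpy as np

def conservative_reward(ensemble_rewards, method="uwo", coef=1.0):
    """ensemble_rewards: (n_models, batch) proxy rewards for the same samples.
    'wco' -> pessimistic minimum over the ensemble.
    'uwo' -> ensemble mean penalized by intra-ensemble variance."""
    r = np.asarray(ensemble_rewards, dtype=float)
    if method == "wco":
        return r.min(axis=0)
    if method == "uwo":
        return r.mean(axis=0) - coef * r.var(axis=0)
    raise ValueError(f"unknown method: {method}")
```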
null
https://openreview.net/forum?id=SoqrGVLeIY
@inproceedings{ zhang2023universal, title={Universal Visual Decomposer: Long-Horizon Manipulation Made Easy}, author={Zichen Zhang and Yunshuang Li and Osbert Bastani and Abhishek Gupta and Dinesh Jayaraman and Yecheng Jason Ma and Luca Weihs}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=SoqrGVLeIY} }
Real-world robotic tasks stretch over extended horizons and encompass multiple stages. Learning long-horizon manipulation tasks, however, is a long-standing challenge, and demands decomposing the overarching task into several manageable subtasks to facilitate policy learning and generalization to unseen tasks. Prior task decomposition methods require task-specific knowledge, are computationally intensive, and cannot readily be applied to new tasks. To address these shortcomings, we propose Universal Visual Decomposer (UVD), an off-the-shelf task decomposition method for visual long-horizon manipulation using pre-trained visual representations designed for robotic control. At a high level, UVD discovers subgoals by detecting phase shifts in the embedding space of the pre-trained representation. Operating purely on visual demonstrations without auxiliary information, UVD can effectively extract visual subgoals embedded in the videos, while incurring zero additional training cost on top of standard visuomotor policy training. Goal-conditioned policies learned with UVD-discovered subgoals exhibit significantly improved compositional generalization at test time to unseen tasks. Furthermore, UVD-discovered subgoals can be used to construct goal-based reward shaping that jump-starts temporally extended exploration for reinforcement learning. We extensively evaluate UVD on both simulation and real-world tasks, and in all cases, UVD substantially outperforms baselines across imitation and reinforcement learning settings on in-domain and out-of-domain task sequences alike, validating the clear advantage of automated visual task decomposition within the simple, compact UVD framework.
Universal Visual Decomposer: Long-Horizon Manipulation Made Easy
[ "Zichen Zhang", "Yunshuang Li", "Osbert Bastani", "Abhishek Gupta", "Dinesh Jayaraman", "Yecheng Jason Ma", "Luca Weihs" ]
Workshop/FMDM
2310.08581
[ "" ]
https://huggingface.co/papers/2310.08581
2
1
0
7
[]
[ "zcczhang/UVD" ]
[]
[]
[ "zcczhang/UVD" ]
[]
1
poster
null
https://openreview.net/forum?id=SHNjk4h0jn
@inproceedings{ pirotta2023fast, title={Fast Imitation via Behavior Foundation Models}, author={Matteo Pirotta and Andrea Tirinzoni and Ahmed Touati and Alessandro Lazaric and Yann Ollivier}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=SHNjk4h0jn} }
Imitation learning (IL) aims at producing agents that can imitate any behavior given a few expert demonstrations. Yet existing approaches require many demonstrations and/or running (online or offline) reinforcement learning (RL) algorithms for each new imitation task. Here we show that recent RL foundation models based on successor measures can imitate any expert behavior almost instantly with just a few demonstrations and no need for RL or fine-tuning, while accommodating several IL principles (behavioral cloning, feature matching, reward-based, and goal-based reductions). In our experiments, imitation via RL foundation models matches, and often surpasses, the performance of SOTA offline IL algorithms, and produces imitation policies from new demonstrations within seconds instead of hours.
Fast Imitation via Behavior Foundation Models
[ "Matteo Pirotta", "Andrea Tirinzoni", "Ahmed Touati", "Alessandro Lazaric", "Yann Ollivier" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=RjfWRfKWqt
@inproceedings{ xiao2023od, title={O3D: Offline Data-driven Discovery and Distillation for Sequential Decision-Making with Large Language models}, author={Yuchen Xiao and Yanchao Sun and Mengda Xu and Udari Madhushani and Jared Vann and Deepeka Garg and Sumitra Ganesh}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=RjfWRfKWqt} }
Recent advancements in large language models (LLMs) have exhibited promising performance in solving sequential decision-making problems. By imitating few-shot examples provided in the prompts (i.e., in-context learning), an LLM agent can interact with an external environment and complete given tasks without additional training. However, such few-shot examples are often insufficient to generate high-quality solutions for complex and long-horizon tasks, while the limited context length cannot consume larger-scale demonstrations. To this end, we propose an offline learning framework that utilizes offline data at scale (e.g., logs of human interactions) to facilitate the in-context learning performance of LLM agents. We formally define LLM-powered policies with both text-based approaches and code-based approaches. We then introduce an Offline Data-driven Discovery and Distillation (O3D) framework to improve LLM-powered policies without finetuning. O3D automatically discovers reusable skills and distills generalizable knowledge across multiple tasks based on offline interaction data, advancing the capability of solving downstream tasks. Empirical results under two interactive decision-making benchmarks (ALFWorld and WebShop) demonstrate that O3D can notably enhance the decision-making capabilities of LLMs through the offline discovery and distillation process, and consistently outperform baselines across various LLMs with both text-based and code-based policies.
O3D: Offline Data-driven Discovery and Distillation for Sequential Decision-Making with Large Language models
[ "Yuchen Xiao", "Yanchao Sun", "Mengda Xu", "Udari Madhushani", "Jared Vann", "Deepeka Garg", "Sumitra Ganesh" ]
Workshop/FMDM
2310.14403
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Rfc9zK6PNO
@inproceedings{ bhateja2023robotic, title={Robotic Offline {RL} from Internet Videos via Value-Function Pre-Training}, author={Chethan Bhateja and Derek Guo and Dibya Ghosh and Anikait Singh and Manan Tomar and Quan Vuong and Yevgen Chebotar and Sergey Levine and Aviral Kumar}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=Rfc9zK6PNO} }
Pre-training on Internet data has proven to be a key ingredient for broad generalization in many modern ML systems. What would it take to enable such capabilities in robotic reinforcement learning (RL)? Offline RL methods, which learn from datasets of robot experience, offer one way to leverage prior data into the robotic learning pipeline. However, these methods have a "type mismatch" with video data (such as Ego4D), the largest prior datasets available for robotics, since video offers observation-only experience without the action or reward annotations needed for RL methods. In this paper, we develop a system for leveraging large-scale human video datasets in robotic offline RL, based entirely on learning value functions via temporal-difference learning. We show that value learning on video datasets learns representations that are more conducive to downstream robotic offline RL than other approaches for learning from video data. Our system, called V-PTR, combines the benefits of pre-training on video data with robotic offline RL approaches that train on diverse robot data, resulting in value functions and policies for manipulation tasks that perform better, act robustly, and generalize broadly. On several manipulation tasks on a real WidowX robot, our framework produces policies that greatly improve over prior methods. Our video and additional details can be found at https://dibyaghosh.com/vptr/index.html.
Robotic Offline RL from Internet Videos via Value-Function Pre-Training
[ "Chethan Anand Bhateja", "Derek Guo", "Dibya Ghosh", "Anikait Singh", "Manan Tomar", "Quan Vuong", "Yevgen Chebotar", "Sergey Levine", "Aviral Kumar" ]
Workshop/FMDM
2309.13041
[ "" ]
https://huggingface.co/papers/2309.13041
6
8
0
9
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=R1I94rrgDz
@inproceedings{ sermanet2023robovqa, title={Robo{VQA}: Multimodal Long-Horizon Reasoning for Robotics}, author={Pierre Sermanet and Tianli Ding and Jeffrey Zhao and Fei Xia and Debidatta Dwibedi and Keerthana Gopalakrishnan and Christine Chan and Gabriel Dulac-Arnold and sharath maddineni and Nikhil Joshi and Pete Florence and Wei Han and Robert Baruch and Yao Lu and Suvir Mirchandani and Peng Xu and Pannag Sanketi and Karol Hausman and Izhak Shafran and brian ichter and Yuan Cao}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=R1I94rrgDz} }
We present a scalable, bottom-up and intrinsically diverse data collection scheme that can be used for high-level reasoning with long and medium horizons and that has 2.2x higher throughput compared to traditional narrow top-down step-by-step collection. We collect realistic data by performing any user requests within the entirety of 3 office buildings and using multiple embodiments (robot, human, human with grasping tool). With this data, we show that models trained on all embodiments perform better than ones trained on the robot data only, even when evaluated solely on robot episodes. We explore the economics of collection costs and find that for a fixed budget it is beneficial to take advantage of the cheaper human collection along with robot collection. We release a large and highly diverse (29,520 unique instructions) dataset dubbed \algname{} containing 829,502 (video, text) pairs for robotics-focused visual question answering. We also demonstrate how evaluating real robot experiments with an intervention mechanism enables performing tasks to completion, making it deployable with human oversight even if imperfect while also providing a single performance metric. We demonstrate a single video-conditioned model named \modelname{} trained on our dataset that is capable of performing a variety of grounded high-level reasoning tasks in broad realistic settings with a cognitive intervention rate 46% lower than the zero-shot state-of-the-art visual language model (VLM) baseline and is able to guide real robots through long-horizon tasks. The performance gap with zero-shot state-of-the-art models indicates that a lot of grounded data remains to be collected for real-world deployment, emphasizing the critical need for scalable data collection approaches. Finally, we show that video VLMs significantly outperform single-image VLMs with an average error rate reduction of 19% across all VQA tasks. Thanks to video conditioning and dataset diversity, the model can be used as general video value functions (e.g. success and affordance) in situations where actions need to be recognized rather than states, expanding capabilities and environment understanding for robots. Data and videos are available at anonymous-robovqa.github.io
RoboVQA: Multimodal Long-Horizon Reasoning for Robotics
[ "Pierre Sermanet", "Tianli Ding", "Jeffrey Zhao", "Fei Xia", "Debidatta Dwibedi", "Keerthana Gopalakrishnan", "Christine Chan", "Gabriel Dulac-Arnold", "sharath maddineni", "Nikhil Joshi", "Pete Florence", "Wei Han", "Robert Baruch", "Yao Lu", "Suvir Mirchandani", "Peng Xu", "Pannag Sanketi", "Karol Hausman", "Izhak Shafran", "brian ichter", "Yuan Cao" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QW4eGh5GT3
@inproceedings{ nie2023importance, title={Importance of Directional Feedback for {LLM}-based Optimizers}, author={Allen Nie and Ching-An Cheng and Andrey Kolobov and Adith Swaminathan}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=QW4eGh5GT3} }
We study the potential of using large language models (LLMs) as an interactive optimizer for solving maximization problems on a text space using natural language and numerical feedback. Inspired by the classical optimization literature, we classify the natural language feedback into directional and non-directional, where the former is a generalization of the first-order feedback to the natural language space. We find that LLMs are especially capable of optimization when they are provided with directional feedback. Based on this insight, we design a new LLM-based optimizer that synthesizes directional feedback from the historical optimization trace to achieve reliable improvement over iterations. Empirically, we show our LLM-based optimizer is more stable and efficient in solving optimization problems, from maximizing mathematical functions to optimizing prompts for writing poems, compared with existing techniques.
Importance of Directional Feedback for LLM-based Optimizers
[ "Allen Nie", "Ching-An Cheng", "Andrey Kolobov", "Adith Swaminathan" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QO2f1XqgiR
@inproceedings{ lifshitz2023steve, title={{STEVE}-1: A Generative Model for Text-to-Behavior in Minecraft}, author={Shalev Lifshitz and Keiran Paster and Harris Chan and Jimmy Ba and Sheila McIlraith}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=QO2f1XqgiR} }
Constructing AI models that respond to text instructions is challenging, especially for sequential decision-making tasks. This work introduces an instruction-tuned Video Pretraining (VPT) model for Minecraft called STEVE-1, demonstrating that the unCLIP approach, utilized in DALL•E 2, is also effective for creating instruction-following sequential decision-making agents. STEVE-1 is trained in two steps: adapting the pretrained VPT model to follow commands in MineCLIP's latent space, then training a prior to predict latent codes from text. This allows us to finetune VPT through self-supervised behavioral cloning and hindsight relabeling, bypassing the need for costly human text annotations. By leveraging pretrained models like VPT and MineCLIP and employing best practices from text-conditioned image generation, STEVE-1 costs just $60 to train and can follow short-horizon open-ended text and visual instructions in Minecraft. STEVE-1 sets a new bar for open-ended instruction following in Minecraft with low-level controls (mouse and keyboard) and raw pixel inputs, far outperforming previous baselines and robustly completing 12 of 13 tasks in our early-game evaluation suite. We provide experimental evidence highlighting key factors for downstream performance, including pretraining, classifier-free guidance, and data scaling. All resources, including our model weights, training scripts, and evaluation tools are made available for further research.
STEVE-1: A Generative Model for Text-to-Behavior in Minecraft
[ "Shalev Lifshitz", "Keiran Paster", "Harris Chan", "Jimmy Ba", "Sheila A. McIlraith" ]
Workshop/FMDM
2306.00937
[ "" ]
https://huggingface.co/papers/2306.00937
3
9
1
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=Pvjk9lxLJK
@inproceedings{ mao2023gptdriver, title={{GPT}-Driver: Learning to Drive with {GPT}}, author={Jiageng Mao and Yuxi Qian and Hang Zhao and Yue Wang}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=Pvjk9lxLJK} }
We present a simple yet effective approach that can transform the OpenAI GPT-3.5 model into a reliable motion planner for autonomous vehicles. Motion planning is a core challenge in autonomous driving, aiming to plan a driving trajectory that is safe and comfortable. Existing motion planners predominantly leverage heuristic methods to forecast driving trajectories, yet these approaches demonstrate insufficient generalization capabilities in the face of novel and unseen driving scenarios. In this paper, we propose a novel approach to motion planning that capitalizes on the strong reasoning capabilities and generalization potential inherent to Large Language Models (LLMs). The fundamental insight of our approach is the reformulation of motion planning as a language modeling problem, a perspective not previously explored. Specifically, we represent the planner inputs and outputs as language tokens, and leverage the LLM to generate driving trajectories through a language description of coordinate positions. Furthermore, we propose a novel prompting-reasoning-finetuning strategy to stimulate the numerical reasoning potential of the LLM. With this strategy, the LLM can describe highly precise trajectory coordinates and also its internal decision-making process in natural language. We evaluate our approach on the large-scale nuScenes dataset, and extensive experiments substantiate the effectiveness, generalization ability, and interpretability of our GPT-based motion planner.
GPT-Driver: Learning to Drive with GPT
[ "Jiageng Mao", "Yuxi Qian", "Junjie Ye", "Hang Zhao", "Yue Wang" ]
Workshop/FMDM
2310.01415
[ "https://github.com/pointscoder/gpt-driver" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PMtZjDYB68
@inproceedings{ stechly2023gpt, title={{GPT}-4 Doesn{\textquoteright}t Know It{\textquoteright}s Wrong: An Analysis of Iterative Prompting for Reasoning Problems}, author={Kaya Stechly and Matthew Marquez and Subbarao Kambhampati}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=PMtZjDYB68} }
There has been considerable divergence of opinion on the reasoning abilities of Large Language Models (LLMs). While the initial optimism that reasoning might emerge automatically with scale has been tempered thanks to a slew of counterexamples--ranging from multiplication to simple planning--there is still the widespread belief that LLMs can self-critique and improve their own solutions in an iterative fashion. This belief seemingly rests on the assumption that verification of correctness should be easier than generation--a rather classical argument from computational complexity, that should be irrelevant to LLMs to the extent what they are doing is approximate retrieval. In this paper, we set out to systematically investigate the effectiveness of iterative prompting of LLMs in the context of Graph Coloring, a canonical NP-complete reasoning problem that is related to propositional satisfiability as well as practical problems like scheduling and allocation. We present a principled empirical study of the performance of GPT4 in solving graph coloring instances or verifying the correctness of candidate colorings--both in direct and iterative modes. In iterative modes, we experiment both with the model critiquing its own answers and an external correct reasoner verifying proposed solutions. In both cases, we analyze whether the content of the criticisms actually affects bottom line performance. The study seems to indicate that (i) LLMs are bad at solving graph coloring instances, (ii) they are no better at verifying a solution--and thus are not effective in iterative modes with LLMs critiquing LLM-generated solutions, and (iii) the correctness and content of the criticisms--whether by LLMs or external solvers--seem largely irrelevant to the performance of iterative prompting. We show that the observed effectiveness of LLMs in iterative settings is largely due to the correct solution being fortuitously present in the top-k completions of the prompt (and being recognized as such by an external verifier). Our results thus call into question claims about the self-critiquing capabilities of state-of-the-art LLMs.
GPT-4 Doesn’t Know It’s Wrong: An Analysis of Iterative Prompting for Reasoning Problems
[ "Kaya Stechly", "Matthew Marquez", "Subbarao Kambhampati" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PJfc4x2jXY
@inproceedings{ feng2023alphazerolike, title={Alphazero-like Tree-Search can Guide Large Language Model Decoding and Training}, author={Xidong Feng and Ziyu Wan and Muning Wen and Ying Wen and Weinan Zhang and Jun Wang}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=PJfc4x2jXY} }
Large language models (LLMs) typically employ sampling or beam search, accompanied by prompts such as Chain-of-Thought (CoT), to boost reasoning and decoding ability. Recent work like Tree-of-Thought (ToT) and Reasoning via Planning (RAP) aim to augment the reasoning capabilities of LLMs by utilizing tree-search algorithms to guide multi-step reasoning. These methods mainly focus on LLMs' reasoning ability during inference and heavily rely on human-designed prompts to activate LLM as a value function, thus lacking general applicability and scalability. To address these limitations, we present an AlphaZero-like tree-search learning framework for LLMs (termed TS-LLM), systematically showing how tree-search with a learned value function can guide LLMs' decoding ability. TS-LLM distinguishes itself in two key ways: (1) Leveraging a learned value function, our approach can be generally applied to different tasks beyond reasoning (such as RLHF alignment), and LLMs of any size, without prompting advanced, large-scale models. (2) It can guide LLM's decoding during both inference and training. Empirical evaluations across reasoning, planning, and RLHF alignment tasks validate the effectiveness of TS-LLM, even on trees with a depth of 64.
Alphazero-like Tree-Search can Guide Large Language Model Decoding and Training
[ "Xidong Feng", "Ziyu Wan", "Muning Wen", "Ying Wen", "Weinan Zhang", "Jun Wang" ]
Workshop/FMDM
2309.17179
[ "https://github.com/waterhorse1/llm_tree_search" ]
https://huggingface.co/papers/2309.17179
1
2
0
6
[ "OhCherryFire/llama2-7b-gsm8k-policy-hf", "OhCherryFire/llama2-7b-game24-sft-ep3-ct2", "OhCherryFire/llama2-7b-prontoqa-sft-ep1-ct2", "OhCherryFire/llama2-7b-game24-value", "OhCherryFire/llama2-7b-prontoqa-value", "OhCherryFire/llama2-7b-gsm8k-value", "OhCherryFire/llama2-7b-prontoqa-policy-hf", "OhCherryFire/llama2-7b-game24-policy-hf" ]
[]
[]
[ "OhCherryFire/llama2-7b-gsm8k-policy-hf", "OhCherryFire/llama2-7b-game24-sft-ep3-ct2", "OhCherryFire/llama2-7b-prontoqa-sft-ep1-ct2", "OhCherryFire/llama2-7b-game24-value", "OhCherryFire/llama2-7b-prontoqa-value", "OhCherryFire/llama2-7b-gsm8k-value", "OhCherryFire/llama2-7b-prontoqa-policy-hf", "OhCherryFire/llama2-7b-game24-policy-hf" ]
[]
[]
1
poster
null
https://openreview.net/forum?id=P8E4Br72j3
@inproceedings{ wang2023voyager, title={Voyager: An Open-Ended Embodied Agent with Large Language Models}, author={Guanzhi Wang and Yuqi Xie and Yunfan Jiang and Ajay Mandlekar and Chaowei Xiao and Yuke Zhu and Linxi Fan and Anima Anandkumar}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=P8E4Br72j3} }
We introduce Voyager, the first LLM-powered embodied lifelong learning agent in an open-ended world that continuously explores, acquires diverse skills, and makes novel discoveries without human intervention in Minecraft. Voyager consists of three key components: 1) an automatic curriculum that maximizes exploration, 2) an ever-growing skill library of executable code for storing and retrieving complex behaviors, and 3) a new iterative prompting mechanism that incorporates environment feedback, execution errors, and self-verification for program improvement. Voyager interacts with GPT-4 via blackbox queries, which bypasses the need for model parameter fine-tuning. The skills developed by Voyager are temporally extended, interpretable, and compositional, which compounds the agent’s capability rapidly and alleviates catastrophic forgetting. Empirically, Voyager demonstrates strong in-context lifelong learning capabilities. It outperforms prior SOTA by obtaining 3.1x more unique items, unlocking tech tree milestones up to 15.3x faster, and traveling 2.3x longer distances. Voyager is able to utilize the learned skill library in a new Minecraft world to solve novel tasks from scratch, while other techniques struggle to generalize.
Voyager: An Open-Ended Embodied Agent with Large Language Models
[ "Guanzhi Wang", "Yuqi Xie", "Yunfan Jiang", "Ajay Mandlekar", "Chaowei Xiao", "Yuke Zhu", "Linxi Fan", "Anima Anandkumar" ]
Workshop/FMDM
2305.16291
[ "https://github.com/MineDojo/Voyager" ]
https://huggingface.co/papers/2305.16291
4
9
4
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=NxLZ1URck2
@inproceedings{ zhou2023double, title={Double Policy Estimation for Importance Sampling in Sequence Modeling-Based Reinforcement Learning}, author={Hanhan Zhou and Tian Lan and Vaneet Aggarwal}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=NxLZ1URck2} }
Offline reinforcement learning aims to utilize datasets of previously gathered environment-action interaction records to learn a policy without access to the real environment. Recent work has shown that offline reinforcement learning can be formulated as a sequence modeling problem and solved via supervised learning with approaches such as decision transformer. While these sequence-based methods achieve competitive results over return-to-go methods, especially on tasks that require longer episodes or with scarce rewards, importance sampling is not considered to correct the policy bias when dealing with off-policy data, mainly due to the absence of behavior policy and the use of deterministic evaluation policies. To this end, we propose an RL algorithm that blends offline sequence modeling and offline reinforcement learning with Double Policy Estimation (DPE) in a unified framework with statistically proven properties on variance reduction. We validate our method in multiple tasks of OpenAI Gym with D4RL benchmarks. Our method brings performance improvements on selected methods and outperforms state-of-the-art baselines in several tasks, demonstrating the advantages of enabling double policy estimation for sequence-modeled reinforcement learning.
Double Policy Estimation for Importance Sampling in Sequence Modeling-Based Reinforcement Learning
[ "Hanhan Zhou", "Tian Lan", "Vaneet Aggarwal" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=MUtbsFRZwI
@inproceedings{ gandhi2023strategic, title={Strategic Reasoning with Language Models}, author={Kanishk Gandhi and Dorsa Sadigh and Noah Goodman}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=MUtbsFRZwI} }
Strategic reasoning enables agents to cooperate, communicate, and compete with other agents in diverse situations. Existing approaches to solving strategic games rely on extensive training, yielding strategies that do not generalize to new scenarios or games without retraining. Large Language Models (LLMs), with their ability to comprehend and generate complex, context-rich language, could prove powerful as tools for strategic gameplay. This paper introduces an approach that uses pretrained LLMs with few-shot chain-of-thought examples to enable strategic reasoning for AI agents. Our approach uses systematically generated demonstrations of reasoning about states, values, and beliefs to prompt the model. Using extensive variations of simple matrix games, we show that strategies that are derived based on systematically generated prompts generalize almost perfectly to new game structures, alternate objectives, and hidden information. Additionally, we demonstrate our approach can lead to human-like negotiation strategies in realistic scenarios without any extra training or fine-tuning. Our results highlight the ability of LLMs, guided by systematic reasoning demonstrations, to adapt and excel in diverse strategic scenarios.
Strategic Reasoning with Language Models
[ "Kanishk Gandhi", "Dorsa Sadigh", "Noah Goodman" ]
Workshop/FMDM
2305.19165
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=LjnW5oNHcD
@inproceedings{ baheri2023llmsaugmented, title={{LLM}s-augmented Contextual Bandit}, author={Ali Baheri and Cecilia Alm}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=LjnW5oNHcD} }
Contextual bandits have emerged as a cornerstone in reinforcement learning, enabling systems to make decisions with partial feedback. However, as contexts grow in complexity, traditional bandit algorithms can face challenges in adequately capturing and utilizing such contexts. In this paper, we propose a novel integration of large language models (LLMs) with the contextual bandit framework. By leveraging LLMs as an encoder, we enrich the representation of the context, providing the bandit with a denser and more informative view. Preliminary results on synthetic datasets demonstrate the potential of this approach, showing notable improvements in cumulative rewards and reductions in regret compared to traditional bandit algorithms. This integration not only showcases the capabilities of LLMs in reinforcement learning but also opens the door to a new era of contextually-aware decision systems.
LLMs-Augmented Contextual Bandit
[ "Ali Baheri", "Cecilia Alm" ]
Workshop/FMDM
2311.02268
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=L7vC3OYb2p
@inproceedings{ mousavi2023ncritics, title={N-Critics: Self-Refinement of Large Language Models with Ensemble of Critics}, author={Sajad Mousavi and Ricardo Luna Gutierrez and Desik Rengarajan and Vineet Gundecha and Ashwin Ramesh Babu and Avisek Naug and Antonio Guillen and Soumyendu Sarkar}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=L7vC3OYb2p} }
We propose a self-correction mechanism for Large Language Models (LLMs) to mitigate issues such as toxicity and fact hallucination. This method involves refining model outputs through an ensemble of critics and the model's own feedback. Drawing inspiration from human behavior, we explore whether LLMs can emulate the self-correction process observed in humans who often engage in self-reflection and seek input from others to refine their understanding of complex topics. Our approach is model-agnostic and can be applied across various domains to enhance trustworthiness by addressing fairness, bias, and robustness concerns. We consistently observe performance improvements in LLMs for reducing toxicity and correcting factual errors.
N-Critics: Self-Refinement of Large Language Models with Ensemble of Critics
[ "Sajad Mousavi", "Ricardo Luna Gutierrez", "Desik Rengarajan", "Vineet Gundecha", "Ashwin Ramesh Babu", "Avisek Naug", "Antonio Guillen", "Soumyendu Sarkar" ]
Workshop/FMDM
2310.18679
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=JfoPO121nT
@inproceedings{ huang2023voxposer, title={VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models}, author={Wenlong Huang and Chen Wang and Ruohan Zhang and Yunzhu Li and Jiajun Wu and Li Fei-Fei}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=JfoPO121nT} }
Large language models (LLMs) are shown to possess a wealth of actionable knowledge that can be extracted for robot manipulation in the form of reasoning and planning. Despite the progress, most still rely on pre-defined motion primitives to carry out the physical interactions with the environment, which remains a major bottleneck. In this work, we aim to synthesize robot trajectories, i.e., a dense sequence of 6-DoF end-effector waypoints, for a large variety of manipulation tasks given an open-set of instructions and an open-set of objects. We achieve this by first observing that LLMs excel at inferring affordances and constraints given a free-form language instruction. More importantly, by leveraging their code-writing capabilities, they can interact with a vision-language model (VLM) to compose 3D value maps to ground the knowledge into the observation space of the agent. The composed value maps are then used in a model-based planning framework to zero-shot synthesize closed-loop robot trajectories with robustness to dynamic perturbations. We further demonstrate how the proposed framework can benefit from online experiences by efficiently learning a dynamics model for scenes that involve contact-rich interactions. We present a large-scale study of the proposed method in both simulated and real-robot environments, showcasing the ability to perform a large variety of everyday manipulation tasks specified in free-form natural language.
VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models
[ "Wenlong Huang", "Chen Wang", "Ruohan Zhang", "Yunzhu Li", "Jiajun Wu", "Li Fei-Fei" ]
Workshop/FMDM
2307.05973
[ "https://github.com/huangwl18/voxposer" ]
https://huggingface.co/papers/2307.05973
3
3
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=JUwczEJY8I
@inproceedings{ rocamonde2023visionlanguage, title={Vision-Language Models are Zero-Shot Reward Models for Reinforcement Learning}, author={Juan Rocamonde and Victoriano Montesinos and Elvis Nava and Ethan Perez and David Lindner}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=JUwczEJY8I} }
Reinforcement learning (RL) requires either manually specifying a reward function, which is often infeasible, or learning a reward model from a large amount of human feedback, which is often very expensive. We study a more sample-efficient alternative: using pretrained vision-language models (VLMs) as zero-shot reward models (RMs) to specify tasks via natural language. We propose a natural and general approach to using VLMs as reward models, which we call VLM-RMs. We use VLM-RMs based on CLIP to train a MuJoCo humanoid to learn complex tasks without a manually specified reward function, such as kneeling, doing the splits, and sitting in a lotus position. For each of these tasks, we only provide a single sentence text prompt describing the desired task with minimal prompt engineering. We provide videos of the trained agents at: https://sites.google.com/view/vlm-rm. We can improve performance by providing a second “baseline” prompt and projecting out parts of the CLIP embedding space irrelevant to distinguish between goal and baseline. Further, we find a strong scaling effect for VLM-RMs: larger VLMs trained with more compute and data are better reward models. The failure modes of VLM-RMs we encountered are all related to known capability limitations of current VLMs, such as limited spatial reasoning ability or visually unrealistic environments that are far off-distribution for the VLM. We find that VLM-RMs are remarkably robust as long as the VLM is large enough. This suggests that future VLMs will become more and more useful reward models for a wide range of RL applications.
Vision-Language Models are Zero-Shot Reward Models for Reinforcement Learning
[ "Juan Rocamonde", "Victoriano Montesinos", "Elvis Nava", "Ethan Perez", "David Lindner" ]
Workshop/FMDM
2310.12921
[ "https://github.com/alignmentresearch/vlmrm" ]
https://huggingface.co/papers/2310.12921
4
19
1
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=JR1Z2GPNnE
@inproceedings{ sridhar2023goal, title={Goal Masked Diffusion Policies for Unified Navigation and Exploration}, author={Ajay Sridhar and Dhruv Shah and Catherine Glossop and Sergey Levine}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=JR1Z2GPNnE} }
Robotic learning for navigation in unfamiliar environments needs to provide policies for both task-oriented navigation (i.e., reaching a goal that the robot has located), and task-agnostic exploration (i.e., searching for a goal in a novel setting). Typically, these roles are handled by separate models, for example by using subgoal proposals, planning, or separate navigation strategies. In this paper, we describe how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration, with the latter providing the ability to search novel environments, and the former providing the ability to reach a user-specified goal once it has been located. We show that this unified policy results in better overall performance when navigating to visually indicated goals in novel environments, as compared to approaches that use subgoal proposals from generative models, or prior methods based on latent variable models. We instantiate our method by using a large-scale Transformer-based policy trained on data from multiple ground robots, with a diffusion model decoder to flexibly handle both goal-conditioned and goal-agnostic navigation. Our experiments, conducted on a real-world mobile robot platform, show effective navigation in unseen environments in comparison with five alternative methods, and demonstrate significant improvements in performance and lower collision rates, despite utilizing smaller models than state-of-the-art approaches.
NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration
[ "Ajay Sridhar", "Dhruv Shah", "Catherine Glossop", "Sergey Levine" ]
Workshop/FMDM
2310.07896
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=JB3H6LS1PY
@inproceedings{ raparthy2023learning, title={Learning to Solve New sequential decision-making Tasks with In-Context Learning}, author={Sharath Chandra Raparthy and Eric Hambro and Robert Kirk and Mikael Henaff and Roberta Raileanu}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=JB3H6LS1PY} }
Training autonomous agents that can generalize to new tasks from a small number of demonstrations is a long-standing problem in machine learning. Recently, transformers have displayed impressive few-shot learning capabilities on a wide range of domains in language and vision. However, the sequential decision-making setting poses additional challenges and has a much lower tolerance for errors since the environment's stochasticity or the agent's wrong actions can lead to unseen (and sometimes unrecoverable) states. In this paper, we use an illustrative example to show that a naive approach to using transformers in sequential decision-making problems does not lead to few-shot learning. We then demonstrate how training on sequences of trajectories with certain distributional properties leads to few-shot learning in new sequential decision-making tasks. We investigate different design choices and find that larger model and dataset sizes, as well as more task diversity, environment stochasticity and trajectory burstiness, all result in better generalization to out-of-distribution tasks given just a few demonstrations per task. Leveraging these insights, we demonstrate our model's generalization to unseen MiniHack and Procgen tasks via in-context learning from just a handful of expert demonstrations per task.
Learning to Solve New sequential decision-making Tasks with In-Context Learning
[ "Sharath Chandra Raparthy", "Eric Hambro", "Robert Kirk", "Mikael Henaff", "Roberta Raileanu" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=I1y2bCibdQ
@inproceedings{ nguyen2023expt, title={Ex{PT}: Scaling Foundation Models for Experimental Design via Synthetic Pretraining}, author={Tung Nguyen and Sudhanshu Agrawal and Aditya Grover}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=I1y2bCibdQ} }
Experimental design is a fundamental problem in many science and engineering fields. In this problem, sample efficiency is crucial due to the time, money, and safety costs of real-world design evaluations. Existing approaches either rely on active data collection or access to large, labeled datasets of past experiments, making them impractical in many real-world scenarios. In this work, we address the more challenging yet realistic setting of few-shot experimental design, where only a few labeled data points of input designs and their corresponding values are available. We approach this problem as a conditional generation task, where a model conditions on a few labeled examples and the desired output to generate an optimal input design. To this end, we present Pretrained Transformers for Experimental Design (ExPT), which uses a novel combination of synthetic pretraining with in-context learning to enable few-shot generalization. In ExPT, we only assume knowledge of a finite collection of unlabelled data points from the input domain and pretrain a transformer neural network to optimize diverse synthetic functions defined over this domain. Unsupervised pretraining allows ExPT to adapt to any design task at test time in an in-context fashion by conditioning on a few labeled data points from the target task and generating the candidate optima. We evaluate ExPT on few-shot experimental design in challenging domains and demonstrate its superior generality and performance compared to existing methods.
ExPT: Synthetic Pretraining for Few-Shot Experimental Design
[ "Tung Nguyen", "Sudhanshu Agrawal", "Aditya Grover" ]
Workshop/FMDM
2310.19961
[ "https://github.com/tung-nd/expt" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=HtbMMAoxre
@inproceedings{ rahman2023natural, title={Natural Language-based State Representation in Deep Reinforcement Learning}, author={Md Masudur Rahman and Yexiang Xue}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=HtbMMAoxre} }
This study investigates the potential of using natural language descriptions as an alternative to direct image-based observations for learning policies in reinforcement learning. Due to the inherent challenges in managing image-based observations, which include abundant information and irrelevant features, we propose a method that compresses images into a natural language form for state representation. This approach allows better interpretability and leverages the processing capabilities of large language models (LLMs). We conducted several experiments involving tasks that required image-based observation. The results demonstrated that policies trained using natural language descriptions of images yield better generalization than those trained directly from images, emphasizing the potential of this approach in practical settings.
Natural Language-based State Representation in Deep Reinforcement Learning
[ "Md Masudur Rahman", "Yexiang Xue" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=HXWpdyTUsL
@inproceedings{ hu2023avis, title={{AVIS}: Autonomous Visual Information Seeking with Large Language Model Agent}, author={Ziniu Hu}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=HXWpdyTUsL} }
In this paper, we propose an autonomous information seeking visual question answering framework, AVIS. Our method leverages a Large Language Model (LLM) to dynamically strategize the utilization of external tools and to investigate their outputs, thereby acquiring the indispensable knowledge needed to provide answers to the posed questions. Responding to visual questions that necessitate external knowledge, such as "What event is commemorated by the building depicted in this image?", is a complex task. This task presents a combinatorial search space that demands a sequence of actions, including invoking APIs, analyzing their responses, and making informed decisions. We conduct a user study to collect a variety of instances of human decision-making when faced with this task. This data is then used to design a system comprised of three components: an LLM-powered planner that dynamically determines which tool to use next, an LLM-powered reasoner that analyzes and extracts key information from the tool outputs, and a working memory component that retains the acquired information throughout the process. The collected user behavior serves as a guide for our system in two key ways. First, we create a transition graph by analyzing the sequence of decisions made by users. This graph delineates distinct states and confines the set of actions available at each state. Second, we use examples of user decision-making to provide our LLM-powered planner and reasoner with relevant contextual instances, enhancing their capacity to make informed decisions. We show that AVIS achieves state-of-the-art results on knowledge-based visual question answering benchmarks such as Infoseek and OK-VQA.
AVIS: Autonomous Visual Information Seeking with Large Language Model Agent
[ "Ziniu Hu" ]
Workshop/FMDM
2306.08129
[ "" ]
https://huggingface.co/papers/2306.08129
3
5
0
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=HO7x5jrVPt
@inproceedings{ zhang2023enhancing, title={Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting}, author={Xinlu Zhang and Shiyang Li and Xianjun Yang and Chenxin Tian and Yao Qin and Linda Petzold}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=HO7x5jrVPt} }
Large language models (LLMs) demonstrate remarkable medical expertise, but data privacy concerns impede their direct use in healthcare environments. Although offering improved data privacy protection, domain-specific small language models (SLMs) often underperform LLMs, emphasizing the need for methods that reduce this performance gap while alleviating privacy concerns. In this paper, we present a simple yet effective method that harnesses LLMs' medical proficiency to boost SLM performance in medical tasks under privacy-restricted scenarios. Specifically, we mitigate patient privacy issues by extracting keywords from medical data and prompting the LLM to generate a medical knowledge-intensive context by simulating clinicians' thought processes. This context serves as additional input for SLMs, augmenting their decision-making capabilities. Our method significantly enhances performance in both few-shot and full training settings across three medical knowledge-intensive tasks, achieving up to a 22.57% increase in absolute accuracy compared to SLM fine-tuning without context, and sets new state-of-the-art results in two medical tasks within privacy-restricted scenarios. Further out-of-domain testing and experiments in two general domain datasets showcase its generalizability and broad applicability.
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
[ "Xinlu Zhang", "Shiyang Li", "Xianjun Yang", "Chenxin Tian", "Yao Qin", "Linda Ruth Petzold" ]
Workshop/FMDM
2305.12723
[ "https://github.com/xzhang97666/privacyboost-slm" ]
https://huggingface.co/papers/2305.12723
2
0
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=Gv04zPxvCq
@inproceedings{ prakash2023llm, title={{LLM} Augmented Hierarchical Agents}, author={Bharat Prakash and Tim Oates and Tinoosh Mohsenin}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=Gv04zPxvCq} }
Solving long-horizon, temporally extended tasks using Reinforcement Learning (RL) is extremely challenging, compounded by the common practice of learning without prior knowledge (or tabula rasa learning). Humans can generate and execute plans with temporally extended actions and learn to perform new tasks because we almost never solve problems from scratch. We want autonomous agents to have the same capabilities. Recently, LLMs have been shown to encode a tremendous amount of knowledge about the world and impressive in-context learning and reasoning capabilities. However, using LLMs to solve real-world tasks is challenging as these models are not grounded in the current task. We want to leverage the planning capabilities of LLMs while using RL to provide the essential environment interaction. In this paper, we present a hierarchical agent which uses LLMs to solve long-horizon tasks. Instead of completely relying on LLMs, we use them to guide the high-level policy, making them significantly more sample efficient. We evaluate our method on simulation environments such as MiniGrid, SkillHack, Crafter and on a real robot arm in block manipulation tasks. We show that agents trained using our method outperform other baseline methods and, once trained, they don't depend on LLMs during deployment.
LLM Augmented Hierarchical Agents
[ "Bharat Prakash", "Tim Oates", "Tinoosh Mohsenin" ]
Workshop/FMDM
2311.05596
[ "" ]
https://huggingface.co/papers/2311.05596
0
1
0
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=GrkgKtOjaH
@inproceedings{ ruan2023tptu, title={{TPTU}: Task Planning and Tool Usage of Large Language Model-based {AI} Agents}, author={Jingqing Ruan and YiHong Chen and Bin Zhang and Zhiwei Xu and Tianpeng Bao and du qing and shi shiwei and Hangyu Mao and Xingyu Zeng and Rui Zhao}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=GrkgKtOjaH} }
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
TPTU: Task Planning and Tool Usage of Large Language Model-based AI Agents
[ "Jingqing Ruan", "YiHong Chen", "Bin Zhang", "Zhiwei Xu", "Tianpeng Bao", "du guo qing", "shi shiwei", "Hangyu Mao", "Ziyue Li", "Xingyu Zeng", "Rui Zhao" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=GYTxBuYMfC
@inproceedings{ yu2023mathcalbcoder, title={$\mathcal{B}$-Coder: On Value-Based Deep Reinforcement Learning for Program Synthesis}, author={Zishun Yu and Yunzhe Tao and Liyu Chen and Tao Sun and Hongxia Yang}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=GYTxBuYMfC} }
Program synthesis aims to create accurate, executable code from natural language descriptions. This field has leveraged the power of reinforcement learning (RL) in conjunction with large language models (LLMs), significantly enhancing code generation capabilities. This integration focuses on directly optimizing functional correctness, transcending conventional supervised losses. While current literature predominantly favors policy-based algorithms, attributes of program synthesis suggest a natural compatibility with value-based methods. This stems from the rich collection of off-policy programs developed by human programmers, and the straightforward verification of generated programs through automated unit testing (i.e. easily obtainable rewards in RL language). Diverging from the predominant use of policy-based algorithms, our work explores the applicability of value-based approaches, leading to the development of our $\mathcal{B}$-Coder (pronounced Bellman coder). Yet, training value-based methods presents challenges due to the enormous search space inherent to program synthesis. To this end, we propose an initialization protocol for RL agents utilizing pre-trained LMs and a conservative Bellman operator to reduce training complexities. Moreover, we demonstrate how to leverage the learned value functions as a dual strategy to post-process generated programs. Our empirical evaluations demonstrated $\mathcal{B}$-Coder's capability in achieving state-of-the-art performance compared with policy-based methods. Remarkably, this achievement is reached with minimal reward engineering effort, highlighting the effectiveness of value-based RL, independent of reward designs.
ℬ-Coder: On Value-Based Deep Reinforcement Learning for Program Synthesis
[ "Zishun Yu", "Yunzhe Tao", "Liyu Chen", "Tao Sun", "Hongxia Yang" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=FzpfPa6unv
@inproceedings{ hansen2023tdmpc, title={{TD}-{MPC}2: Scalable, Robust World Models for Continuous Control}, author={Nicklas Hansen and Hao Su and Xiaolong Wang}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=FzpfPa6unv} }
TD-MPC is a model-based reinforcement learning (RL) algorithm that performs local trajectory optimization in the latent space of a learned implicit (decoder-free) world model. In this work, we present TD-MPC2: a series of improvements upon the TD-MPC algorithm. We demonstrate that TD-MPC2 improves significantly over baselines across 104 online RL tasks spanning 4 diverse task domains, achieving consistently strong results with a single set of hyperparameters. We further show that agent capabilities increase with model and data size, and successfully train a single 317M parameter agent to perform 80 tasks across multiple task domains, embodiments, and action spaces. We conclude with an account of lessons, opportunities, and risks associated with large TD-MPC2 agents. Explore videos, models, data, code, and more at https://nicklashansen.github.io/td-mpc2
TD-MPC2: Scalable, Robust World Models for Continuous Control
[ "Nicklas Hansen", "Hao Su", "Xiaolong Wang" ]
Workshop/FMDM
2310.16828
[ "" ]
https://huggingface.co/papers/2310.16828
1
7
0
3
[ "nicklashansen/tdmpc2" ]
[ "nicklashansen/tdmpc2" ]
[]
[ "nicklashansen/tdmpc2" ]
[ "nicklashansen/tdmpc2" ]
[]
1
poster
null
https://openreview.net/forum?id=FtgaSZKFti
@inproceedings{ miao2023scaling, title={Scaling Offline Q-Learning with Vision Transformers}, author={Yingjie Miao and Jordi Orbay and Rishabh Agarwal and Aviral Kumar and George Tucker and Aleksandra Faust}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=FtgaSZKFti} }
It has been shown that offline RL methods, such as conservative Q-learning (CQL), scale favorably for training generalist agents with a ResNet backbone. Recent vision and natural language processing research shows that transformer-based models scale more favorably compared to domain-specific models with strong inductive biases (such as convolutional neural networks and recurrent neural networks). In this paper, we investigate how well vision transformers (ViTs) serve as backbones for CQL for training single-game agents. We enhance the ViT for image-based RL by introducing spatio-temporal attention layers. We further investigate the impact of various embedding sequence aggregation methods on ViT performance. Overall, our modified ViT outperforms the standard ViTs in the single-game Atari setting.
Scaling Offline Q-Learning with Vision Transformers
[ "Yingjie Miao", "Jordi Orbay", "Rishabh Agarwal", "Aviral Kumar", "George Tucker", "Aleksandra Faust" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=FhmX2q2MP8
@inproceedings{ ruan2023identifying, title={Identifying the Risks of {LM} Agents with an {LM}-Emulated Sandbox}, author={Yangjun Ruan and Honghua Dong and Andrew Wang and Silviu Pitis and Yongchao Zhou and Jimmy Ba and Yann Dubois and Chris Maddison and Tatsunori Hashimoto}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=FhmX2q2MP8} }
Recent advances in Language Model (LM) agents and tool use, exemplified by applications like ChatGPT Plugins, enable a rich set of capabilities but also amplify potential risks—such as leaking private data or causing financial losses. Identifying these risks is labor-intensive, necessitating implementing the tools, setting up the environment for each test scenario manually, and finding risky cases. As tools and agents become more complex, the high cost of testing these agents will make it increasingly difficult to find high-stakes, long-tail risks. To address these challenges, we introduce ToolEmu: a framework that uses an LM to emulate tool execution and enables scalable testing of LM agents against a diverse range of tools and scenarios. Alongside the emulator, we develop an LM-based automatic safety evaluator that examines agent failures and quantifies associated risks. We test both the tool emulator and evaluator through human evaluation and find that 68.8% of failures identified with ToolEmu would be valid real-world agent failures. Using our curated initial benchmark consisting of 36 high-stakes toolkits and 144 test cases, we provide a quantitative risk analysis of current LM agents and identify numerous failures with potentially severe outcomes. Notably, even the safest LM agent exhibits such failures 23.9% of the time according to our evaluator, underscoring the need to develop safer LM agents for real-world deployment.
Identifying the Risks of LM Agents with an LM-Emulated Sandbox
[ "Yangjun Ruan", "Honghua Dong", "Andrew Wang", "Silviu Pitis", "Yongchao Zhou", "Jimmy Ba", "Yann Dubois", "Chris J. Maddison", "Tatsunori Hashimoto" ]
Workshop/FMDM
2309.15817
[ "https://github.com/ryoungj/toolemu" ]
https://huggingface.co/papers/2309.15817
1
0
0
9
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=FUdZ6HEOre
@inproceedings{ zhang2023using, title={Using Large Language Models for Hyperparameter Optimization}, author={Michael Zhang and Nishkrit Desai and Juhan Bae and Jonathan Lorraine and Jimmy Ba}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=FUdZ6HEOre} }
This paper studies using foundational large language models (LLMs) to make decisions during hyperparameter optimization (HPO). Empirical evaluations demonstrate that in settings with constrained search budgets, LLMs can perform comparably or better than traditional HPO methods like random search and Bayesian optimization on standard benchmarks. Furthermore, we propose to treat the code specifying our model as a hyperparameter, which the LLM outputs, going beyond the capabilities of existing HPO approaches. Our findings suggest that LLMs are a promising tool for improving efficiency in the traditional decision-making problem of hyperparameter optimization.
Using Large Language Models for Hyperparameter Optimization
[ "Michael R. Zhang", "Nishkrit Desai", "Juhan Bae", "Jonathan Lorraine", "Jimmy Ba" ]
Workshop/FMDM
2312.04528
[ "https://github.com/michaelrzhang/llm-hyperopt" ]
https://huggingface.co/papers/2312.04528
1
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=FDeV7BOcob
@inproceedings{ zhao2023large, title={Large Language Models as Commonsense Knowledge for Large-Scale Task Planning}, author={Zirui Zhao and Wee Sun Lee and David Hsu}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=FDeV7BOcob} }
Large-scale task planning is a major challenge. Recent work exploits large language models (LLMs) directly as a policy and shows surprisingly interesting results. This paper shows that LLMs provide a commonsense model of the world in addition to a policy that acts on it. The world model and the policy can be combined in a search algorithm, such as Monte Carlo Tree Search (MCTS), to scale up task planning. In our new LLM-MCTS algorithm, the LLM-induced world model provides a commonsense prior belief for MCTS to achieve effective reasoning; the LLM-induced policy acts as a heuristic to guide the search, vastly improving search efficiency. Experiments show that LLM-MCTS outperforms both MCTS alone and policies induced by LLMs (GPT2 and GPT3.5) by a wide margin, for complex, novel tasks. Further experiments and analyses on multiple tasks---multiplication, multi-hop travel planning, object rearrangement---suggest minimum description length (MDL) as a general guiding principle: if the description length of the world model is substantially smaller than that of the policy, using LLM as a world model for model-based planning is likely better than using LLM solely as a policy.
Large Language Models as Commonsense Knowledge for Large-Scale Task Planning
[ "Zirui Zhao", "Wee Sun Lee", "David Hsu" ]
Workshop/FMDM
2305.14078
[ "" ]
https://huggingface.co/papers/2305.14078
0
0
0
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=DpUEkGZ6wl
@inproceedings{ aouali2023linear, title={Linear diffusion models meet contextual bandits with large action spaces}, author={Imad Aouali}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=DpUEkGZ6wl} }
Efficient exploration is a key challenge in contextual bandits due to the potentially large size of their action space, where uninformed exploration can result in computational and statistical inefficiencies. Fortunately, the rewards of actions are often correlated and this can be leveraged to explore them efficiently. In this work, we capture such correlations using pre-trained linear diffusion models; upon which we design diffusion Thompson sampling (dTS). Both theoretical and algorithmic foundations are developed for dTS, and empirical evaluation also shows its favorable performance.
Linear diffusion models meet contextual bandits with large action spaces
[ "Imad Aouali" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=CvKhipf77V
@inproceedings{ gao2023policygradient, title={Policy-Gradient Training of Language Models for Ranking}, author={Ge Gao and Jonathan Chang and Claire Cardie and Kiant{\'e} Brantley and Thorsten Joachims}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=CvKhipf77V} }
Text retrieval plays a crucial role in incorporating factual knowledge for decision making into language processing pipelines, ranging from chat-based web search to question answering systems. Current state-of-the-art text retrieval models leverage pre-trained large language models (LLMs) to achieve competitive performance, but training LLM-based retrievers via typical contrastive losses requires intricate heuristics, including selecting hard negatives and using additional supervision as learning signals. This reliance on heuristics stems from the fact that the contrastive loss itself is heuristic and does not directly optimize the downstream metrics of decision quality at the end of the processing pipeline. To address this issue, we introduce Neural PG-RANK, a novel training algorithm that learns to rank by instantiating an LLM as a Plackett-Luce ranking policy. Neural PG-RANK provides a principled method for end-to-end training of retrieval models as part of larger decision systems via policy gradient, with little reliance on complex heuristics, and it effectively unifies the training objective with downstream decision-making quality. We conduct extensive experiments on various text retrieval benchmarks. The results demonstrate that when the training objective aligns with the evaluation setup, Neural PG-RANK yields remarkable in-domain performance improvement, with substantial out-of-domain generalization to some critical datasets employed in downstream question answering tasks.
Policy-Gradient Training of Language Models for Ranking
[ "Ge Gao", "Jonathan Daniel Chang", "Claire Cardie", "Kianté Brantley", "Thorsten Joachims" ]
Workshop/FMDM
2310.04407
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=CqMhoTVgHX
@inproceedings{ lin2023transformers, title={Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining}, author={Licong Lin and Yu Bai and Song Mei}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=CqMhoTVgHX} }
Large transformer models pretrained on offline reinforcement learning datasets have demonstrated remarkable in-context reinforcement learning (ICRL) capabilities, where they can make good decisions when prompted with interaction trajectories from unseen environments. However, when and how transformers can be trained to perform ICRL have not been theoretically well-understood. In particular, it is unclear which reinforcement-learning algorithms transformers can perform in context, and how distribution mismatch in offline training data affects the learned algorithms. This paper provides a theoretical framework that analyzes supervised pretraining for ICRL. This includes two recently proposed training methods --- algorithm distillation and decision-pretrained transformers. First, assuming model realizability, we prove the supervised-pretrained transformer will imitate the conditional expectation of the expert algorithm given the observed trajectory. The generalization error will scale with model capacity and a distribution divergence factor between the expert and offline algorithms. Second, we show transformers with ReLU attention can efficiently approximate near-optimal online reinforcement learning algorithms like LinUCB and Thompson sampling for stochastic linear bandits, and UCB-VI for tabular Markov decision processes. This provides the first quantitative analysis of the ICRL capabilities of transformers pretrained from offline trajectories.
Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining
[ "Licong Lin", "Yu Bai", "Song Mei" ]
Workshop/FMDM
2310.08566
[ "https://github.com/licong-lin/in-context-rl" ]
https://huggingface.co/papers/2310.08566
2
0
0
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=AVg8WnI5ba
@inproceedings{ chen2023visionlanguage, title={Vision-Language Models Provide Promptable Representations for Reinforcement Learning}, author={William Chen and Oier Mees and Aviral Kumar and Sergey Levine}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=AVg8WnI5ba} }
Intelligent beings have the ability to quickly learn new behaviors and tasks by leveraging background world knowledge. This stands in contrast to most agents trained with reinforcement learning (RL), which typically learn behaviors from scratch. Therefore, we would like to endow RL agents with a similar ability to leverage contextual prior information. To this end, we propose a novel approach that uses the vast amounts of general-purpose, diverse, and indexable world knowledge encoded in vision-language models (VLMs) pre-trained on Internet-scale data to generate text in response to images and prompts. We initialize RL policies with VLMs by using such models as sources of \textit{promptable representations}: embeddings that are grounded in visual observations and encode semantic features based on the VLM's internal knowledge, as elicited through prompts that provide task context and auxiliary information. We evaluate our approach on visually-complex RL tasks in Minecraft. We find that policies trained on promptable embeddings significantly outperform equivalent policies trained on generic, non-promptable image encoder features. Moreover, we show that promptable representations extracted from general-purpose VLMs outperform both domain-specific representations and instruction-following methods. In ablations, we find that VLM promptability and text generation both are important in yielding good representations for RL. Finally, we give a simple method for evaluating and optimizing prompts used by our approach for a given task without running expensive RL trials, ensuring that it extracts task-relevant semantic features from the VLM.
Vision-Language Models Provide Promptable Representations for Reinforcement Learning
[ "William Chen", "Oier Mees", "Aviral Kumar", "Sergey Levine" ]
Workshop/FMDM
2402.02651
[ "" ]
https://huggingface.co/papers/2402.02651
0
0
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=8v8AVAo6E5
@inproceedings{ klissarov2023motif, title={Motif: Intrinsic Motivation from Artificial Intelligence Feedback}, author={Martin Klissarov and Pierluca D'Oro and Shagun Sodhani and Roberta Raileanu and Pierre-Luc Bacon and Pascal Vincent and Amy Zhang and Mikael Henaff}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=8v8AVAo6E5} }
Exploring rich environments and evaluating one's actions without prior knowledge is immensely challenging. In this paper, we propose Motif, a general method to interface such prior knowledge from a Large Language Model (LLM) with an agent. Motif is based on the idea of grounding LLMs for decision-making without requiring them to interact with the environment: it elicits preferences from an LLM over pairs of captions to construct an intrinsic reward, which is then used to train agents with reinforcement learning. We evaluate Motif's performance and behavior on the challenging, open-ended and procedurally-generated NetHack game. Surprisingly, by only learning to maximize its intrinsic reward, Motif achieves a higher game score than an algorithm directly trained to maximize the score itself. When combining Motif's intrinsic reward with the environment reward, our method significantly outperforms existing approaches and makes progress on tasks where no advancements have ever been made without demonstrations. Finally, we show that Motif mostly generates intuitive human-aligned behaviors which can be steered easily through prompt modifications, while scaling well with the LLM size and the amount of information given in the prompt.
Motif: Intrinsic Motivation from Artificial Intelligence Feedback
[ "Martin Klissarov", "Pierluca D'Oro", "Shagun Sodhani", "Roberta Raileanu", "Pierre-Luc Bacon", "Pascal Vincent", "Amy Zhang", "Mikael Henaff" ]
Workshop/FMDM
2310.00166
[ "https://github.com/facebookresearch/motif" ]
https://huggingface.co/papers/2310.00166
0
0
0
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=7RMTrQGrXS
@inproceedings{ ma2023eureka, title={Eureka: Human-Level Reward Design via Coding Large Language Models}, author={Yecheng Jason Ma and William Liang and Guanzhi Wang and De-An Huang and Osbert Bastani and Dinesh Jayaraman and Yuke Zhu and Linxi Fan and Anima Anandkumar}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=7RMTrQGrXS} }
Large Language Models (LLMs) have excelled as high-level semantic planners for sequential decision-making tasks. However, harnessing them to learn complex low-level manipulation tasks, such as dexterous pen spinning, remains an open problem. We bridge this fundamental gap and present Eureka, a human-level reward design algorithm powered by LLMs. Eureka exploits the remarkable zero-shot generation, code-writing, and in-context improvement capabilities of state-of-the-art LLMs, such as GPT-4, to perform evolutionary optimization over reward code. The resulting rewards can then be used to acquire complex skills via reinforcement learning. Without any task-specific prompting or pre-defined reward templates, Eureka generates reward functions that outperform expert human-engineered rewards. In a diverse suite of 29 open-source RL environments that include 10 distinct robot morphologies, Eureka outperforms human experts on 83% of the tasks, leading to an average normalized improvement of 52%. The generality of Eureka also enables a new gradient-free in-context learning approach to reinforcement learning from human feedback (RLHF), readily incorporating human inputs to improve the quality and the safety of the generated rewards without model updating. Finally, using Eureka rewards in a curriculum learning setting, we demonstrate, for the first time, a simulated Shadow Hand capable of performing pen spinning tricks, adeptly manipulating a pen in circles at rapid speed.
Eureka: Human-Level Reward Design via Coding Large Language Models
[ "Yecheng Jason Ma", "William Liang", "Guanzhi Wang", "De-An Huang", "Osbert Bastani", "Dinesh Jayaraman", "Yuke Zhu", "Linxi Fan", "Anima Anandkumar" ]
Workshop/FMDM
2310.12931
[ "https://github.com/eureka-research/Eureka" ]
https://huggingface.co/papers/2310.12931
5
26
3
9
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=5wiFcIvy84
@inproceedings{ sheikh2023language, title={Language Conditioned Semantic Search Based Policy for Robotic Manipulation Tasks}, author={Jannik Sheikh and Andrew Melnik and G Nandi and Robert Haschke}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=5wiFcIvy84} }
Reinforcement learning and Imitation Learning approaches utilize policy learning strategies that struggle to generalize well from just a few examples of a task. In this work, we propose a language-conditioned semantic search-based method to produce an online search-based policy from the available demonstration dataset of state-action trajectories. Here we directly acquire actions from the most similar manipulation trajectories found in the dataset. Our approach surpasses the performance of the baselines on the CALVIN benchmark and exhibits strong zero-shot adaptation capabilities. This holds great potential for expanding the use of our online search-based policy approach to tasks typically addressed by Imitation Learning or Reinforcement Learning-based policies.
Language-Conditioned Semantic Search-Based Policy for Robotic Manipulation Tasks
[ "Jannik Sheikh", "Andrew Melnik", "Gora Chand Nandi", "Robert Haschke" ]
Workshop/FMDM
2312.05925
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=5lcPe6DqfI
@inproceedings{ srinivasan2023nexusraven, title={NexusRaven: A Commercially-Permissive Language Model for Function Calling}, author={Venkat Krishna Srinivasan and Zhen Dong and Banghua Zhu and Brian Yu and Hanzi Mao and Damon Mosk-Aoyama and Kurt Keutzer and Jiantao Jiao and Jian Zhang}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=5lcPe6DqfI} }
The rise of open-source, commercially permissive large language models (LLMs) is revolutionizing generative AI, presenting organizations with enhanced control, minimized data risks, and cost benefits compared to proprietary models. However, in the field of tool use and function-calling LLMs, many open-source models, such as Gorilla and ToolLLAMA, are dependent on proprietary LLMs like GPT-4 for high-quality training data, which often faces legal restrictions for competitive commercial applications. In this paper, we introduce NexusRaven-13B, an open-source LLM designed for function calls. Originating from the CodeLLAMA-13B lineage, NexusRaven-13B employs a unique data curation via multi-step refinement, ensuring high-quality training data without relying on GPT-4 distillation. NexusRaven-13B matches GPT-3.5 in zero-shot function-calling accuracy. When combined with our second core technique, demonstration retrieval augmentation, its performance significantly surpasses GPT-4. The code, model, and demo will be available after the review process.
NexusRaven: A Commercially-Permissive Language Model for Function Calling
[ "Venkat Krishna Srinivasan", "Zhen Dong", "Banghua Zhu", "Brian Yu", "Damon Mosk-Aoyama", "Kurt Keutzer", "Jiantao Jiao", "Jian Zhang" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=5TIdOk7XQ6
@inproceedings{ yocum2023mitigating, title={Mitigating Generative Agent Social Dilemmas}, author={Julian Yocum and Phillip Christoffersen and Mehul Damani and Justin Svegliato and Dylan Hadfield-Menell and Stuart Russell}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=5TIdOk7XQ6} }
In social dilemmas, individuals would be better off cooperating but fail to do so due to conflicting interests that discourage cooperation. Existing work on social dilemmas in AI has focused on standard agent design paradigms, most recently in the context of multi-agent reinforcement learning (MARL). However, with the rise of large language models (LLMs), a new design paradigm for AI systems has started to emerge---generative agents, in which actions performed by agents are chosen by prompting LLMs. This paradigm has seen recent success, such as Voyager, a highly capable Minecraft agent. In this work, we perform an initial study of outcomes that arise when deploying generative agents in social dilemmas. To do this, we build a multi-agent Voyager framework with a contracting and judgement mechanism based on formal contracting, which has been effective in mitigating social dilemmas in MARL. We then construct social dilemmas in Minecraft as the testbed for our open-source framework. Finally, we conduct preliminary experiments using our framework to provide evidence that contracting helps improve outcomes for generative agents in social dilemmas.
Mitigating Generative Agent Social Dilemmas
[ "Julian Yocum", "Phillip J.K. Christoffersen", "Mehul Damani", "Justin Svegliato", "Dylan Hadfield-Menell", "Stuart Russell" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=5IH0pideQK
@inproceedings{ zhang2023incontext, title={In-Context Multi-Armed Bandits via Supervised Pretraining}, author={Fred Zhang and Jiaxin Ye and Zhuoran Yang}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=5IH0pideQK} }
Exploring the in-context learning capabilities of large transformer models, this research focuses on decision-making within reinforcement learning (RL) environments, specifically multi-armed bandit problems. We introduce the Reward Weighted Decision-Pretrained Transformer (DPT-RW), a model that uses straightforward supervised pretraining with a reward-weighted imitation learning loss. The DPT-RW predicts optimal actions by evaluating a query state and an in-context dataset across varied tasks. Surprisingly, this simple approach produces a model capable of solving a wide range of RL problems in-context, demonstrating online exploration and offline conservatism without specific training in these areas. A standout observation is that the model performs optimally in the online setting, despite being trained on data generated from suboptimal policies and not having access to optimal data.
In-Context Multi-Armed Bandits via Supervised Pretraining
[ "Fred Weiying Zhang", "Jiaxin Ye", "Zhuoran Yang" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=4Qd2tuJ5h6
@inproceedings{ cai2023groot, title={{GROOT}: Learning to Follow Instructions by Watching Gameplay Videos}, author={Shaofei Cai and Bowei Zhang and Zihao Wang and Xiaojian Ma and Anji Liu and Yitao Liang}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=4Qd2tuJ5h6} }
We study the problem of building a controller that can follow open-ended instructions in open-world environments. We propose to follow reference videos as instructions, which offer expressive goal specifications while eliminating the need for expensive text-gameplay annotations. A new learning framework is derived to allow learning such instruction-following controllers from gameplay videos while producing a video instruction encoder that induces a structured goal space. We implement our agent GROOT in a simple yet effective encoder-decoder architecture based on causal transformers. We evaluate GROOT against open-world counterparts and human players on a proposed Minecraft SkillForge benchmark. The Elo ratings clearly show that GROOT is closing the human-machine gap as well as exhibiting a 70% winning rate over the best generalist agent baseline. Qualitative analysis of the induced goal space further demonstrates some interesting emergent properties, including the goal composition and complex gameplay behavior synthesis.
GROOT: Learning to Follow Instructions by Watching Gameplay Videos
[ "Shaofei Cai", "Bowei Zhang", "Zihao Wang", "Xiaojian Ma", "Anji Liu", "Yitao Liang" ]
Workshop/FMDM
2310.08235
[ "" ]
https://huggingface.co/papers/2310.08235
3
0
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=2SjoG6lVz3
@inproceedings{ piriyakulkij2023asking, title={Asking Clarifying Questions using Language Models and Probabilistic Reasoning}, author={Top Piriyakulkij and Volodymyr Kuleshov and Kevin Ellis}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=2SjoG6lVz3} }
Actively inferring user preferences, for example by asking good questions, is important for any human-facing decision-making system. Active inference allows such systems to adapt and personalize themselves to nuanced individual preferences. To enable this ability for instruction-tuned large language models (LLMs), one may prompt them to ask users questions to infer their preferences, transforming the language models into more robust, interactive systems. However, out of the box, these models are not efficient at extracting preferences: the questions they generate are not informative, requiring a high number of user interactions and impeding the usability of the downstream system. In this work, we introduce an inference-time algorithm that helps LLMs quickly infer preferences by using more informative questions. Our algorithm uses a probabilistic model whose conditional distributions are defined by prompting an LLM, and returns questions that optimize expected entropy and expected model change. Results in a simplified interactive web shopping setting with real product items show that an LLM equipped with our entropy reduction algorithm outperforms baselines with the same underlying LLM on task performance while using fewer user interactions.
Active Preference Inference using Language Models and Probabilistic Reasoning
[ "Top Piriyakulkij", "Volodymyr Kuleshov", "Kevin Ellis" ]
Workshop/FMDM
2312.12009
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2KY3WwgcTi
@inproceedings{ pan2023pretraining, title={Pre-Training and Fine-Tuning Generative Flow Networks}, author={Ling Pan and Moksh Jain and Kanika Madan and Yoshua Bengio}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=2KY3WwgcTi} }
Generative Flow Networks (GFlowNets) are amortized samplers that learn stochastic policies to sequentially generate compositional objects from a given unnormalized reward distribution. They can generate diverse sets of high-reward objects, which is an important consideration in scientific discovery tasks. However, as they are typically trained from a given extrinsic reward function, it remains an important open challenge about how to leverage the power of pre-training and train GFlowNets in an unsupervised fashion for efficient adaptation to downstream tasks. Inspired by recent successes of unsupervised pre-training in various domains, we introduce a novel approach for reward-free pre-training of GFlowNets. By framing the training as a self-supervised problem, we propose an outcome-conditioned GFlowNet (OC-GFN) that learns to explore the candidate space. Specifically, OC-GFN learns to reach any targeted outcomes, akin to goal-conditioned policies in reinforcement learning. We show that the pre-trained OC-GFN model can allow for a direct extraction of a policy capable of sampling from any new reward functions in downstream tasks. Nonetheless, adapting OC-GFN on a downstream task-specific reward involves an intractable marginalization over possible outcomes. We propose a novel way to approximate this marginalization by learning an amortized predictor enabling efficient fine-tuning. Extensive experimental results validate the efficacy of our approach, demonstrating the effectiveness of pre-training the OC-GFN, and its ability to swiftly adapt to downstream tasks and discover modes more efficiently. This work may serve as a foundation for further exploration of pre-training strategies in the context of GFlowNets.
Pre-Training and Fine-Tuning Generative Flow Networks
[ "Ling Pan", "Moksh Jain", "Kanika Madan", "Yoshua Bengio" ]
Workshop/FMDM
2310.03419
[ "" ]
https://huggingface.co/papers/2310.03419
1
0
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=1SJZVCahQW
@inproceedings{ ghugare2023closing, title={Closing the Gap between {TD} Learning and Supervised Learning -- A Generalisation Point of View.}, author={Raj Ghugare and Matthieu Geist and Glen Berseth and Benjamin Eysenbach}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=1SJZVCahQW} }
Recent years have seen a drastic shift of focus on large models trained with simple self-supervised objectives on diverse datasets. These foundational models have become ubiquitous in NLP and vision because they are generally applicable to many downstream tasks. These success stories have sparked various attempts to train similar models for RL problems. The Decision Transformer (DT) is one such popular approach that treats the RL problem as a sequence modeling problem, and uses a transformer model for predicting actions. This algorithmic choice, though simple, can have certain limitations when compared to traditional RL algorithms. In this paper, we study one such limitation -- the capability of recombining together pieces of previously seen experience to solve a task never seen before during training. This paper studies this question in the setting of goal-reaching problems. We formalize this desirable property as a form of \emph{stitching} generalization: after training on a distribution of (state, goal) pairs, one would like to evaluate on (state, goal) pairs not seen \emph{together} in the training data. Our analysis shows that this sort of generalization is different from \emph{i.i.d.} generalization. This connection between stitching and generalization reveals why we should not expect existing DT-like methods to perform stitching, even in the limit of large datasets and models. We experimentally validate this result on carefully constructed datasets. This connection also suggests a simple remedy, the same remedy for improving generalization in supervised learning: data augmentation. We propose a naive \emph{temporal} data augmentation approach and demonstrate that adding it to RL methods based on SL enables them to stitch together experience so that they succeed in navigating between states and goals unseen together during training.
Closing the Gap between TD Learning and Supervised Learning – A Generalisation Point of View.
[ "Raj Ghugare", "Matthieu Geist", "Glen Berseth", "Benjamin Eysenbach" ]
Workshop/FMDM
[ "https://github.com/rajghugare19/stitching-is-combinatorial-generalisation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=0ByowhQGUG
@inproceedings{ moskovitz2023confronting, title={Confronting Reward Model Overoptimization with Constrained {RLHF}}, author={Ted Moskovitz and Aaditya Singh and DJ Strouse and Tuomas Sandholm and Ruslan Salakhutdinov and Anca Dragan and Stephen McAleer}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=0ByowhQGUG} }
Large language models are typically aligned with human preferences by optimizing reward models (RMs) fitted to human feedback. However, human preferences are multi-faceted, and it is increasingly common to derive reward from a composition of simpler reward models which each capture a different aspect of language quality. This itself presents a challenge, as it is difficult to appropriately weight these component RMs when combining them. Compounding this difficulty, because any RM is only a proxy for human evaluation, this process is vulnerable to overoptimization, wherein past a certain point, accumulating higher reward is associated with worse human ratings. In this paper, we perform, to our knowledge, the first study on overoptimization in composite RMs, showing that correlation between component RMs has a significant effect on the locations of these points. We then introduce an approach to solve this issue using constrained reinforcement learning as a means of preventing the agent from exceeding each RM’s threshold of usefulness. Our method addresses the problem of weighting component RMs by learning dynamic weights, naturally given by the Lagrange multipliers. As a result, each RM stays within the range at which it is an effective proxy, improving evaluation performance. Finally, we introduce an adaptive method using gradient-free optimization to identify and optimize towards these points during a single run.
Confronting Reward Model Overoptimization with Constrained RLHF
[ "Ted Moskovitz", "Aaditya Singh", "DJ Strouse", "Tuomas Sandholm", "Ruslan Salakhutdinov", "Anca Dragan", "Stephen McAleer" ]
Workshop/FMDM
2310.04373
[ "https://github.com/tedmoskovitz/constrainedrl4lms" ]
https://huggingface.co/papers/2310.04373
0
0
0
7
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=zy8GQiHUiW
@inproceedings{ anonymous2023smartchoices, title={SmartChoices: Augmenting Software with Learned Implementations}, author={Anonymous}, booktitle={Machine Learning for Systems 2023}, year={2023}, url={https://openreview.net/forum?id=zy8GQiHUiW} }
We are living in a golden age of machine learning. Powerful models perform many tasks far better than is possible using traditional software engineering approaches alone. However, developing and deploying these models in existing software systems remains challenging. In this paper, we present SmartChoices, a novel approach to incorporating machine learning into mature software stacks easily, safely, and effectively. We highlight key design decisions and present case studies applying SmartChoices within a range of large-scale industrial systems.
SmartChoices: Augmenting Software with Learned Implementations
null
Workshop/MLSys
2304.13033
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=wkAFNdzhli
@inproceedings{ anonymous2023choicebased, title={Choice-Based Learning in {JAX}}, author={Anonymous}, booktitle={Machine Learning for Systems 2023}, year={2023}, url={https://openreview.net/forum?id=wkAFNdzhli} }
Choice-based learning is a programming paradigm for expressing learning systems in terms of choices and losses. We explore a practical implementation of choice-based learning in JAX by combining two techniques in a novel way: algebraic effects and the selection monad. We describe the design and implementation of our library, explore its usefulness for real-world applications like hyperparameter tuning and deep reinforcement learning, and compare it with existing approaches.
Choice-Based Learning in JAX
null
Workshop/MLSys
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=w2J8R92gjt
@inproceedings{ anonymous2023early, title={Early notice: Gen{AI}-based Datarace Fix for Real-World Golang Programs}, author={Anonymous}, booktitle={Machine Learning for Systems 2023}, year={2023}, url={https://openreview.net/forum?id=w2J8R92gjt} }
Data race detection has been a subject of extensive research for decades; the practical deployment of race detectors has also become increasingly commonplace in industrial settings. However, the focus has mainly been on the detection aspect, with relatively little attention directed toward the challenging task of autonomously repairing programs with data races. This discrepancy is understandable given the inherent complexities of fixing the data race and the substantial engineering efforts required to integrate fixes into existing workflows. In this paper, we introduce a novel closed-loop application that harnesses the power of Generative AI to fix data races automatically. Our early experiments involving this application within Uranus's internal codebase have yielded promising results. The evaluation results suggest a bright future for integrating this application into Uranus's infrastructure, potentially revolutionizing how data races are handled in large-scale software development environments.
Early notice: GenAI-based Datarace Fix for Real-World Golang Programs
null
Workshop/MLSys
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster