bibtex_url (null) | proceedings (string, length 42) | bibtext (string, length 197 to 848) | abstract (string, length 303 to 3.45k) | title (string, length 10 to 159) | authors (sequence, length 1 to 34, nullable) | id (string, 44 classes) | arxiv_id (string, length 0 to 10) | GitHub (sequence, length 1) | paper_page (string, 899 classes) | n_linked_authors (int64, -1 to 13) | upvotes (int64, -1 to 109) | num_comments (int64, -1 to 13) | n_authors (int64, -1 to 92) | Models (sequence, length 0 to 100) | Datasets (sequence, length 0 to 19) | Spaces (sequence, length 0 to 100) | old_Models (sequence, length 0 to 100) | old_Datasets (sequence, length 0 to 19) | old_Spaces (sequence, length 0 to 100) | paper_page_exists_pre_conf (int64, 0 to 1) | type (string, 2 classes)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=fDJydDFcDv | @inproceedings{
lin2023mcu,
title={{MCU}: A Task-centric Framework for Open-ended Agent Evaluation in Minecraft},
author={Haowei Lin and Zihao Wang and Jianzhu Ma and Yitao Liang},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=fDJydDFcDv}
} | To pursue the goal of creating an open-ended agent in Minecraft, an open-ended game environment with unlimited possibilities, this paper introduces a novel task-centric framework named MCU for Minecraft agent evaluation. The MCU framework leverages the concept of atom tasks as fundamental building blocks, enabling the generation of diverse or even arbitrary tasks. Within the MCU framework, each task is measured with 6 distinct difficulty scores (time consumption, operational effort, planning complexity, intricacy, creativity, novelty). These scores offer a multi-dimensional assessment of a task from different angles, and thus can reveal an agent's capability on specific facets. The difficulty scores also serve as the feature of each task, which creates a meaningful task space and unveils the relationship between tasks. For practical evaluation of Minecraft agents employing the MCU framework, we maintain two custom benchmarks, comprising tasks meticulously designed to evaluate the agents' proficiency in high-level planning and low-level control, respectively. We show that MCU has high expressivity, covering all tasks used in recent literature on Minecraft agents, and underscores the need for advancements in areas such as creativity, precise control, and out-of-distribution generalization under the goal of open-ended Minecraft agent development. | MCU: A Task-centric Framework for Open-ended Agent Evaluation in Minecraft | [
"Haowei Lin",
"Zihao Wang",
"Jianzhu Ma",
"Yitao Liang"
] | Workshop/ALOE | 2310.08367 | [
"https://github.com/craftjarvis/mcu"
] | https://huggingface.co/papers/2310.08367 | 1 | 1 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=f6q0VHXVUl | @inproceedings{
liu2023exploration,
title={Exploration with Principles for Diverse {AI} Supervision},
author={Hao Liu and Matei Zaharia and Pieter Abbeel},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=f6q0VHXVUl}
} | Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI. While this generative AI approach has produced impressive results, it heavily leans on human supervision. Even state-of-the-art AI models like ChatGPT depend on fine-tuning through human demonstrations, demanding extensive human input and domain expertise. This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation. To address this limitation, we propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data. Drawing inspiration from the principles of unsupervised reinforcement learning (RL) pretraining, EAI achieves exploration within the natural language space. We accomplish this by harnessing large language models to assess the novelty of generated content. Our approach employs two key components: an actor that generates novel content and a critic that evaluates the generated content, offering critiques to guide the actor. Empirical evaluations demonstrate that EAI significantly boosts model performance on complex reasoning tasks, addressing the limitations of human-intensive supervision. | Exploration with Principles for Diverse AI Supervision | [
"Hao Liu",
"Matei Zaharia",
"Pieter Abbeel"
] | Workshop/ALOE | 2310.08899 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eZLOaRdlmc | @inproceedings{
wang2023toward,
title={Toward Open-ended Embodied Tasks Solving},
author={Wei Wang and Dongqi Han and Xufang Luo and Yifei Shen and Charles Ling and Boyu Wang and Dongsheng Li},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=eZLOaRdlmc}
} | Empowering embodied agents, such as robots, with Artificial Intelligence (AI) has become increasingly important in recent years. A major challenge is task open-endedness. In practice, robots often need to perform tasks with novel goals that are multifaceted, dynamic, lack a definitive "end-state", and were not encountered during training. To tackle this problem, this paper introduces \textit{Diffusion for Open-ended Goals} (DOG), a novel framework designed to enable embodied AI to plan and act flexibly and dynamically for open-ended task goals. DOG synergizes the generative prowess of diffusion models with state-of-the-art, training-free guidance techniques to adaptively perform online planning and control. Our evaluations demonstrate that DOG can handle various kinds of novel task goals not seen during training, in both maze navigation and robot control problems. Our work sheds light on enhancing embodied AI's adaptability and competency in tackling open-ended goals. | Toward Open-ended Embodied Tasks Solving | [
"Wei Wang",
"Dongqi Han",
"Xufang Luo",
"Yifei Shen",
"Charles Ling",
"Boyu Wang",
"Dongsheng Li"
] | Workshop/ALOE | 2312.05822 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=dWiYbyqfJP | @inproceedings{
samsami2023mastering,
title={Mastering Memory Tasks with World Models},
author={Mohammad Reza Samsami and Artem Zholus and Janarthanan Rajendran and Sarath Chandar},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=dWiYbyqfJP}
} | Current model-based reinforcement learning (MBRL) agents struggle with long-term dependencies. This limits their ability to effectively solve tasks involving extended time gaps between actions and outcomes, or tasks demanding the recalling of distant observations to inform current actions. To improve temporal coherence, we integrate a new family of state space models (SSMs) in world models of MBRL agents to present a new method, Recall to Imagine (R2I). This integration aims to enhance both long-term memory and long-horizon credit assignment. Through a diverse set of illustrative tasks, we systematically demonstrate that R2I establishes a new state-of-the-art performance in challenging memory and credit assignment RL tasks, such as Memory Maze, BSuite, and POPGym. At the same time, it upholds comparable performance in classic RL tasks, such as Atari and DMC, suggesting the generality of our method. We also show that R2I is faster than the state-of-the-art MBRL method, DreamerV3, resulting in faster wall-time convergence. | Mastering Memory Tasks with World Models | [
"Mohammad Reza Samsami",
"Artem Zholus",
"Janarthanan Rajendran",
"Sarath Chandar"
] | Workshop/ALOE | 2403.04253 | [
"https://github.com/chandar-lab/Recall2Imagine"
] | https://huggingface.co/papers/2403.04253 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=dVWBvvUhhI | @inproceedings{
jin2023minibehavior,
title={Mini-{BEHAVIOR}: A Procedurally Generated Benchmark for Long-horizon Decision-Making in Embodied {AI}},
author={Emily Jin and Jiaheng Hu and Zhuoyi Huang and Ruohan Zhang and Jiajun Wu and Li Fei-Fei and Roberto Mart{\'\i}n-Mart{\'\i}n},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=dVWBvvUhhI}
} | We present Mini-BEHAVIOR, a novel benchmark for embodied AI that challenges agents to use reasoning and decision-making skills to solve complex activities that resemble everyday human challenges. The Mini-BEHAVIOR environment is a fast, realistic Gridworld environment that offers the benefits of rapid prototyping and ease of use while preserving a symbolic level of physical realism and complexity found in complex embodied AI benchmarks. We introduce key features such as procedural generation, to enable the creation of countless task variations and support open-ended learning. Mini-BEHAVIOR provides implementations of various household tasks from the original BEHAVIOR benchmark, along with starter code for data collection and reinforcement learning agent training. In essence, Mini-BEHAVIOR offers a fast, open-ended benchmark for evaluating decision-making and planning solutions in embodied AI. It serves as a user-friendly entry point for research and facilitates the evaluation and development of solutions, simplifying their assessment and development while advancing the field of embodied AI. Code is available at https://github.com/StanfordVL/mini_behavior. | Mini-BEHAVIOR: A Procedurally Generated Benchmark for Long-horizon Decision-Making in Embodied AI | [
"Emily Jin",
"Jiaheng Hu",
"Zhuoyi Huang",
"Ruohan Zhang",
"Jiajun Wu",
"Li Fei-Fei",
"Roberto Martín-Martín"
] | Workshop/ALOE | 2310.01824 | [
"https://github.com/stanfordvl/mini_behavior"
] | https://huggingface.co/papers/2310.01824 | 0 | 1 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=cbcO9qihOT | @inproceedings{
bradley2023qualitydiversity,
title={Quality-Diversity through {AI} Feedback},
author={Herbie Bradley and Andrew Dai and Hannah Benita Teufel and Jenny Zhang and Koen Oostermeijer and Marco Bellagente and Jeff Clune and Kenneth Stanley and Gregory Schott and Joel Lehman},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=cbcO9qihOT}
} | In many text-generation problems, users may prefer not only a single response, but a diverse range of high-quality outputs from which to choose. Quality-diversity (QD) search algorithms aim at such outcomes, by continually improving and diversifying a population of candidates. However, the applicability of QD to qualitative domains, like creative writing, has been limited by the difficulty of algorithmically specifying measures of quality and diversity. Interestingly, recent developments in language models (LMs) have enabled guiding search through \emph{AI feedback}, wherein LMs are prompted in natural language to evaluate qualitative aspects of text. Leveraging this development, we introduce Quality-Diversity through AI Feedback (QDAIF), wherein an evolutionary algorithm applies LMs to both generate variation and evaluate the quality and diversity of candidate text. In all but one creative writing domain, QDAIF covers more of a specified search space with high-quality samples than do non-QD controls. Further, human evaluation of QDAIF-generated creative texts validates reasonable agreement between AI and human evaluation. Our results thus highlight the potential of AI feedback to guide open-ended search for creative and original solutions, providing a recipe that seemingly generalizes to many domains and modalities. In this way, QDAIF is a step towards AI systems that can independently search, diversify, evaluate, and improve, which are among the core skills underlying human society's capacity for innovation. | Quality-Diversity through AI Feedback | [
"Herbie Bradley",
"Andrew Dai",
"Hannah Benita Teufel",
"Jenny Zhang",
"Koen Oostermeijer",
"Marco Bellagente",
"Jeff Clune",
"Kenneth Stanley",
"Gregory Schott",
"Joel Lehman"
] | Workshop/ALOE | 2310.13032 | [
""
] | https://huggingface.co/papers/2310.13032 | 2 | 1 | 0 | 10 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=buNbHiiB3j | @inproceedings{
ding2023quality,
title={Quality Diversity through Human Feedback},
author={Li Ding and Jenny Zhang and Jeff Clune and Lee Spector and Joel Lehman},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=buNbHiiB3j}
} | Reinforcement Learning from Human Feedback (RLHF) has shown potential in qualitative tasks where clear objectives are lacking. However, its effectiveness is not fully realized when it is conceptualized merely as a tool to optimize average human preferences, especially in generative tasks that demand diverse model responses. Meanwhile, Quality Diversity (QD) algorithms excel at identifying diverse and high-quality solutions but often rely on manually crafted diversity metrics. This paper introduces Quality Diversity through Human Feedback (QDHF), a novel approach integrating human feedback into the QD framework. QDHF infers diversity metrics from human judgments of similarity among solutions, thereby enhancing the applicability and effectiveness of QD algorithms. Our empirical studies show that QDHF significantly outperforms state-of-the-art methods in automatic diversity discovery and matches the efficacy of using manually crafted metrics for QD on standard benchmarks in robotics and reinforcement learning. Notably, in a latent space illumination task, QDHF substantially enhances the diversity in images generated by a diffusion model and was more favorably received in user studies. We conclude by analyzing QDHF's scalability and the quality of its derived diversity metrics, emphasizing its potential to improve exploration and diversity in complex, open-ended optimization tasks. Source code is available on GitHub: https://github.com/ld-ing/qdhf. | Quality Diversity through Human Feedback | [
"Li Ding",
"Jenny Zhang",
"Jeff Clune",
"Lee Spector",
"Joel Lehman"
] | Workshop/ALOE | [
"https://github.com/ld-ing/qdhf"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=acD8BxMjwV | @inproceedings{
ingvarsson2023mixme,
title={Mix-{ME}: Quality-Diversity for Multi-Agent Learning},
author={Gar{\dh}ar Ingvarsson and Mikayel Samvelyan and Manon Flageat and Bryan Lim and Antoine Cully and Tim Rockt{\"a}schel},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=acD8BxMjwV}
} | In many real-world systems, such as adaptive robotics, achieving a single, optimised solution may be insufficient. Instead, a diverse set of high-performing solutions is often required to adapt to varying contexts and requirements. This is the realm of Quality-Diversity (QD), which aims to discover a collection of high-performing solutions, each with their own unique characteristics. QD methods have recently seen success in many domains, including robotics, where they have been used to discover damage-adaptive locomotion controllers. However, most existing work has focused on single-agent settings, despite many tasks of interest being multi-agent. To this end, we introduce Mix-ME, a novel multi-agent variant of the popular MAP-Elites algorithm that forms new solutions using a crossover-like operator by mixing together agents from different teams. We evaluate the proposed methods on a variety of partially observable continuous control tasks. Our evaluation shows that these multi-agent variants obtained by Mix-ME not only compete with single-agent baselines but also often outperform them in multi-agent settings under partial observability. | Mix-ME: Quality-Diversity for Multi-Agent Learning | [
"Garðar Ingvarsson",
"Mikayel Samvelyan",
"Manon Flageat",
"Bryan Lim",
"Antoine Cully",
"Tim Rocktäschel"
] | Workshop/ALOE | 2311.01829 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aFEZdGL3gn | @inproceedings{
du2023what,
title={What can {AI} Learn from Human Exploration? Intrinsically-Motivated Humans and Agents in Open-World Exploration},
author={Yuqing Du and Eliza Kosoy and Alyssa Li Dayan and Maria Rufova and Alison Gopnik and Pieter Abbeel},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=aFEZdGL3gn}
} | What drives exploration? Understanding intrinsic motivation is a long-standing question in both cognitive science and artificial intelligence (AI); numerous exploration objectives have been proposed and tested in human experiments and used to train reinforcement learning (RL) agents. However, experiments in the former are often in simplistic environments that do not capture the complexity of real world exploration. On the other hand, experiments in the latter use more complex environments, yet the trained RL agents fail to come close to human exploration efficiency. To study this gap, we propose a framework for directly comparing human and agent exploration in an open-ended environment, Crafter. We study how well commonly-proposed information theoretic intrinsic objectives relate to actual human and agent behaviors, finding that they consistently correlate with measures of exploration success in both humans and intrinsically-motivated agents. However, all agents perform significantly worse than adults on the information theoretic objectives, especially Information Gain, suggesting that better intrinsic reward design may help unsupervised agents explore more effectively. We also collect transcripts during play, and in a preliminary analysis of self-talk, we find that children's verbalizations of goals show a strong positive correlation with Empowerment, suggesting that goal-setting may be an important aspect of efficient exploration. | What can AI Learn from Human Exploration? Intrinsically-Motivated Humans and Agents in Open-World Exploration | [
"Yuqing Du",
"Eliza Kosoy",
"Alyssa Li Dayan",
"Maria Rufova",
"Alison Gopnik",
"Pieter Abbeel"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Zi3LcfuTPx | @inproceedings{
garcin2023how,
title={How the level sampling process impacts zero-shot generalisation in deep reinforcement learning},
author={Samuel Garcin and James Doran and Shangmin Guo and Christopher Lucas and Stefano Albrecht},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=Zi3LcfuTPx}
} | A key limitation preventing the wider adoption of autonomous agents trained via deep reinforcement learning (RL) is their limited ability to generalise to new environments, even when these share similar characteristics with environments encountered during training. In this work, we investigate how a non-uniform sampling strategy of individual environment instances, or levels, affects the zero-shot generalisation (ZSG) ability of RL agents, considering two failure modes: overfitting and over-generalisation. As a first step, we measure the mutual information (MI) between the agent's internal representation and the set of training levels, which we find to be well-correlated to instance overfitting. In contrast to uniform sampling, adaptive sampling strategies prioritising levels based on their value loss are more effective at maintaining lower MI, which provides a novel theoretical justification for this class of techniques. We then turn our attention to unsupervised environment design (UED) methods, which adaptively generate new training levels and minimise MI more effectively than methods sampling from a fixed set. However, we find UED methods significantly shift the training distribution, resulting in over-generalisation and worse ZSG performance over the distribution of interest. To prevent both instance overfitting and over-generalisation, we introduce self-supervised environment design (SSED). SSED generates levels using a variational autoencoder, effectively reducing MI while minimising the shift with the distribution of interest, and leads to statistically significant improvements in ZSG over fixed-set level sampling strategies and UED methods. | How the level sampling process impacts zero-shot generalisation in deep reinforcement learning | [
"Samuel Garcin",
"James Doran",
"Shangmin Guo",
"Christopher G. Lucas",
"Stefano V Albrecht"
] | Workshop/ALOE | 2310.03494 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Xw1hVTWxxQ | @inproceedings{
chan2023visionlanguage,
title={Vision-Language Models as a Source of Rewards},
author={Harris Chan and Volodymyr Mnih and Feryal Behbahani and Michael Laskin and Luyu Wang and Fabio Pardo and Maxime Gazeau and Himanshu Sahni and Dan Horgan and Kate Baumli and Yannick Schroecker and Stephen Spencer and Richie Steigerwald and John Quan and Gheorghe Comanici and Sebastian Flennerhag and Alexander Neitz and Lei M Zhang and Tom Schaul and Satinder Singh and Clare Lyle and Tim Rockt{\"a}schel and Jack Parker-Holder and Kristian Holsheimer},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=Xw1hVTWxxQ}
} | Building generalist agents that can accomplish many goals in rich open-ended environments is one of the research frontiers for reinforcement learning. A key limiting factor for building generalist agents with RL has been the need for a large number of reward functions for achieving different goals. We investigate the feasibility of using off-the-shelf vision-language models, or VLMs, as sources of rewards for reinforcement learning agents. We show how rewards for visual achievement of a variety of language goals can be derived from the CLIP family of models, and used to train RL agents that can achieve a variety of language goals. We showcase this approach in two distinct visual domains and present a scaling trend showing how larger VLMs lead to more accurate rewards for visual goal achievement, which in turn produces more capable RL agents. | Vision-Language Models as a Source of Rewards | [
"Kate Baumli",
"Satinder Singh",
"Feryal Behbahani",
"Harris Chan",
"Gheorghe Comanici",
"Sebastian Flennerhag",
"Maxime Gazeau",
"Kristian Holsheimer",
"Dan Horgan",
"Michael Laskin",
"Clare Lyle",
"Volodymyr Mnih",
"Alexander Neitz",
"Fabio Pardo",
"Jack Parker-Holder",
"John Quan",
"Tim Rocktäschel",
"Himanshu Sahni",
"Tom Schaul",
"Yannick Schroecker",
"Stephen Spencer",
"Richie Steigerwald",
"Luyu Wang",
"Lei M Zhang"
] | Workshop/ALOE | 2312.09187 | [
""
] | https://huggingface.co/papers/2312.09187 | 3 | 11 | 8 | 26 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=XUohU3mYQ5 | @inproceedings{
jackson2023discovering,
title={Discovering Temporally-Aware Reinforcement Learning Algorithms},
author={Matthew Jackson and Chris Lu and Louis Kirsch and Robert Lange and Shimon Whiteson and Jakob Foerster},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=XUohU3mYQ5}
} | Recent advancements in meta-learning have enabled the automatic discovery of novel reinforcement learning algorithms parameterized by surrogate objective functions. To improve upon manually designed algorithms, the parameterization of this learned objective function must be expressive enough to represent novel principles of learning (instead of merely recovering already established ones) while still generalizing to a wide range of settings outside of its meta-training distribution. However, existing methods focus on discovering objective functions that, like many widely used objective functions in reinforcement learning, do not take into account the total number of steps allowed for training, or “training horizon”. In contrast, humans use a plethora of different learning objectives across the course of acquiring a new ability. For instance, students may alter their studying techniques based on the proximity to exam deadlines and their self-assessed capabilities. This paper contends that ignoring the optimization time horizon significantly restricts the expressive potential of discovered learning algorithms. We propose a simple augmentation to two existing objective discovery approaches that allows the discovered algorithm to dynamically update its objective function throughout the agent’s training procedure, resulting in expressive schedules and increased generalization across different training horizons. In the process, we find that commonly used meta-gradient approaches fail to discover such adaptive objective functions while evolution strategies discover highly dynamic learning rules. We demonstrate the effectiveness of our approach on a wide range of tasks and analyze the resulting learned algorithms, which we find effectively balance exploration and exploitation by modifying the structure of their learning rules throughout the agent’s lifetime. | Discovering Temporally-Aware Reinforcement Learning Algorithms | [
"Matthew Thomas Jackson",
"Chris Lu",
"Louis Kirsch",
"Robert Tjarko Lange",
"Shimon Whiteson",
"Jakob Nicolaus Foerster"
] | Workshop/ALOE | 2402.05828 | [
"https://github.com/EmptyJackson/groove"
] | https://huggingface.co/papers/2402.05828 | 2 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=WSrRF5Wy6v | @inproceedings{
gou2023critic,
title={{CRITIC}: Large Language Models Can Self-Correct with Tool-Interactive Critiquing},
author={Zhibin Gou and Zhihong Shao and Yeyun Gong and yelong shen and Yujiu Yang and Nan Duan and Weizhu Chen},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=WSrRF5Wy6v}
} | Recent developments in large language models (LLMs) have been impressive. However, these models sometimes show inconsistencies and problematic behavior, such as hallucinating facts, generating flawed code, or creating offensive and toxic content. Unlike these models, humans typically utilize external tools to cross-check and refine their initial content, like using a search engine for fact-checking, or a code interpreter for debugging. Inspired by this observation, we introduce a framework called CRITIC that allows LLMs, which are essentially “black boxes” to validate and progressively amend their own outputs in a manner similar to human interaction with tools. More specifically, starting with an initial output, CRITIC interacts with appropriate tools to evaluate certain aspects of the text, and then revises the output based on the feedback obtained during this validation process. Comprehensive evaluations involving free-form question answering, mathematical program synthesis, and toxicity reduction demonstrate that CRITIC consistently enhances the performance of LLMs. Meanwhile, our research highlights the crucial importance of external feedback in promoting the ongoing self-improvement of LLMs. | CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing | [
"Zhibin Gou",
"Zhihong Shao",
"Yeyun Gong",
"yelong shen",
"Yujiu Yang",
"Nan Duan",
"Weizhu Chen"
] | Workshop/ALOE | 2305.11738 | [
"https://github.com/microsoft/ProphetNet"
] | https://huggingface.co/papers/2305.11738 | 3 | 6 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=VjqNosdlAn | @inproceedings{
sims2023ravl,
title={{RAVL}: Reach-Aware Value Learning for the Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning},
author={Anya Sims and Cong Lu and Yee Whye Teh},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=VjqNosdlAn}
} | Training generalist agents requires learning in complex, open-ended environments. In the real world, as well as in standard benchmarks, such environments often come with large quantities of pre-collected behavioral data. Offline reinforcement learning presents an exciting possibility for leveraging this existing data to kickstart subsequent expensive open-ended learning. Using offline data with RL, however, introduces the additional challenge of evaluating values for state-actions not seen in the dataset -- termed the out-of-sample problem. One solution to this is by allowing the agent to generate additional synthetic data through rollouts in a learned dynamics model. The prevailing theoretical understanding is that this effectively resolves the out-of-sample issue, and that any remaining difficulties are due to errors in the learned dynamics model. Based on this understanding, one would expect improvements to the dynamics model to lead to improvements to the learned policy. Surprisingly, however, we find that existing algorithms completely fail when the true dynamics are provided in place of the learned dynamics model. This observation exposes a common misconception in offline reinforcement learning, namely that dynamics model errors do not explain the behavior of model-based methods. Our subsequent investigation reveals a second major and previously overlooked issue in offline model-based reinforcement learning (which we term the edge-of-reach problem). Guided by this new insight, we propose Reach-Aware Value Learning (RAVL), a value-based algorithm that is able to capture value uncertainty at edge-of-reach states and resolve the edge-of-reach problem. Our method achieves strong performance on the standard D4RL benchmark, and we hope that the insights developed in this paper help to advance offline RL in order for it to serve as an easily applicable pre-training technique for open-ended settings. | The Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning | [
"Anya Sims",
"Cong Lu",
"Yee Whye Teh"
] | Workshop/ALOE | 2402.12527 | [
"https://github.com/anyasims/edge-of-reach-ravl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=V7Ao0FdXEn | @inproceedings{
castanyer2023improving,
title={Improving Intrinsic Exploration by Creating Stationary Objectives},
author={Roger Creus Castanyer and Joshua Romoff and Glen Berseth},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=V7Ao0FdXEn}
} | Exploration bonuses in reinforcement learning guide long-horizon exploration by defining custom intrinsic objectives. Several exploration objectives like count-based bonuses, pseudo-counts, and state-entropy maximization are non-stationary and hence are difficult to optimize for the agent. While this issue is generally known, it is usually omitted and solutions remain under-explored. The key contribution of our work lies in transforming the original non-stationary rewards into stationary rewards through an augmented state representation. For this purpose, we introduce the Stationary Objectives For Exploration (SOFE) framework. SOFE requires identifying sufficient statistics for different exploration bonuses and finding an efficient encoding of these statistics to use as input to a deep network. SOFE is based on proposing state augmentations that expand the state space but hold the promise of simplifying the optimization of the agent's objective. We show that SOFE improves the performance of several exploration objectives, including count-based bonuses, pseudo-counts, and state-entropy maximization. Moreover, SOFE outperforms prior methods that attempt to stabilize the optimization of intrinsic objectives. We demonstrate the efficacy of SOFE in hard-exploration problems, including sparse-reward tasks, pixel-based observations, 3D navigation, and procedurally generated environments. | Improving Intrinsic Exploration by Creating Stationary Objectives | [
"Roger Creus Castanyer",
"Joshua Romoff",
"Glen Berseth"
] | Workshop/ALOE | 2310.18144 | [
""
] | https://huggingface.co/papers/2310.18144 | 0 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=UBxh6uhyuc | @inproceedings{
tio2023training,
title={Training Reinforcement Learning Agents and Humans with Difficulty-Conditioned Generators},
author={Sidney Tio and Pradeep Varakantham},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=UBxh6uhyuc}
} | We introduce Parameterized Environment Response Model (PERM), a method for training both Reinforcement Learning (RL) Agents and human learners in parameterized environments by directly modeling difficulty and ability. Inspired by Item Response Theory (IRT), PERM aligns environment difficulty with individual ability, creating a Zone of Proximal Development-based curriculum. Remarkably, PERM operates without real-time RL updates and allows for offline training, ensuring its adaptability across diverse students. We present a two-stage training process that capitalizes on PERM's adaptability, and demonstrate its effectiveness in training RL agents and humans in an empirical study. | Training Reinforcement Learning Agents and Humans with Difficulty-Conditioned Generators | [
"Sidney Tio",
"Pradeep Varakantham"
] | Workshop/ALOE | 2312.02309 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RFUiBPyGYF | @inproceedings{
ma2023eureka,
title={Eureka: Human-Level Reward Design via Coding Large Language Models},
author={Yecheng Jason Ma and William Liang and Guanzhi Wang and De-An Huang and Osbert Bastani and Dinesh Jayaraman and Yuke Zhu and Linxi Fan and Anima Anandkumar},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=RFUiBPyGYF}
} | Large Language Models (LLMs) have excelled as high-level semantic planners for sequential decision-making tasks. However, harnessing them to learn complex low-level manipulation tasks, such as dexterous pen spinning, remains an open problem. We bridge this fundamental gap and present Eureka, a human-level reward design algorithm powered by LLMs. Eureka exploits the remarkable zero-shot generation, code-writing, and in-context improvement capabilities of state-of-the-art LLMs, such as GPT-4, to perform evolutionary optimization over reward code. The resulting rewards can then be used to acquire complex skills via reinforcement learning. Without any task-specific prompting or pre-defined reward templates, Eureka generates reward functions that outperform expert human-engineered rewards. In a diverse suite of 29 open-source RL environments that include 10 distinct robot morphologies, Eureka outperforms human experts on 83% of the tasks, leading to an average normalized improvement of 52%. The generality of Eureka also enables a new gradient-free in-context learning approach to reinforcement learning from human feedback (RLHF), readily incorporating human inputs to improve the quality and the safety of the generated rewards without model updating. Finally, using Eureka rewards in a curriculum learning setting, we demonstrate for the first time, a simulated Shadow Hand capable of performing pen spinning tricks, adeptly manipulating a pen in circles at rapid speed. | Eureka: Human-Level Reward Design via Coding Large Language Models | [
"Yecheng Jason Ma",
"William Liang",
"Guanzhi Wang",
"De-An Huang",
"Osbert Bastani",
"Dinesh Jayaraman",
"Yuke Zhu",
"Linxi Fan",
"Anima Anandkumar"
] | Workshop/ALOE | 2310.12931 | [
"https://github.com/eureka-research/Eureka"
] | https://huggingface.co/papers/2310.12931 | 5 | 26 | 3 | 9 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=Lg3DGZSsOF | @inproceedings{
earle2023quality,
title={Quality Diversity in the Amorphous Fortress: Evolving for Complexity in 0-Player Games},
author={Sam Earle and M Charity and Julian Togelius and Dipika Rajesh},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=Lg3DGZSsOF}
} | We explore the generation of diverse environments using the Amorphous Fortress (AF) simulation framework. AF defines a set of Finite State Machine (FSM) nodes and edges that can be recombined to control the behavior of agents in the `fortress' grid-world. The behaviors and conditions of the agents within the framework are designed to capture the common building blocks of multi-agent artificial life and reinforcement learning environments. Using quality diversity evolutionary search, we generate diverse sets of environments whose dynamics exhibit certain types of complexity according to measures of agents' FSM architectures and activations, and collective behaviors. QD-AF generates families of 0-player games akin to simplistic ecological models, and we identify the emergence of both competitive and co-operative multi-agent and multi-species survival dynamics. We argue that these generated worlds can collectively serve as training and testing grounds for learning algorithms. | Quality Diversity in the Amorphous Fortress: Evolving for Complexity in 0-Player Games | [
"Sam Earle",
"M Charity",
"Julian Togelius",
"Dipika Rajesh"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=L9KATLgYvB | @inproceedings{
zhou2023sotopia,
title={{SOTOPIA}: Interactive Evaluation for Social Intelligence in Language Agents},
author={Xuhui Zhou and Hao Zhu and Leena Mathur and Ruohong Zhang and Haofei Yu and Zhengyang Qi and Louis-Philippe Morency and Yonatan Bisk and Daniel Fried and Graham Neubig and Maarten Sap},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=L9KATLgYvB}
} | Humans are social beings; we pursue social goals in our daily interactions, which is a crucial aspect of social intelligence. Yet, AI systems' abilities in this realm remain elusive. We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and evaluate their social intelligence. In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals. We simulate the role-play interaction between LLM-based agents and humans within this task space and evaluate their performance with a holistic evaluation framework called SOTOPIA-Eval. With SOTOPIA, we find significant differences between these models in terms of their social intelligence, and we identify a subset of SOTOPIA scenarios, SOTOPIA-hard, that is generally challenging for all models. We find that on this subset, GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills. These findings demonstrate SOTOPIA's promise as a general platform for research on evaluating and improving social intelligence in artificial agents. | SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents | [
"Xuhui Zhou",
"Hao Zhu",
"Leena Mathur",
"Ruohong Zhang",
"Haofei Yu",
"Zhengyang Qi",
"Louis-Philippe Morency",
"Yonatan Bisk",
"Daniel Fried",
"Graham Neubig",
"Maarten Sap"
] | Workshop/ALOE | 2310.11667 | [
""
] | https://huggingface.co/papers/2310.11667 | 2 | 2 | 0 | 11 | [] | [
"cmu-lti/sotopia"
] | [] | [] | [
"cmu-lti/sotopia"
] | [] | 1 | poster |
null | https://openreview.net/forum?id=Kzs8sBUzJ8 | @inproceedings{
yenamandra2023homerobot,
title={HomeRobot: Open-Vocabulary Mobile Manipulation},
author={Sriram Yenamandra and Arun Ramachandran and Karmesh Yadav and Austin S Wang and Mukul Khanna and Theophile Gervet and Tsung-Yen Yang and Vidhi Jain and Alexander Clegg and John M Turner and Zsolt Kira and Manolis Savva and Angel X Chang and Devendra Singh Chaplot and Dhruv Batra and Roozbeh Mottaghi and Yonatan Bisk and Chris Paxton},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=Kzs8sBUzJ8}
} | HomeRobot (noun): An affordable compliant robot that navigates homes and manipulates a wide range of objects in order to complete everyday tasks.
Open-Vocabulary Mobile Manipulation (OVMM) is the problem of picking any object in any unseen environment, and placing it in a commanded location. This is a foundational challenge for robots to be useful assistants in human environments, because it involves tackling sub-problems from across robotics: perception, language understanding, navigation, and manipulation are all essential to OVMM and for eventually building robust open-ended learning systems. In addition, integration of the solutions to these sub-problems poses its own substantial challenges. To drive research in this area, we introduce the HomeRobot OVMM benchmark, where an agent navigates household environments to grasp novel objects and place them on target receptacles. HomeRobot has two components: a simulation component, which uses a large and diverse curated object set in new, high-quality multi-room home environments; and a real-world component, providing a software stack for the low-cost Hello Robot Stretch to encourage replication of real-world experiments across labs. We implement both reinforcement learning and heuristic (model-based) baselines and show evidence of sim-to-real transfer. Our baselines achieve a 20% success rate in the real world; our experiments identify ways future research can improve performance. | HomeRobot: Open-Vocabulary Mobile Manipulation | [
"Sriram Yenamandra",
"Arun Ramachandran",
"Karmesh Yadav",
"Austin S Wang",
"Mukul Khanna",
"Theophile Gervet",
"Tsung-Yen Yang",
"Vidhi Jain",
"Alexander Clegg",
"John M Turner",
"Zsolt Kira",
"Manolis Savva",
"Angel X Chang",
"Devendra Singh Chaplot",
"Dhruv Batra",
"Roozbeh Mottaghi",
"Yonatan Bisk",
"Chris Paxton"
] | Workshop/ALOE | 2306.11565 | [
""
] | https://huggingface.co/papers/2306.11565 | 13 | 15 | 0 | 18 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=JlBBoZBOeF | @inproceedings{
chopra2023agenttorch,
title={AgentTorch: Agent-based Modeling with Automatic Differentiation},
author={Ayush Chopra and Jayakumar Subramanian and Balaji Krishnamurthy and Ramesh Raskar},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=JlBBoZBOeF}
} | Agent-based models (ABMs) are discrete simulators comprising agents that can act and interact in a computational world. ABMs are relevant across several disciplines as these agents can be cells in bio-electric networks, humans in physical networks, or even AI avatars in digital networks. Despite wide applicability, research in ABMs has been extremely fragmented and has not benefited from modern computational advances, especially automatic differentiation. This paper presents AgentTorch: a framework to design, simulate, and optimize agent-based models. AgentTorch definition can be used to build stochastic, non-linear ABMs across digital, biological, and physical realms; while ensuring gradient flow through all simulation steps. AgentTorch simulations are fully tensorized, execute on GPUs
and can range from a few hundred agents in synthetic grids to millions of agents in real-world contact graphs. The end-to-end differentiability of AgentTorch enables automatic differentiation of simulation parameters and integration with deep neural networks (DNNs) in several ways, for both supervised and reinforcement learning. We validate AgentTorch through multiple case studies that study cell morphogenesis over bio-electric networks, infectious disease epidemiology over physical networks, and opinion dynamics over social networks. AgentTorch is designed to be a viable toolkit for scientific exploration and real-world policy decision-making. We hope AgentTorch can help bridge research in AI and agent-based modeling. | AgentTorch: Agent-based Modeling with Automatic Differentiation | [
"Ayush Chopra",
"Jayakumar Subramanian",
"Balaji Krishnamurthy",
"Ramesh Raskar"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=JK94vOwU29 | @inproceedings{
wang2023diversity,
title={Diversity from Human Feedback},
author={Ren-Jian Wang and Ke Xue and Yutong Wang and Peng Yang and Haobo Fu and QIANG FU and Chao Qian},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=JK94vOwU29}
} | Diversity plays a significant role in many problems, such as ensemble learning, reinforcement learning, and combinatorial optimization. Despite many successful applications in machine learning, most methods need to define a proper behavior space, which is, however, challenging for humans in many scenarios. In this paper, we propose the problem of learning a behavior space from human feedback and introduce a general method called Diversity from Human Feedback (DivHF) to solve it. DivHF learns a behavior descriptor function consistent with human preference by querying human feedback. The learned behavior descriptor can be combined with any distance measure to define a diversity measure. We demonstrate the effectiveness of DivHF by integrating it with the Quality-Diversity optimization algorithm MAP-Elites and conducting experiments on the QDax suite. The results show that DivHF learns a behavior space that aligns better with human requirements compared to direct data-driven approaches and leads to more diverse solutions under human preference. Our contributions include formulating the problem, proposing the DivHF method, and demonstrating its effectiveness through experiments. | Diversity from Human Feedback | [
"Ren-Jian Wang",
"Ke Xue",
"Yutong Wang",
"Peng Yang",
"Haobo Fu",
"QIANG FU",
"Chao Qian"
] | Workshop/ALOE | 2310.06648 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HBvDqNF7sn | @inproceedings{
kapturowski2023unlocking,
title={Unlocking the Power of Representations in Long-term Novelty-based Exploration},
author={Steven Kapturowski and Alaa Saade and Daniele Calandriello and Charles Blundell and Pablo Sprechmann and Leopoldo Sarra and Oliver Groth and Michal Valko and Bilal Piot},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=HBvDqNF7sn}
} | We introduce Robust Exploration via Clustering-based Online Density Estimation (RECODE), a non-parametric method for novelty-based exploration that estimates visitation counts for clusters of states based on their similarity in a chosen embedding space. By adapting classical clustering to the nonstationary setting of Deep RL, RECODE can efficiently track state visitation counts over thousands of episodes. We further propose a novel generalization of the inverse dynamics loss, which leverages masked transformer architectures for multi-step prediction; which in conjunction with RECODE achieves a new state-of-the-art in a suite of challenging 3D-exploration tasks in DM-HARD-8. RECODE also sets new state-of-the-art in hard exploration Atari games, and is the first agent to reach the end screen in Pitfall! | Unlocking the Power of Representations in Long-term Novelty-based Exploration | [
"Steven Kapturowski",
"Alaa Saade",
"Daniele Calandriello",
"Charles Blundell",
"Pablo Sprechmann",
"Leopoldo Sarra",
"Oliver Groth",
"Michal Valko",
"Bilal Piot"
] | Workshop/ALOE | 2305.01521 | [
""
] | https://huggingface.co/papers/2305.01521 | 1 | 0 | 0 | 9 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=GELhTb5iHy | @inproceedings{
vlastelica2023diverse,
title={Diverse Offline Imitation Learning},
author={Marin Vlastelica and Jin Cheng and Georg Martius and Pavel Kolev},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=GELhTb5iHy}
} | There has been significant recent progress in the area of unsupervised skill discovery, utilizing various information-theoretic objectives as measures of diversity. Despite these advances, challenges remain: current methods require significant online interaction, fail to leverage vast amounts of available task-agnostic data and typically lack a quantitative measure of skill utility. We address these challenges by proposing a principled offline algorithm for unsupervised skill discovery that, in addition to maximizing diversity, ensures that each learned skill imitates state-only expert demonstrations to a certain degree. Our main analytical contribution is to connect Fenchel duality, reinforcement learning, and unsupervised skill discovery to maximize a mutual information objective subject to KL-divergence state occupancy constraints. Furthermore, we demonstrate the effectiveness of our method on the standard offline benchmark D4RL and on a custom offline dataset collected from a 12-DoF quadruped robot for which the policies trained in simulation transfer well to the real robotic system. | Diverse Offline Imitation Learning | [
"Marin Vlastelica",
"Jin Cheng",
"Georg Martius",
"Pavel Kolev"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=BpneszmNir | @inproceedings{
xiang2023from,
title={From Centralized to Self-Supervised: Pursuing Realistic Multi-Agent Reinforcement Learning},
author={Violet Xiang and Logan Cross and Jan-Philipp Fr{\"a}nken and Nick Haber},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=BpneszmNir}
} | In real-world environments, autonomous agents rely on their egocentric observations. They must learn adaptive strategies to interact with others who possess mixed motivations, discernible only through visible cues. Several Multi-Agent Reinforcement Learning (MARL) methods adopt centralized approaches that involve either centralized training or reward-sharing, often violating the realistic ways in which living organisms, like animals or humans, process information and interact. MARL strategies deploying decentralized training with intrinsic motivation offer a self-supervised approach, enabling agents to develop flexible social strategies through the interaction of autonomous agents. However, by contrasting the self-supervised and centralized methods, we reveal that populations trained with reward-sharing methods surpass those using self-supervised methods in a mixed-motive environment. We link this superiority to specialized role emergence and an agent's expertise in its role. Interestingly, this gap shrinks in pure-motive settings, emphasizing the need for evaluations in more complex, realistic environments (mixed-motive). Our preliminary results suggest a gap in population performance that can be closed by improving self-supervised methods and thereby pushing MARL closer to real-world readiness. | From Centralized to Self-Supervised: Pursuing Realistic Multi-Agent Reinforcement Learning | [
"Violet Xiang",
"Logan Cross",
"Jan-Philipp Fränken",
"Nick Haber"
] | Workshop/ALOE | 2312.08662 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BlhQN9Jfpf | @inproceedings{
rutherford2023jaxmarl,
title={Jax{MARL}: Multi-Agent {RL} Environments in {JAX}},
author={Alexander Rutherford and Benjamin Ellis and Matteo Gallici and Jonathan Cook and Andrei Lupu and Gar{\dh}ar Ingvarsson and Timon Willi and Akbir Khan and Christian Schroeder de Witt and Alexandra Souly and Saptarashmi Bandyopadhyay and Mikayel Samvelyan and Minqi Jiang and Robert Tjarko Lange and Shimon Whiteson and Bruno Lacerda and Nick Hawes and Tim Rockt{\"a}schel and Chris Lu and Jakob Nicolaus Foerster},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=BlhQN9Jfpf}
} | Benchmarks play an important role in the development of machine learning algorithms. Reinforcement learning environments are traditionally run on the CPU, limiting their scalability with typical academic compute. However, recent advancements in JAX have enabled the wider use of hardware acceleration to overcome these computational hurdles by producing massively parallel RL training pipelines and environments.
This is particularly useful for multi-agent reinforcement learning (MARL) research, where not only must multiple agents be considered at each environment step, adding computational burden, but the sample complexity is also increased due to non-stationarity, decentralised partial observability, or other MARL challenges.
In this paper, we present JaxMARL, the first open-source code base that combines ease of use with GPU-enabled efficiency, and supports a large number of commonly used MARL environments as well as popular baseline algorithms.
Our experiments show that our JAX-based implementations are up to 1400x faster than existing single-threaded baselines. This enables efficient and thorough evaluations, with the potential to alleviate the *evaluation crisis* of the field.
We also introduce and benchmark SMAX, a vectorised, simplified version of the StarCraft Multi-Agent Challenge, which removes the need to run the StarCraft II game engine. This not only enables GPU acceleration, but also provides a more flexible MARL environment, unlocking the potential for self-play, meta-learning, and other future applications in MARL. | JaxMARL: Multi-Agent RL Environments in JAX | [
"Alexander Rutherford",
"Benjamin Ellis",
"Matteo Gallici",
"Jonathan Cook",
"Andrei Lupu",
"Garðar Ingvarsson",
"Timon Willi",
"Akbir Khan",
"Christian Schroeder de Witt",
"Alexandra Souly",
"Saptarashmi Bandyopadhyay",
"Mikayel Samvelyan",
"Minqi Jiang",
"Robert Tjarko Lange",
"Shimon Whiteson",
"Bruno Lacerda",
"Nick Hawes",
"Tim Rocktäschel",
"Chris Lu",
"Jakob Nicolaus Foerster"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=8wgNZ7Kado | @inproceedings{
majumder2023clin,
title={{CLIN}: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization},
author={Bodhisattwa Prasad Majumder and Bhavana Dalvi Mishra and Peter Jansen and Oyvind Tafjord and Niket Tandon and Li Zhang and Chris Callison-Burch and Peter Clark},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=8wgNZ7Kado}
} | Language agents have shown some ability to interact with an external environment, e.g., a virtual world such as ScienceWorld, to perform complex tasks, e.g., growing a plant, without the startup costs of reinforcement learning. However, despite their zero-shot capabilities, these agents to date do not continually improve over time, beyond performance refinement on a specific task. Here we present CLIN, the first language-based agent to achieve this, so that it continually improves over multiple trials, including when both the environment and task are varied, and without requiring parameter updates. Our approach is to use a persistent, dynamic, textual memory, centered on causal abstractions (rather than general ''helpful hints''), that is regularly updated after each trial so that the agent gradually learns useful knowledge for new trials. In the ScienceWorld benchmark, CLIN is able to continually improve on repeated trials on the same task and environment, outperforming state-of-the-art reflective language agents like Reflexion by 23 absolute points. CLIN can also transfer its learning to new environments (or new tasks), improving its zero-shot performance by 4 points (13 for new tasks) and can further improve performance there through continual memory updates, enhancing performance by an additional 17 points (7 for new tasks). This suggests a new architecture for agents built on frozen models that can still continually and rapidly improve over time. | CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization | [
"Bodhisattwa Prasad Majumder",
"Bhavana Dalvi Mishra",
"Peter Jansen",
"Oyvind Tafjord",
"Niket Tandon",
"Li Zhang",
"Chris Callison-Burch",
"Peter Clark"
] | Workshop/ALOE | 2310.10134 | [
""
] | https://huggingface.co/papers/2310.10134 | 1 | 1 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=8lvT20K65h | @inproceedings{
grillotti2023skillconditioned,
title={Skill-Conditioned Policy Optimization with Successor Features Representations},
author={Luca Grillotti and Maxence Faldor and Borja G. Le{\'o}n and Antoine Cully},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=8lvT20K65h}
} | A key aspect of intelligence is the ability to exhibit a wide range of behaviors to adapt to unforeseen situations. Designing artificial agents that are capable of showcasing a broad spectrum of skills is a long-standing challenge in Artificial Intelligence. In the last decade, progress in deep reinforcement learning has made it possible to solve complex tasks with high-dimensional, continuous state and action spaces. However, most approaches return only one highly-specialized solution to a single problem. We introduce a Skill-Conditioned OPtimal Agent (SCOPA) that leverages successor features representations to learn a continuous range of skills that solve a task. We extend the generalized policy iteration framework with a policy skill improvement update based on successor features that is analogous to the classic policy improvement update. This novel skill improvement update enables skills to be learned and executed efficiently. From this result, we develop an algorithm that seamlessly unifies value function and successor features policy iteration with constrained optimization to (1) maximize performance, while (2) executing the desired skills. Compared with other skill-conditioned reinforcement learning methods, SCOPA reaches significantly higher performance and skill space coverage on challenging continuous control locomotion tasks with various types of skills. We also demonstrate that the diversity of skills is useful in five downstream adaptation tasks. Videos of our results are available at: https://bit.ly/scopa. | Skill-Conditioned Policy Optimization with Successor Features Representations | [
"Luca Grillotti",
"Maxence Faldor",
"Borja G. León",
"Antoine Cully"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=5cEQ4ZOsIN | @inproceedings{
patarroyo2023assemblyca,
title={Assembly{CA}: A Benchmark of Open-Endedness for Discrete Cellular Automata},
author={Keith Yuan Patarroyo and Abhishek Sharma and Sara Walker and Lee Cronin},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=5cEQ4ZOsIN}
} | We introduce AssemblyCA, a framework for utilizing cellular automata (CA) designed to benchmark the potential of open-ended processes. The benchmark quantifies the open-endedness of a system composed of resources, agents interacting with CAs, and a set of generated artifacts. We quantify the amount of open-endedness by taking the generated artifacts or objects and analyzing them using the tools of assembly theory (AT). Assembly theory can be used to identify selection in systems that produce objects that can be decomposed into atomic units, where these objects can exist in high copy numbers. By combining an assembly space measure with the copy number of an object we can quantify the complexity of objects that have a historical contingency. Moreover, this framework allows us to accurately quantify the indefinite generation of novel, diverse, and complex objects, the signature of open-endedness. We benchmark different measures from the assembly space with standard diversity and complexity measures that lack historical contingency. Finally, the open-endedness of three different systems is quantified by performing an undirected exploration in two-dimensional life-like CA, a cultural exploration provided by human experimenters, and an algorithmic exploration by a set of programmed agents. | AssemblyCA: A Benchmark of Open-Endedness for Discrete Cellular Automata | [
"Keith Yuan Patarroyo",
"Abhishek Sharma",
"Sara Walker",
"Lee Cronin"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=4gSkzVAF6g | @inproceedings{
suarez2023pufferlib,
title={PufferLib: Making Reinforcement Learning Libraries and Environments Play Nice},
author={Joseph Suarez},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=4gSkzVAF6g}
} | Common simplifying assumptions often cause standard reinforcement learning (RL) methods to fail on complex, open-ended environments. Creating a new wrapper for each environment and learning library can help alleviate these limitations, but building them is labor-intensive and error-prone. This practical tooling gap restricts the applicability of RL as a whole. To address this challenge, PufferLib transforms complex environments into a broadly compatible, vectorized format that eliminates the need for bespoke conversion layers and enables rigorous cross-environment testing. PufferLib does this without deviating from standard reinforcement learning APIs, significantly reducing the technical overhead. We release PufferLib's complete source code under the MIT license, a pip module, a containerized setup, comprehensive documentation, and example integrations. We also maintain a community Discord channel to facilitate support and discussion. | PufferLib: Making Reinforcement Learning Libraries and Environments Play Nice | [
"Joseph Suarez"
] | Workshop/ALOE | 2406.12905 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=3aPLOQTcVL | @inproceedings{
yue2023tdgr,
title={t-{DGR}: A Trajectory-Based Deep Generative Replay Method for Continual Learning in Decision Making},
author={William Yue and Bo Liu and Peter Stone},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=3aPLOQTcVL}
} | Deep generative replay has emerged as a promising approach for continual learning in decision-making tasks. This approach addresses the problem of catastrophic forgetting by leveraging the generation of trajectories from previously encountered tasks to augment the current dataset. However, existing deep generative replay methods for continual learning rely on autoregressive models, which suffer from compounding errors in the generated trajectories. In this paper, we propose a simple, scalable, and non-autoregressive method for continual learning in decision-making tasks using a diffusion model that generates task samples conditioned on the trajectory timestep. We evaluate our method on Continual World benchmarks and find that our approach achieves state-of-the-art performance on the average success rate metric compared to other continual learning methods. | t-DGR: A Trajectory-Based Deep Generative Replay Method for Continual Learning in Decision Making | [
"William Yue",
"Bo Liu",
"Peter Stone"
] | Workshop/ALOE | 2401.02576 | [
"https://github.com/williamyue37/t-dgr"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=16fkkkCeOC | @inproceedings{
miconi2023procedural,
title={Procedural generation of meta-reinforcement learning tasks},
author={Thomas Miconi},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=16fkkkCeOC}
} | Open-endedness stands to benefit from the ability to generate an infinite variety of diverse, challenging environments. One particularly interesting type of challenge is meta-learning (``learning-to-learn''), a hallmark of intelligent behavior. However, the number of meta-learning environments in the literature is limited. Here we describe a parametrized space for simple meta-reinforcement learning (meta-RL) tasks with arbitrary stimuli. The parametrization allows us to randomly generate an arbitrary number of novel simple meta-learning tasks. The parametrization is expressive enough to include many well-known meta-RL tasks, such as bandit problems, the Harlow task, T-mazes, the Daw two-step task and others. Simple extensions allow it to capture tasks based on two-dimensional topological spaces, such as full mazes or find-the-spot domains. We describe a number of randomly generated meta-RL domains of varying complexity and discuss potential issues arising from random generation. | Procedural generation of meta-reinforcement learning tasks | [
"Thomas Miconi"
] | Workshop/ALOE | 2302.05583 | [
"https://github.com/thomasmiconi/meta-task-generator"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=09wy0Rtacu | @inproceedings{
wu2023curriculum,
title={Curriculum Learning from Smart Retail Investors: Towards Financial Open-endedness},
author={Kent Wu and Ziyi Xia and Shuaiyu Chen and Xiao-Yang Liu},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=09wy0Rtacu}
} | The integration of data-driven supervised learning and reinforcement learning has demonstrated promising potential for stock trading. It has been observed that introducing training examples to a learning algorithm in a meaningful order or sequence, known as curriculum learning, can speed up convergence and yield improved solutions. In this paper, we present a financial curriculum learning method that achieves superhuman performance in automated stock trading. First, with high-quality financial datasets from smart retail investors, such as trading logs, training our algorithm through imitation learning results in a reasonably competent solution. Subsequently, leveraging reinforcement learning techniques in a second stage, we develop a novel curriculum learning strategy that helps traders beat the stock market. | Curriculum Learning from Smart Retail Investors: Towards Financial Open-endedness | [
"Kent Wu",
"Ziyi Xia",
"Shuaiyu Chen",
"Xiao-Yang Liu"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wI4SXLFqVK | @inproceedings{
zhang2023semiimplicit,
title={Semi-Implicit Neural Ordinary Differential Equations for Learning Chaotic Systems},
author={Hong Zhang and Ying Liu and Romit Maulik},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=wI4SXLFqVK}
} | Classical neural ordinary differential equations (ODEs) trained by using explicit methods are intrinsically constrained by stability,
severely affecting their efficiency and robustness in learning complex spatiotemporal dynamics, particularly those displaying chaotic behavior.
In this work we propose a semi-implicit neural ODE approach that capitalizes on the partitionable structure of the underlying dynamics.
In our method the neural ODE is partitioned into a linear part treated implicitly for enhanced stability and a nonlinear part treated explicitly.
We apply this approach to learn chaotic trajectories of the Kuramoto--Sivashinsky equation.
Our results demonstrate that our approach significantly outperforms existing approaches for coarse-resolution data and remains efficient for fine-resolution data where existing techniques become intractable. | Semi-Implicit Neural Ordinary Differential Equations for Learning Chaotic Systems | [
"Hong Zhang",
"Ying Liu",
"Romit Maulik"
] | Workshop/HeavyTails | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ur9hHy6VT7 | @inproceedings{
dupuis2023from,
title={From Mutual Information to Expected Dynamics: New Generalization Bounds for Heavy-Tailed {SGD}},
author={Benjamin Dupuis and Paul Viallard},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=ur9hHy6VT7}
} | Understanding the generalization abilities of modern machine learning algorithms has been a major research topic over the past decades. In recent years, the learning dynamics of Stochastic Gradient Descent (SGD) have been related to heavy-tailed dynamics. This has been successfully applied to generalization theory by exploiting the fractal properties of those dynamics. However, the derived bounds depend on mutual information (decoupling) terms that are beyond the reach of computability. In this work, we prove generalization bounds over the trajectory of a class of heavy-tailed dynamics, without those mutual information terms. Instead, we introduce a geometric decoupling term by comparing the learning dynamics (depending on the empirical risk) with an expected one (depending on the population risk). We further upper-bound this geometric term, by using techniques from the heavy-tailed and the fractal literature, making it fully computable. Moreover, as an attempt to tighten the bounds, we propose a PAC-Bayesian setting based on perturbed dynamics, in which the same geometric term plays a crucial role and can still be bounded using the techniques described above. | From Mutual Information to Expected Dynamics: New Generalization Bounds for Heavy-Tailed SGD | [
"Benjamin Dupuis",
"Paul Viallard"
] | Workshop/HeavyTails | 2312.00427 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=tkYdr7qVlE | @inproceedings{
yaman2023instanceaware,
title={Instance-Aware Repeat Factor Sampling for Long-Tailed Object Detection},
author={Burhaneddin Yaman and Tanvir Mahmud and Chun-Hao Liu},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=tkYdr7qVlE}
} | We propose an embarrassingly simple method -- instance-aware repeat factor sampling (IRFS) -- to address the problem of imbalanced data in long-tailed object detection. Imbalanced datasets in real-world object detection often suffer from a large disparity in the number of instances for each class. To improve the generalization performance of object detection models on rare classes, various data sampling techniques have been proposed. Repeat factor sampling (RFS) has shown promise due to its simplicity and effectiveness. Despite its efficiency, RFS completely neglects the instance counts and solely relies on the image count during the re-sampling process. However, instance count may immensely vary for different classes with similar image counts. Such variation highlights the importance of both image and instance counts for addressing the long-tail distributions. Thus, we propose IRFS which unifies instance and image counts for the re-sampling process to be aware of different perspectives of the imbalance in long-tailed datasets. Our method shows promising results on the challenging LVIS v1.0 benchmark dataset over various architectures and backbones, demonstrating its effectiveness in improving the performance of object detection models on rare classes with a relative $+50\%$ average precision (AP) improvement over counterpart RFS. IRFS can serve as a strong baseline and be easily incorporated into existing long-tailed frameworks. | Instance-Aware Repeat Factor Sampling for Long-Tailed Object Detection | [
"Burhaneddin Yaman",
"Tanvir Mahmud",
"Chun-Hao Liu"
] | Workshop/HeavyTails | 2305.08069 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=rWptAXSkIC | @inproceedings{
tripuraneni2023metaanalysis,
title={Meta-Analysis of Randomized Experiments with Applications to Heavy-Tailed Response Data},
author={Nilesh Tripuraneni and Dominique Perrault-Joncas and Dhruv Madeka and Dean Foster and Michael Jordan},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=rWptAXSkIC}
} | A central obstacle in the objective assessment of treatment effect (TE) estimators in randomized control trials (RCTs) is the lack of ground truth (or validation set) to test their performance. In this paper, we propose a novel cross-validation-like methodology to address this challenge. The key insight of our procedure is that the noisy (but unbiased) difference-of-means estimate can be used as a ground truth ``label" on a portion of the RCT, to test the performance of an estimator trained on the other portion. We combine this insight with an aggregation scheme, which borrows statistical strength across a large collection of RCTs, to present an end-to-end methodology for judging an estimator's ability to recover the underlying treatment effect. We evaluate our methodology across 699 RCTs implemented in the Amazon supply chain. In this heavy-tailed setting, our methodology suggests that procedures that aggressively downweight or truncate large values, while introducing bias, lower the variance enough to ensure that the treatment effect is more accurately estimated. | Meta-Analysis of Randomized Experiments with Applications to Heavy-Tailed Response Data | [
"Nilesh Tripuraneni",
"Dominique Perrault-Joncas",
"Dhruv Madeka",
"Dean Foster",
"Michael Jordan"
] | Workshop/HeavyTails | 2112.07602 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=l4GYs60kre | @inproceedings{
buchanan2023the,
title={The Effects of Ensembling on Long-Tailed Data},
author={E. Kelly Buchanan and Geoff Pleiss and John Patrick Cunningham},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=l4GYs60kre}
} | Deep ensembles are a popular approach to improve over single model performance (Lakshminarayanan et al. 2017), either by averaging logits (Hinton et al. 2015, Webb et al. 2020, Gontijo-Lopes et al. 2022), or probabilities of multiple models (Dietterich 2000, Lakshminarayanan et al. 2017, Kumar et al. 2022). Recent theoretical work has shown that logit and probability ensembles have different benefits (Gupta et al. 2022, Wood et al. 2023), but to our knowledge these ensembling approaches have not been compared systematically for balanced vs imbalanced data. In this work, we show that for balanced datasets, there is no significant difference between logit and probability ensembles in terms of accuracy and ranked calibration. However, we show that in long tailed datasets, there are gains from logit ensembling when combined with imbalance bias reduction losses. In turn, our results suggest that there are benefits to be gained from loss-aware ensembles when dealing with long-tail data. | The Effects of Ensembling on Long-Tailed Data | [
"E. Kelly Buchanan",
"Geoff Pleiss",
"John Patrick Cunningham"
] | Workshop/HeavyTails | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=iJkskCUl2f | @inproceedings{
adomaityte2023highdimensional,
title={High-dimensional robust regression under heavy-tailed data: Asymptotics and Universality},
author={Urte Adomaityte and Leonardo Defilippis and Bruno Loureiro and Gabriele Sicuro},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=iJkskCUl2f}
} | We investigate the high-dimensional properties of robust regression estimators in the presence of heavy-tailed contamination of both the covariates and response functions. In particular, we provide a sharp asymptotic characterisation of M-estimators trained on a family of elliptical covariate and noise data distributions including cases where second and higher moments do not exist. We show that, despite being consistent, the Huber loss with optimally tuned location parameter $\delta$ is suboptimal in the high-dimensional regime in the presence of heavy-tailed noise, necessitating regularisation for optimal performance. This result also uncovers the existence of a curious transition in $\delta$ as a function of the sample complexity and contamination. Moreover, we derive the decay rates for the excess risk of ridge regression. We show that, while it is optimal and universal for noise distributions with finite second moment, its decay rate can be considerably faster when the covariates' second moment does not exist. Finally, we show that our formulas readily generalise to a richer family of models and data distributions, such as generalised linear estimation with arbitrary convex regularisation trained on mixture models. | High-dimensional robust regression under heavy-tailed data: Asymptotics and Universality | [
"Urte Adomaityte",
"Leonardo Defilippis",
"Bruno Loureiro",
"Gabriele Sicuro"
] | Workshop/HeavyTails | 2309.16476 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=hzlsBtfKZ1 | @inproceedings{
rhee2023large,
title={Large Deviations and Metastability Analysis for Heavy-Tailed Dynamical Systems},
author={Chang-Han Rhee and Xingyu Wang},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=hzlsBtfKZ1}
} | We study large deviations and metastability of heavy-tailed stochastic dynamical systems and provide the heavy-tailed counterparts of the classical Freidlin-Wentzell and Eyring-Kramers theory. Our findings address the rare-event analysis for sufficiently general events and heavy-tailed dynamical systems. We also unveil intricate phase transitions in the first exit problems under truncated heavy-tailed noises. Furthermore, our results provide tools to systematically study the connection between the global dynamics of stochastic gradient descent (SGD) under heavy-tailed noises and the generalization mystery of deep learning. | Large Deviations and Metastability Analysis for Heavy-Tailed Dynamical Systems | [
"Chang-Han Rhee",
"Xingyu Wang"
] | Workshop/HeavyTails | 2307.03479 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=hyPoaUJwXI | @inproceedings{
cabannes2023associative,
title={Associative Memories with Heavy-Tailed Data},
author={Vivien Cabannes and Elvis Dohmatob and Alberto Bietti},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=hyPoaUJwXI}
} | Learning arguably involves the discovery and memorization of abstract rules.
But how do associative memories appear in transformer architectures optimized with gradient descent algorithms?
We derive precise scaling laws for a simple input-output associative memory model with respect to parameter size, and discuss the statistical efficiency of different estimators, including optimization-based algorithms.
We provide extensive numerical experiments to validate and interpret theoretical results, including fine-grained visualizations of the stored memory associations. | Associative Memories with Heavy-Tailed Data | [
"Vivien Cabannes",
"Elvis Dohmatob",
"Alberto Bietti"
] | Workshop/HeavyTails | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=hidLUSfu0D | @inproceedings{
rosenfeld2023outliers,
title={Outliers with Opposing Signals Have an Outsized Effect on Neural Network Optimization},
author={Elan Rosenfeld and Andrej Risteski},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=hidLUSfu0D}
} | We identify a new phenomenon in network optimization which arises from the interaction of depth and a particular heavy-tailed structure in natural data. Our result offers intuitive explanations for several previously reported observations about network training dynamics. In particular, it implies a conceptually new cause for progressive sharpening and the edge of stability; we also highlight connections to other concepts in optimization and generalization including grokking, simplicity bias, and Sharpness-Aware Minimization.
Experimentally, we demonstrate the significant influence of paired groups of outliers in the training data with strong *opposing signals*: consistent, large magnitude features which dominate the network output throughout training and provide gradients which point in opposite directions. We describe how to identify these groups, explore what sets them apart, and carefully study their effect on the network's optimization and behavior. We complement these experiments with a mechanistic explanation on a toy example of opposing signals and a theoretical analysis of a two-layer linear network on a simple model. Our finding enables new qualitative predictions of training behavior which we confirm experimentally. It also provides a new lens through which to study and improve modern training practices for stochastic optimization, which we highlight via a case study of Adam versus SGD. | Outliers with Opposing Signals Have an Outsized Effect on Neural Network Optimization | [
"Elan Rosenfeld",
"Andrej Risteski"
] | Workshop/HeavyTails | 2311.04163 | [
""
] | https://huggingface.co/papers/2311.04163 | 1 | 1 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=envolCEjJP | @inproceedings{
lee2023deep,
title={Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility},
author={Hoil Lee and Fadhel Ayed and Paul Jung and Juho Lee and Hongseok Yang and Francois Caron},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=envolCEjJP}
} | This work studies the infinite-width limit of deep feedforward neural networks whose weights are dependent, and modelled via a mixture of Gaussian distributions. Under this model, we show that each layer of the infinite-width neural network can be characterised by two simple quantities: a non-negative scalar parameter and a L\'evy measure on the positive reals. If the scalar parameters are strictly positive and the L\'evy measures are trivial at all hidden layers, then one recovers the classical Gaussian process (GP) limit, obtained with iid Gaussian weights. More interestingly, if the L\'evy measure of at least one layer is non-trivial, we obtain a mixture of Gaussian processes (MoGP) in the large-width limit. The behaviour of the neural network in this regime is very different from the GP regime. One obtains correlated outputs, with non-Gaussian distributions, possibly with heavy tails. We illustrate some of the benefits of the MoGP regime over the GP regime in terms of representation learning and compressibility on simulated, MNIST and Fashion MNIST datasets. | Deep neural networks with dependent weights:
Gaussian Process mixture limit, heavy tails, sparsity and compressibility | [
"Hoil Lee",
"Fadhel Ayed",
"Paul Jung",
"Juho Lee",
"Hongseok Yang",
"Francois Caron"
] | Workshop/HeavyTails | [
"https://github.com/fadhela/mogp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=dHGNgkUcGd | @inproceedings{
cohen2023adaptive,
title={Adaptive Gradient Methods at the Edge of Stability},
author={Jeremy Cohen and Behrooz Ghorbani and Shankar Krishnan and Naman Agarwal and Sourabh Medapati and Michal Badura and Daniel Suo and Zachary Nado and George E. Dahl and Justin Gilmer},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=dHGNgkUcGd}
} | Very little is known about the training dynamics of adaptive gradient methods like Adam in deep learning. In this paper, we shed light on the behavior of these algorithms in the full-batch and sufficiently large batch settings. Specifically, we empirically demonstrate that during full-batch training, the maximum eigenvalue of the \emph{preconditioned} Hessian typically equilibrates at a certain numerical value --- the stability threshold of a gradient descent algorithm. For Adam with step size $\eta$ and $\beta_1 = 0.9$, this stability threshold is $38/\eta$. Similar effects occur during minibatch training, especially as the batch size grows. Yet, even though adaptive methods train at the “Adaptive Edge of Stability” (AEoS), their behavior in this regime differs in a significant way from that of non-adaptive methods at the EoS. Whereas non-adaptive algorithms at the EoS are blocked from entering high-curvature regions of the loss landscape, adaptive gradient methods at the AEoS keep advancing into high-curvature regions, while adapting the preconditioner to compensate. Our findings can serve as a foundation for the community’s future understanding of adaptive gradient methods in deep learning. | Adaptive Gradient Methods at the Edge of Stability | [
"Jeremy Cohen",
"Behrooz Ghorbani",
"Shankar Krishnan",
"Naman Agarwal",
"Sourabh Medapati",
"Michal Badura",
"Daniel Suo",
"Zachary Nado",
"George E. Dahl",
"Justin Gilmer"
] | Workshop/HeavyTails | 2207.14484 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=XUeGYkOGF0 | @inproceedings{
sha2023online,
title={Online Student-$t$ Processes with an Overall-local Scale Structure for Modelling Non-stationary Data},
author={Taole Sha and Michael Zhang},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=XUeGYkOGF0}
} | Time-dependent data often exhibit characteristics, such as non-stationarity and heavy-tailed errors, that would be inappropriate to model with the typical assumptions used in popular models. Thus, more flexible approaches are required to be able to accommodate such issues. To this end, we propose a Bayesian mixture of student-$t$ processes with an overall-local scale structure for the covariance. Moreover, we use a sequential Monte Carlo (SMC) sampler in order to perform online inference as data arrive in real-time. We demonstrate the superiority of our proposed approach compared to typical Gaussian process-based models on real-world data sets in order to prove the necessity of using mixtures of student-$t$ processes. | Online Student-t Processes with an Overall-local Scale Structure for Modelling Non-stationary Data | [
"Taole Sha",
"Michael Zhang"
] | Workshop/HeavyTails | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=UaCrxyUeyE | @inproceedings{
gamba2023on,
title={On the Varied Faces of Overparameterization in Supervised and Self-Supervised Learning},
author={Matteo Gamba and Arna Ghosh and Kumar Krishna Agrawal and Blake Aaron Richards and Hossein Azizpour and M{\r{a}}rten Bj{\"o}rkman},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=UaCrxyUeyE}
} | The quality of the representations learned by neural networks depends on several factors, including the loss function, learning algorithm, and model architecture. In this work, we use information geometric measures to assess the representation quality in a principled manner. We demonstrate that the sensitivity of learned representations to input perturbations, measured by the spectral norm of the feature Jacobian, provides valuable information about downstream generalization. On the other hand, measuring the coefficient of spectral decay observed in the eigenspectrum of feature covariance provides insights into the global representation geometry. First, we empirically establish an equivalence between these notions of representation quality and show that they are inversely correlated. Second, our analysis reveals the varying roles that overparameterization plays in improving generalization.
Unlike in supervised learning, we observe that increasing model width leads to higher discriminability and less smoothness in the self-supervised regime.
Furthermore, we report that there is no observable double descent phenomenon in SSL with non-contrastive objectives for commonly used parameterization regimes, which opens up new opportunities for tight asymptotic analysis. Taken together, our results provide a loss-aware characterization of the different roles of overparameterization in supervised and self-supervised learning. | On the Varied Faces of Overparameterization in Supervised and Self-Supervised Learning | [
"Matteo Gamba",
"Arna Ghosh",
"Kumar Krishna Agrawal",
"Blake Aaron Richards",
"Hossein Azizpour",
"Mårten Björkman"
] | Workshop/HeavyTails | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=TLO3pOHCyx | @inproceedings{
wan2023neural,
title={Neural network compression with heavy-tailed {SGD}},
author={Yijun Wan and Abdellatif Zaidi and Umut Simsekli},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=TLO3pOHCyx}
} | Neural network compression has been an increasingly important subject, due to its practical implications in terms of reducing the computational requirements and its theoretical implications, as there is an explicit connection between compressibility and the generalization error. Recent studies have shown that the choice of the hyperparameters of stochastic gradient descent (SGD) can have an effect on the compressibility of the learned parameter vector. Even though these results have shed some light on the role of the training dynamics over compressibility, they relied on unverifiable assumptions and the resulting theory does not provide a practical guideline due to its implicitness. In this study, we propose a simple modification for SGD, such that the outputs of the algorithm will be provably compressible without making any nontrivial assumptions. We consider a one-hidden-layer neural network trained with SGD and we inject additive heavy-tailed noise to the iterates at each iteration. We then show that, for any compression rate, there exists a level of overparametrization (i.e., the number of hidden units), such that the output of the algorithm will be compressible with high probability. We illustrate our approach on experiments, where the results suggest that the proposed approach achieves compressibility with a slight compromise from the training and test error. | Neural network compression with heavy-tailed SGD | [
"Yijun Wan",
"Abdellatif Zaidi",
"Umut Simsekli"
] | Workshop/HeavyTails | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RNGvkRELRI | @inproceedings{
hasan2023representation,
title={Representation Learning for Extremes},
author={Ali Hasan and Yuting Ng and Jose Blanchet and Vahid Tarokh},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=RNGvkRELRI}
} | Extreme events are potentially catastrophic events that occur infrequently within an observation time frame, and it is necessary to understand the distribution of these events to properly plan for them. Extreme value theory provides a theoretical framework for extrapolating to the tails of a distribution using limited observations. However, for high-dimensional data such as images, covariates are generally not extreme but perhaps the features are extreme. In this work, we propose a framework for learning representations according to properties of extreme value theory. Specifically, we use the max-stability property of extreme value distributions to inform the representations of the model such that they extrapolate to the rare data observations. We theoretically characterize the properties of the model and provide an identifiability result for the parameters of the latent distribution. Our preliminary results suggest the promise of the method for extrapolating to regions of the distribution with little density. | Representation Learning for Extremes | [
"Ali Hasan",
"Yuting Ng",
"Jose Blanchet",
"Vahid Tarokh"
] | Workshop/HeavyTails | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=PQn3PoSsPb | @inproceedings{
yu2023high,
title={High Probability Guarantees for Random Reshuffling},
author={Hengxu Yu and Xiao Li},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=PQn3PoSsPb}
} | We study the probabilistic behaviors of stochastic gradient descent with random reshuffling (RR) on nonconvex problems. We prove that the same complexity (except for a logarithmic term) as in the in-expectation case also holds with high probability, which characterizes the performance of RR for a single run instead of averaging infinitely many realizations. Our analysis does not impose any additional assumptions on the stochastic gradient errors, which admit heavy tails. This is in contrast to high probability analyses of SGD that rely on sub-Gaussian stochastic gradient errors or tricks like clipping, momentum, etc. Furthermore, leveraging the established high probability error bounds, we propose a simple stopping criterion for RR that introduces little computational cost. We prove that the function value strictly decreases with high probability before the stopping criterion is triggered, ensuring that the criterion will indeed be activated. Finally, a "last iterate" result is built for the iteration returned with this stopping criterion. We believe that our new developments for RR serve as a stepping stone towards enabling more refined high probability analyses for characterizing its performance. | High Probability Guarantees for Random Reshuffling | [
"Hengxu Yu",
"Xiao Li"
] | Workshop/HeavyTails | 2311.11841 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=I00Z75alN6 | @inproceedings{
genalti2023towards,
title={Towards Fully Adaptive Regret Minimization in Heavy-Tailed Bandits},
author={Gianmarco Genalti and Lupo Marsigli and Nicola Gatti and Alberto Maria Metelli},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=I00Z75alN6}
} | Heavy-tailed distributions naturally arise in many settings, from finance to telecommunications.
While regret minimization under sub-Gaussian or bounded support rewards has been widely studied, learning on heavy-tailed distributions only gained popularity over the last decade.
In the stochastic heavy-tailed bandit problem, an agent learns under the assumption that the distributions have finite moments of maximum order $1+\epsilon$ which are uniformly bounded by a constant $u$, for some $\epsilon \in (0,1]$. To the best of our knowledge, the literature only provides algorithms requiring these two quantities as input.
In this paper we study the stochastic adaptive heavy-tailed bandit, a variation of the standard setting where both $\epsilon$ and $u$ are unknown to the agent.
We show that adaptivity comes at a cost, introducing two lower bounds on the regret of any adaptive algorithm that imply a higher regret w.r.t. the standard setting.
Finally, we introduce a specific distributional assumption and provide Adaptive Robust UCB, a regret minimization strategy matching the known lower bound for the heavy-tailed MAB problem. | Towards Fully Adaptive Regret Minimization in Heavy-Tailed Bandits | [
"Gianmarco Genalti",
"Lupo Marsigli",
"Nicola Gatti",
"Alberto Maria Metelli"
] | Workshop/HeavyTails | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=C6PiH9Fkjd | @inproceedings{
schaipp2023robust,
title={Robust gradient estimation in the presence of heavy-tailed noise},
author={Fabian Schaipp and Umut Simsekli and Robert M. Gower},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=C6PiH9Fkjd}
} | In applications such as training transformers on NLP tasks, or distributed learning in the presence of corrupted nodes, the stochastic gradients have a heavy-tailed distribution. We argue that in these settings, momentum is not the best suited method for estimating the gradient. Instead, variants of momentum with different forms of clipping are better suited. Our argument is based on the following: in the presence of heavy tailed noise the sample median of the gradient is a better estimate than the sample mean. We then devise new iterative methods for computing the sample median on the fly based on the SPP (stochastic proximal point) method. These SPP methods applied to different definitions of median give rise to known and new type of clipped momentum estimates. We find that these clipped momentum estimates are more robust at estimating the gradient in the presence of noise coming from an alpha-stable distribution, and for a transformer architecture on the PTB and Wikitext-2 datasets, in particular when the batch size is large. | Robust gradient estimation in the presence of heavy-tailed noise | [
"Fabian Schaipp",
"Umut Simsekli",
"Robert M. Gower"
] | Workshop/HeavyTails | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=9xNDQJER95 | @inproceedings{
kindap2023generalised,
title={Generalised Hyperbolic State-space Models for Inference in Dynamic Systems},
author={Yaman Kindap and Simon J. Godsill},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=9xNDQJER95}
} | In this work we study linear vector stochastic differential equation (SDE) models driven by the generalised hyperbolic (GH) L{\'e}vy process for inference in continuous-time non-Gaussian filtering problems. The GH family of stochastic processes offers a flexible framework for modelling of non-Gaussian, heavy-tailed characteristics and includes the normal inverse-Gaussian, variance-gamma and Student-t processes as special cases. We present continuous-time simulation methods for the solution of vector SDE models driven by GH processes and novel inference methodologies using a variant of sequential Markov chain Monte Carlo (MCMC). As an example a particular formulation of Langevin dynamics is studied within this framework. The model is applied to both a synthetically generated data set and a real-world financial series to demonstrate its capabilities. | Generalised Hyperbolic State-space Models for Inference in Dynamic Systems | [
"Yaman Kindap",
"Simon J. Godsill"
] | Workshop/HeavyTails | 2309.11422 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=8oe7yIYRi1 | @inproceedings{
daems2023variational,
title={Variational Inference for {SDE}s Driven by Fractional Noise},
author={Rembert Daems and Manfred Opper and Guillaume Crevecoeur and Tolga Birdal},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=8oe7yIYRi1}
} | We present a novel variational framework for performing inference in (neural) stochastic differential equations (SDEs) driven by Markov-approximate fractional Brownian motion (fBM). SDEs offer a versatile tool for modeling real-world continuous-time dynamic systems with inherent noise and randomness. Combining SDEs with the powerful inference capabilities of variational methods enables the learning of representative function distributions through stochastic gradient descent. However, conventional SDEs typically assume the underlying noise to follow a Brownian motion (BM), which hinders their ability to capture long-term dependencies. In contrast, fractional Brownian motion (fBM) extends BM to encompass non-Markovian dynamics, but existing methods for inferring fBM parameters are either computationally demanding or statistically inefficient.
In this paper, building upon the Markov approximation of fBM, we derive the evidence lower bound essential for efficient variational inference of posterior path measures, drawing from the well-established field of stochastic analysis. Additionally, we provide a closed-form expression to determine optimal approximation coefficients. Furthermore, we propose the use of neural networks to learn the drift, diffusion and control terms within our variational posterior, leading to the variational training of neural-SDEs. In this framework, we also optimize the Hurst index, governing the nature of our fractional noise. Beyond validation on synthetic data, we contribute a novel architecture for variational latent video prediction, an approach that, to the best of our knowledge, enables the first variational neural-SDE application to video perception.
"Rembert Daems",
"Manfred Opper",
"Guillaume Crevecoeur",
"Tolga Birdal"
] | Workshop/HeavyTails | 2310.12975 | [
""
] | https://huggingface.co/papers/2310.12975 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=01eUekTYvE | @inproceedings{
battash2023revisiting,
title={Revisiting the noise Model of {SGD}},
author={Barak Battash and Lior Wolf and Ofir Lindenbaum},
booktitle={NeurIPS 2023 Workshop Heavy Tails in Machine Learning},
year={2023},
url={https://openreview.net/forum?id=01eUekTYvE}
} | The effectiveness of stochastic gradient descent (SGD) is significantly influenced by stochastic gradient noise (SGN). Following the central limit theorem, stochastic gradient noise (SGN) was initially described as Gaussian, but recently, Simsekli et al. demonstrated that SαS Lévy better characterizes the stochastic gradient noise. Here, we revisit the noise model of SGD and provide robust, comprehensive empirical evidence that SGN is heavy-tailed and is better represented by the SαS distribution. Furthermore, we argue that different deep neural network (DNN) parameters preserve distinct SGN properties throughout training. We develop a novel framework based on Lévy-driven stochastic differential equation (SDE), where one-dimensional Lévy processes describe each DNN parameter. This leads to a more accurate characterization of the dynamics of SGD around local minima. | Revisiting the noise Model of SGD | [
"Barak Battash",
"Lior Wolf",
"Ofir Lindenbaum"
] | Workshop/HeavyTails | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=zqcEQVzvZx | @inproceedings{
barnes2023massively,
title={Massively Scalable Inverse Reinforcement Learning in Google Maps},
author={Matt Barnes and Matthew Abueg and Oliver F. Lange and Matt Deeds and Jason Trader and Denali Molitor and Markus Wulfmeier and Shawn O'Banion},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=zqcEQVzvZx}
} | Inverse reinforcement learning (IRL) offers a powerful and general framework for learning humans' latent preferences in route recommendation, yet no approach has successfully addressed planetary-scale problems with hundreds of millions of states and demonstration trajectories. In this paper, we introduce scaling techniques based on graph compression, spatial parallelization, and improved initialization conditions inspired by a connection to eigenvector algorithms. We revisit classic IRL methods in the routing context, and make the key observation that there exists a trade-off between the use of cheap, deterministic planners and expensive yet robust stochastic policies. This insight is leveraged in Receding Horizon Inverse Planning (RHIP), a new generalization of classic IRL algorithms that provides fine-grained control over performance trade-offs via its planning horizon. Our contributions culminate in a policy that achieves a 16-24% improvement in route quality at a global scale, and to the best of our knowledge, represents the largest published benchmark of IRL algorithms in a real-world setting to date. We conclude by conducting an ablation study of key components, presenting negative results from alternative eigenvalue solvers, and identifying opportunities to further improve scalability via IRL-specific batching strategies. | Massively Scalable Inverse Reinforcement Learning in Google Maps | [
"Matt Barnes",
"Matthew Abueg",
"Oliver F. Lange",
"Matt Deeds",
"Jason Trader",
"Denali Molitor",
"Markus Wulfmeier",
"Shawn O'Banion"
] | Workshop/GenPlan | 2305.11290 | [
""
] | https://huggingface.co/papers/2305.11290 | 0 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=zQbff3h0da | @inproceedings{
hao2023reasoning,
title={Reasoning with Language Model is Planning with World Model},
author={Shibo Hao and Yi Gu and Haodi Ma and Joshua Hong and Zhen Wang and Daisy Zhe Wang and Zhiting Hu},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=zQbff3h0da}
} | Large language models (LLMs) have shown remarkable reasoning capabilities, particularly with chain-of-thought (CoT) prompting. However, LLMs can still struggle with problems that are easy for humans, such as generating action plans for executing tasks in a given environment, or performing complex math or logical reasoning. The deficiency stems from the key fact that LLMs lack an internal *world model* to predict the world *state* (e.g., environment status, intermediate variable values) and simulate long-term outcomes of actions. This prevents LLMs from performing deliberate planning akin to human brains, which involves exploring alternative reasoning paths, anticipating future states and rewards, and iteratively refining existing reasoning steps. To overcome the limitations, we propose a new LLM reasoning framework, Reasoning via Planning (RAP). RAP repurposes the LLM as both a world model and a reasoning agent, and incorporates a principled planning algorithm (based on Monte Carlo Tree Search) for strategic exploration in the vast reasoning space. During reasoning, the LLM (as agent) incrementally builds a reasoning tree under the guidance of the LLM (as world model) and rewards, and efficiently obtains a high-reward reasoning path with a proper balance between exploration v.s. exploitation. We apply RAP to a variety of challenging reasoning problems including plan generation, math reasoning, and logical inference. Empirical results on these tasks demonstrate the superiority of RAP over various strong baselines, including CoT and least-to-most prompting with self-consistency. RAP on LLaMA-33B surpasses CoT on GPT-4 with 33\% relative improvement in a plan generation setting. | Reasoning with Language Model is Planning with World Model | [
"Shibo Hao",
"Yi Gu",
"Haodi Ma",
"Joshua Hong",
"Zhen Wang",
"Daisy Zhe Wang",
"Zhiting Hu"
] | Workshop/GenPlan | 2305.14992 | [
"https://github.com/ber666/llm-reasoners"
] | https://huggingface.co/papers/2305.14992 | 2 | 3 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=z9GfvNQ7qJ | @inproceedings{
derman2023robustness,
title={Robustness and Regularization in Reinforcement Learning},
author={Esther Derman and Yevgeniy Men and Matthieu Geist and Shie Mannor},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=z9GfvNQ7qJ}
} | Robust Markov decision processes (MDPs) tackle changing or partially known system dynamics. To solve them, one typically resorts to robust optimization, which can significantly increase computational complexity and limit scalability. On the other hand, policy regularization improves learning stability without impairing time complexity. Yet, it does not encompass uncertainty in the model dynamics. In this work, we aim to learn robust MDPs using regularization. We first show that policy regularization methods solve a particular instance of robust MDPs with uncertain rewards. We further extend this relationship to MDPs with uncertain transitions: this leads to a regularization term with an additional dependence on the value function. We then introduce twice regularized MDPs ($\text{R}^2$ MDPs), i.e., MDPs with value *and* policy regularization. The corresponding Bellman operators lead to planning and learning schemes with convergence and generalization guarantees, thus reducing robustness to regularization. We numerically show this two-fold advantage on tabular and physical domains, and illustrate the persistent efficacy of $\text{R}^2$ regularization. | Robustness and Regularization in Reinforcement Learning | [
"Esther Derman",
"Yevgeniy Men",
"Matthieu Geist",
"Shie Mannor"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=z5Ai1ljmlH | @inproceedings{
gu2023learning,
title={Learning Generalizable Visual Task Through Interaction},
author={Weiwei Gu and Anant Sah and Nakul Gopalan},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=z5Ai1ljmlH}
} | We present a framework for robots to learn novel visual concepts and visual tasks via in-situ linguistic interactions with human users. Previous approaches in computer vision have either used large pre-trained visual models to infer novel objects zero-shot, or added novel concepts along with their attributes and representations to a concept hierarchy. We extend the approaches that focus on learning visual concept hierarchies and take this ability one step further to demonstrate novel task solving on robots along with the learned visual concepts. To enable a visual concept learner to solve robotics tasks one-shot, we developed two distinct techniques.
Firstly, we propose a novel approach, Hi-Viscont (HIerarchical VISual CONcept learner for Task), which propagates information from a novel concept that is being taught to its parent nodes within a concept hierarchy.
This information propagation allows all concepts in a hierarchy to update as novel concepts are taught in a continual learning setting.
Secondly, we represent a visual task as a scene graph with language annotations, allowing us to create novel permutations of a demonstrated task zero-shot in-situ.
We compared Hi-Viscont with the baseline model (FALCON~\cite{mei2022falcon}) on visual question answering (VQA) in three domains.
While being comparable to the baseline model on leaf level concepts, Hi-Viscont achieves an improvement of over 9% on non-leaf concepts on average.
Additionally, we provide a demonstration where a human user teaches the robot visual tasks and concepts interactively.
With these results we demonstrate the ability of our model to learn tasks and concepts in a continual learning setting on the robot. | Learning Generalizable Visual Task Through Interaction | [
"Weiwei Gu",
"Anant Sah",
"Nakul Gopalan"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=yJYVcItE3K | @inproceedings{
huang2023nonadaptive,
title={Non-adaptive Online Finetuning for Offline Reinforcement Learning},
author={Audrey Huang and Mohammad Ghavamzadeh and Nan Jiang and Marek Petrik},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=yJYVcItE3K}
} | Offline reinforcement learning (RL) has emerged as an important framework for applying RL to real-life applications. However, the complete lack of online interactions causes technical difficulties, and the _online finetuning_ setting incorporates a limited form of online interactions---which is often available in practice---to address these challenges. Unfortunately, current theoretical frameworks for online finetuning either assume high online sample complexity and/or require deploying fully adaptive algorithms (i.e., unlimited policy changes), which restricts their application to real-world settings where online interactions and policy updates are expensive and limited. In this paper, we develop a new framework for online finetuning. Instead of competing with the optimal policy (which inherits the high sample complexity and adaptivity requirements of online RL), we aim to learn a new policy that improves as much as possible over the existing policy using a _pre-specified_ number of online samples and with a _non-adaptive_ data-collection policy. Our formulation reveals surprising nuances and suggests novel principles that distinguishes the finetuning problem from purely online and offline RL. | Non-adaptive Online Finetuning for Offline Reinforcement Learning | [
"Audrey Huang",
"Mohammad Ghavamzadeh",
"Nan Jiang",
"Marek Petrik"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=tz5Qga2tNJ | @inproceedings{
yang2023learning,
title={Learning Interactive Real-World Simulators},
author={Sherry Yang and Yilun Du and Seyed Kamyar Seyed Ghasemipour and Jonathan Tompson and Dale Schuurmans and Pieter Abbeel},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=tz5Qga2tNJ}
} | Generative models trained on internet data have revolutionized how text, image, and video content can be created. Perhaps the next milestone for generative models is to simulate realistic experience in response to actions taken by humans, robots, and other interactive agents. Applications of a real-world simulator range from controllable content creation in games and movies, to training embodied agents purely in simulation that can be directly deployed in the real world. We explore the possibility of learning a universal simulator (UniSim) of real-world interaction through generative modeling. We first make the important observation that natural datasets available for learning a real-world simulator are often rich along different axes (e.g., abundant objects in image data, densely sampled actions in robotics data, and diverse movements in navigation data). With careful orchestration of diverse datasets, each providing a different aspect of the overall experience, UniSim can emulate how humans and agents interact with the world by simulating the visual outcome of both high-level instructions such as “open the drawer” and low-level controls such as “move by x,y” from otherwise static scenes and objects. There are numerous use cases for such a real-world simulator. As an example, we use UniSim to train both high-level vision-language planners and low-level reinforcement learning policies, each of which exhibit zero-shot real-world transfer after training purely in a learned real-world simulator. We also show that other types of intelligence such as video captioning models can benefit from training with simulated experience in UniSim, opening up even wider applications. | Learning Interactive Real-World Simulators | [
"Sherry Yang",
"Yilun Du",
"Seyed Kamyar Seyed Ghasemipour",
"Jonathan Tompson",
"Dale Schuurmans",
"Pieter Abbeel"
] | Workshop/GenPlan | 2310.06114 | [
""
] | https://huggingface.co/papers/2310.06114 | 1 | 1 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=tKTTvWOzsZ | @inproceedings{
wu2023agentcentric,
title={Agent-Centric State Discovery for Finite-Memory {POMDP}s},
author={Lili Wu and Ben Evans and Riashat Islam and Raihan Seraj and Yonathan Efroni and Alex Lamb},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=tKTTvWOzsZ}
} | Discovering an informative, or agent-centric, state representation that encodes only the relevant information while discarding the irrelevant is a key challenge towards scaling reinforcement learning algorithms and efficiently applying them to downstream tasks. Prior works studied this problem in high-dimensional Markovian environments, when the current observation may be a complex object but is sufficient to decode the informative state. In this work, we consider the problem of discovering the agent-centric state in the more challenging high-dimensional non-Markovian setting, when the state can be decoded from a sequence of past observations. We establish that generalized inverse models can be adapted for learning agent-centric state representation for this task. Our results include asymptotic theory as well as negative results for alternative intuitive algorithms, such as encoding with only a forward-running sequence model. We complement these findings with a thorough empirical study on the agent-centric state discovery abilities of the different alternatives we put forward. Particularly notable is our analysis of past actions, where we show that these can be a double-edged sword: making the algorithms more successful when used correctly and causing dramatic failure when used incorrectly. | Agent-Centric State Discovery for Finite-Memory POMDPs | [
"Lili Wu",
"Ben Evans",
"Riashat Islam",
"Raihan Seraj",
"Yonathan Efroni",
"Alex Lamb"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=rVAUd3Tf56 | @inproceedings{
fan2023simple,
title={Simple Data Sharing for Multi-Tasked Goal-Oriented Problems},
author={Ying Fan and Jingling Li and Adith Swaminathan and Aditya Modi and Ching-An Cheng},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=rVAUd3Tf56}
} | Many important sequential decision problems -- from robotics and games to logistics -- are multi-tasked and goal-oriented. In this work, we frame them as Contextual Goal Oriented (CGO) problems, a goal-reaching special case of the contextual Markov decision process. CGO is a framework for designing multi-task agents that can follow instructions (represented by contexts) to solve goal-oriented tasks. We show that the CGO problem can be systematically tackled using datasets that are commonly obtainable: an unsupervised interaction dataset of transitions and a supervised dataset of context-goal pairs. Leveraging the goal-oriented structure of CGO, we propose a simple data sharing technique that can provably solve CGO problems offline under natural assumptions on the datasets' quality. While an offline CGO problem is a special case of offline reinforcement learning (RL) with unlabelled data, running a generic offline RL algorithm here can be overly conservative since the goal-oriented structure of CGO is ignored. In contrast, our approach carefully constructs an augmented Markov Decision Process (MDP) to avoid introducing unnecessary pessimistic bias. In the experiments, we demonstrate our algorithm can learn near-optimal context-conditioned policies in simulated CGO problems, outperforming offline RL baselines. | Simple Data Sharing for Multi-Tasked Goal-Oriented Problems | [
"Ying Fan",
"Jingling Li",
"Adith Swaminathan",
"Aditya Modi",
"Ching-An Cheng"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=pi0h7Q39h9 | @inproceedings{
watahiki2023leveraging,
title={Leveraging Behavioral Cloning for Representation Alignment in Cross-Domain Policy Transfer},
author={Hayato Watahiki and Ryo Iwase and Ryosuke Unno and Yoshimasa Tsuruoka},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=pi0h7Q39h9}
} | The limited transferability of learned policies is a major challenge that restricts the applicability of learning-based solutions in decision-making tasks. In this paper, we present a simple method for aligning latent state representations across different domains using unaligned trajectories of proxy tasks. Once the alignment process is completed, policies trained on the shared representation can be transferred to another domain without further interaction. Our key finding is that multi-domain behavioral cloning is a powerful means of shaping a shared latent space. We also observe that the commonly used domain discriminative objective for distribution matching can be overly restrictive, potentially disrupting the latent state structure of each domain. As an alternative, we propose to use maximum mean discrepancy for regularization. Since our method focuses on capturing shared structures, it does not require discovering the exact cross-domain correspondence that existing methods aim for. Furthermore, our approach involves training only a single multi-domain policy, making it easy to extend. We evaluate our method across various domain shifts, including cross-robot and cross-viewpoint settings, and demonstrate that our approach outperforms existing methods that employ adversarial domain translation. We also conduct ablation studies to investigate the effectiveness of each loss component for different domain shifts. | Leveraging Behavioral Cloning for Representation Alignment in Cross-Domain Policy Transfer | [
"Hayato Watahiki",
"Ryo Iwase",
"Ryosuke Unno",
"Yoshimasa Tsuruoka"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ozqaF9YBce | @inproceedings{
bhatia2023rl,
title={{RL}\${\textasciicircum}3\$: Boosting Meta Reinforcement Learning via {RL} inside {RL}\${\textasciicircum}2\$},
author={Abhinav Bhatia and Samer Nashed and Shlomo Zilberstein},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=ozqaF9YBce}
} | Meta reinforcement learning (meta-RL) methods such as RL$^2$ have emerged as promising approaches for learning data-efficient RL algorithms tailored to a given task distribution. However, these RL algorithms struggle with long-horizon tasks and out-of-distribution tasks since they rely on recurrent neural networks to process the sequence of experiences instead of summarizing them into general RL components such as value functions. Moreover, even transformers have a practical limit to the length of histories they can efficiently reason about before training and inference costs become prohibitive. In contrast, traditional RL algorithms are data-inefficient since they do not leverage domain knowledge, but they do converge to an optimal policy as more data becomes available. In this paper, we propose RL$^3$, a principled hybrid approach that combines traditional RL and meta-RL by incorporating task-specific action-values learned through traditional RL as an input to the meta-RL neural network. We show that RL$^3$ earns greater cumulative reward on long-horizon and out-of-distribution tasks compared to RL$^2$, while maintaining the efficiency of the latter in the short term. Experiments are conducted on both custom and benchmark discrete domains from the meta-RL literature that exhibit a range of short-term, long-term, and complex dependencies. | RL^3: Boosting Meta Reinforcement Learning via RL inside RL^2 | [
"Abhinav Bhatia",
"Samer Nashed",
"Shlomo Zilberstein"
] | Workshop/GenPlan | [
"https://github.com/bhatiaabhinav/rl3"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=oxvgIVWGo1 | @inproceedings{
li2023understanding,
title={Understanding Representations Pretrained with Auxiliary Losses for Embodied Agent Planning},
author={Yuxuan Li and Luca Weihs},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=oxvgIVWGo1}
} | Pretrained representations from large-scale vision models have boosted the performance of downstream embodied policy learning. We look to understand whether additional self-supervised pretraining on exploration trajectories can build on these general-purpose visual representations to better support embodied planning in realistic environments. We evaluated four common auxiliary losses in embodied AI, two hindsight-based losses, and a standard imitation learning loss, by pretraining the agent's visual compression module and state belief representations with each objective and using CLIP as a representative visual backbone. The learned representations are then frozen for downstream multi-step evaluation on two goal-directed tasks. Surprisingly, we find that imitation learning on these exploration trajectories out-performs all other auxiliary losses even despite the exploration trajectories being dissimilar from the downstream tasks. This suggests that imitation of exploration may be ''all you need'' for building powerful planning representations. Additionally, we find that popular auxiliary losses can benefit from simple modifications to improve their support for downstream planning ability. | Understanding Representations Pretrained with Auxiliary Losses for Embodied Agent Planning | [
"Yuxuan Li",
"Luca Weihs"
] | Workshop/GenPlan | 2312.10069 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=oMkUQKfsCU | @inproceedings{
patil2023contrastive,
title={Contrastive Abstraction for Reinforcement Learning},
author={Vihang Patil and Markus Hofmarcher and Elisabeth Rumetshofer and Sepp Hochreiter},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=oMkUQKfsCU}
} | Learning agents with reinforcement learning is difficult when dealing with long trajectories that involve a large number of states. To address these learning problems effectively, the number of states can be reduced by abstract representations that cluster states. In principle, deep reinforcement learning can find abstract states, but end-to-end learning is unstable. We propose contrastive abstraction learning to find abstract states, where we assume that successive states in a trajectory belong to the same abstract state. Such abstract states may be basic locations, achieved subgoals, inventory, or health conditions. *Contrastive abstraction learning* first constructs clusters of state representations by contrastive learning and then applies modern Hopfield networks to determine the abstract states. The first phase of *contrastive abstraction learning* is self-supervised learning, where contrastive learning forces states with sequential proximity to have similar representations. The second phase uses modern Hopfield networks to map similar state representations to the same fixed point, i.e.\ to an abstract state. The level of abstraction can be adjusted by determining the number of fixed points of the modern Hopfield network. Furthermore, *contrastive abstraction learning* does not require rewards and facilitates efficient reinforcement learning for a wide range of downstream tasks. Our experiments demonstrate the effectiveness of *contrastive abstraction learning* for reinforcement learning. | Contrastive Abstraction for Reinforcement Learning | [
"Vihang Patil",
"Markus Hofmarcher",
"Elisabeth Rumetshofer",
"Sepp Hochreiter"
] | Workshop/GenPlan | 2410.00704 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=mntDNQ5ujE | @inproceedings{
yang2023workinprogress,
title={Work-in-Progress: Using Symbolic Planning with Deep {RL} to Improve Learning},
author={Tianpei Yang and Srijita Das and Christabel Wayllace and Matthew Taylor},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=mntDNQ5ujE}
} | Deep Reinforcement Learning (DRL) has achieved impressive success across a wide range of domains. However, it is still faced with the sample-inefficiency problem that requires massive training samples to learn the optimal policy. Furthermore, the trained policy is highly dependent on the training environment, which limits the generalization. In this paper, we propose the Planner-guided RL (PRL) approach to explore how symbolic planning can help DRL in terms of efficiency and generalization. Our PRL is a two-level structure that incorporates any symbolic planner as the meta-controller to derive the subgoals. The low-level controller learns how to achieve the subgoals. We evaluate PRL on Montezuma's Revenge and results show that PRL outperforms previous hierarchical methods. The evaluation of generalization is a work in progress. | Work-in-Progress: Using Symbolic Planning with Deep RL to Improve Learning | [
"Tianpei Yang",
"Srijita Das",
"Christabel Wayllace",
"Matthew Taylor"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=mLXOy1iJz8 | @inproceedings{
chen2023graph,
title={Graph Neural Networks and Graph Kernels For Learning Heuristics: Is there a difference?},
author={Dillon Ze Chen and Felipe Trevizan and Sylvie Thiebaux},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=mLXOy1iJz8}
} | Graph neural networks (GNNs) have been used in various works for learning heuristics to guide
search for planning. However, they are hindered by their slow evaluation speed and their limited
expressiveness. It is also a known fact that the expressiveness of common GNNs is bounded by the
Weisfeiler-Lehman (WL) algorithm for testing graph isomorphism, with which one can generate
features for graphs. Thus, one may ask how do GNNs compare against machine learning models
operating on WL features of planning problems represented as graphs? Our experiments show that
linear models with WL features outperform GNN models for learning heuristics for planning in the
learning track of the 2023 International Planning Competition (IPC). Most notably, our model
WL-GOOSE is the first model in the learning for planning literature which can reliably learn
heuristics from scratch that are competitive with $h^{\text{FF}}$ on problem sizes much larger than those
seen in the training set. | Graph Neural Networks and Graph Kernels For Learning Heuristics: Is there a difference? | [
"Dillon Ze Chen",
"Felipe Trevizan",
"Sylvie Thiebaux"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=iOMGogpgi2 | @inproceedings{
khetarpal2023pomrl,
title={{POMRL}: No-Regret Learning-to-Plan with Increasing Horizons},
author={Khimya Khetarpal and Claire Vernade and Brendan O'Donoghue and Satinder Singh and Tom Zahavy},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=iOMGogpgi2}
} | We study the problem of planning under model uncertainty in an online meta-reinforcement learning (RL) setting where an agent is presented with a sequence of related tasks with limited interactions per task. The agent can use its experience in each task and across tasks to estimate both the transition model and the distribution over tasks. We propose an algorithm to meta-learn the underlying structure across tasks, utilize it to plan in each task, and upper-bound the regret of the planning loss. Our bound suggests that the average regret over tasks decreases as the number of tasks increases and as the tasks are more similar. In the classical single-task setting, it is known that the planning horizon should depend on the estimated model's accuracy, that is, on the number of samples within task. We generalize this finding to meta-RL and study this dependence of planning horizons on the number of tasks. Based on our theoretical findings, we derive heuristics for selecting slowly increasing discount factors, and we validate its significance empirically. | POMRL: No-Regret Learning-to-Plan with Increasing Horizons | [
"Khimya Khetarpal",
"Claire Vernade",
"Brendan O'Donoghue",
"Satinder Singh",
"Tom Zahavy"
] | Workshop/GenPlan | 2212.14530 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=hbrE0MFsDV | @inproceedings{
shah2023learning,
title={Learning How to Create Generalizable Hierarchies for Robot Planning},
author={Naman Shah and Siddharth Srivastava},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=hbrE0MFsDV}
} | This paper addresses the problem of inventing and using hierarchical representations for stochastic robot-planning problems. Rather than using hand-coded state or action representations as input, it presents new methods for learning how to create a generalizable high-level action representation for long-horizon, sparse reward robot planning problems in stochastic settings with unknown dynamics. After training, this system yields a robot-class-specific but environment independent planning system that generalizes to different robots, environments, and problem instances. Given new problem instances in unseen stochastic environments, it first creates zero-shot options (without any experience on the new environment) with dense pseudo-rewards and then uses them to solve the input problem in a hierarchical planning and refinement process. Theoretical results identify sufficient conditions for completeness of the presented approach. Extensive empirical analysis shows that even in settings that go beyond these sufficient conditions, this approach convincingly outperforms baselines by $2\times$ in terms of solution time with orders of magnitude improvement in solution quality. | Learning How to Create Generalizable Hierarchies for Robot Planning | [
"Naman Shah",
"Siddharth Srivastava"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=gWhXByDv2n | @inproceedings{
pallagani2023plansformer,
title={Plansformer: Generating Symbolic Plans using Transformers},
author={Vishal Pallagani and Bharath Muppasani and Keerthiram Murugesan and Francesca Rossi and Lior Horesh and Biplav Srivastava and Francesco Fabiano and Andrea Loreggia},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=gWhXByDv2n}
} | Large Language Models (LLMs) have been the subject of active research, significantly advancing the field of Natural Language Processing (NLP). From BERT to BLOOM, LLMs have surpassed state-of-the-art results in various natural language tasks such as question answering, summarization, and text generation. Many ongoing efforts focus on understanding LLMs' capabilities, including their knowledge of the world, syntax, and semantics. However, extending the textual prowess of LLMs to symbolic reasoning has been slow and predominantly focused on tackling problems related to the mathematical field. In this paper, we explore the use of LLMs for automated planning - a branch of AI concerned with the realization of action sequences (plans) to achieve a goal, typically executed by intelligent agents, autonomous robots, and unmanned vehicles. We introduce Plansformer, an LLM fine-tuned on planning problems and capable of generating plans with favorable behavior in terms of correctness and length with reduced knowledge-engineering efforts. We also demonstrate the adaptability of Plansformer in solving different planning domains with varying complexities, owing to the transfer learning abilities of LLMs. For one configuration of Plansformer, we achieve ~97\% valid plans, out of which ~95\% are optimal for Towers of Hanoi - a puzzle-solving domain. | Plansformer: Generating Symbolic Plans using Transformers | [
"Vishal Pallagani",
"Bharath Muppasani",
"Keerthiram Murugesan",
"Francesca Rossi",
"Lior Horesh",
"Biplav Srivastava",
"Francesco Fabiano",
"Andrea Loreggia"
] | Workshop/GenPlan | 2212.08681 | [
""
] | https://huggingface.co/papers/2212.08681 | 1 | 1 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=fYUWJHNkdA | @inproceedings{
nafi2023reinforcement,
title={Reinforcement Learning with Augmentation Invariant Representation: A Non-contrastive Approach},
author={Nasik Muhammad Nafi and William Hsu},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=fYUWJHNkdA}
} | Data augmentation has been proven as an effective measure to improve generalization performance in reinforcement learning (RL). However, recent approaches directly use the augmented data to learn the value estimate or regularize the estimation, often ignoring the core essence that the model needs to learn that augmented data indeed represents the same state. In this work, we present RAIR: Reinforcement learning with Augmentation Invariant Representation that disentangles the representation learning task from the RL task and aims to learn similar latent representations for the original observation and the augmented one. Our approach learns the representation of high-dimensional visual observations in a non-contrastive self-supervised way combined with the standard RL objective. In particular, RAIR gradually pushes the latent representation of an observation closer to the representation produced for the corresponding augmented observations. Thus, our agent is more resilient to the changes in the environment. We evaluate RAIR on all sixteen environments from the RL generalization benchmark Procgen. The experimental results indicate that RAIR outperforms PPO and other data augmentation-based approaches under the standard evaluation protocol. | Reinforcement Learning with Augmentation Invariant Representation: A Non-contrastive Approach | [
"Nasik Muhammad Nafi",
"William Hsu"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=fWSPZh6PI1 | @inproceedings{
lin2023addressing,
title={Addressing Long-Horizon Tasks by Integrating Program Synthesis and State Machines},
author={Yu-An Lin and Chen-Tao Lee and Guan-Ting Liu and Pu-Jen Cheng and Shao-Hua Sun},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=fWSPZh6PI1}
} | Deep reinforcement learning excels in various domains but lacks generalizability and interpretability. Programmatic RL (Trivedi et al., 2021; Liu et al., 2023) methods reformulate solving RL tasks as synthesizing interpretable programs that can be executed in the environments. Despite encouraging results, these methods are limited to short-horizon tasks. On the other hand, representing RL policies using state machines (Inala et al., 2020) can inductively generalize to long-horizon tasks; however, it struggles to scale up to acquire diverse and complex behaviors. This work proposes Program Machine Policies (POMPs), which bridge the advantages of programmatic RL and state machine policies, allowing for the representation of complex behaviors and the handling of long-term tasks. Specifically, we introduce a method that can retrieve a set of effective, diverse, compatible programs. Then, we use these programs as modes of a state machine and learn a transition function to transition among mode programs, allowing for capturing long-horizon repetitive behaviors. Our proposed framework outperforms programmatic RL and deep RL baselines on various tasks and demonstrates the ability to inductively generalize to even longer horizons without any fine-tuning. Ablation studies justify the effectiveness of our proposed search algorithm for retrieving a set of programs as modes. | Addressing Long-Horizon Tasks by Integrating Program Synthesis and State Machines | [
"Yu-An Lin",
"Chen-Tao Lee",
"Guan-Ting Liu",
"Pu-Jen Cheng",
"Shao-Hua Sun"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=fNEamfVsu6 | @inproceedings{
sch{\"a}fer2023learning,
title={Learning Task Embeddings for Teamwork Adaptation in Multi-Agent Reinforcement Learning},
author={Lukas Sch{\"a}fer and Filippos Christianos and Amos Storkey and Stefano Albrecht},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=fNEamfVsu6}
} | Successful deployment of multi-agent reinforcement learning often requires agents to adapt their behaviour. In this work, we discuss the problem of teamwork adaptation in which a team of agents needs to adapt their policies to solve novel tasks with limited fine-tuning. Motivated by the intuition that agents need to be able to identify and distinguish tasks in order to adapt their behaviour to the current task, we propose to learn multi-agent task embeddings (MATE). These task embeddings are trained using an encoder-decoder architecture optimised for reconstruction of the transition and reward functions which uniquely identify tasks. We show that a team of agents is able to adapt to novel tasks when provided with task embeddings. We propose three MATE training paradigms: independent MATE, centralised MATE, and mixed MATE which vary in the information used for the task encoding. We show that the embeddings learned by MATE identify tasks and provide useful information which agents leverage during adaptation to novel tasks. | Learning Task Embeddings for Teamwork Adaptation in Multi-Agent Reinforcement Learning | [
"Lukas Schäfer",
"Filippos Christianos",
"Amos Storkey",
"Stefano Albrecht"
] | Workshop/GenPlan | 2207.02249 | [
"https://github.com/uoe-agents/mate"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eIROHscxeg | @inproceedings{
caglar2023towards,
title={Towards More Likely Models for {AI} Planning},
author={Turgay Caglar and Sirine Belhaj and Tathagata Chakraborti and Michael Katz and Sarath Sreedharan},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=eIROHscxeg}
} | This is the first work to look at the application of large language models (LLMs) for the purpose of model space edits in automated planning tasks. To set the stage for this sangam, we start by enumerating the different flavors of model space problems that have been studied so far in the AI planning literature and explore the effect of an LLM on those tasks with detailed illustrative examples. We also empirically demonstrate how the performance of an LLM contrasts with combinatorial search (CS) -- an approach that has been traditionally used to solve model space tasks in planning, both with the LLM in the role of a standalone model space reasoner as well as in the role of a statistical modeling tool in concert with the CS approach as part of a two-stage process. Our experiments show promising results suggesting further forays of LLMs into the exciting world of model space reasoning for planning tasks in the future. | Towards More Likely Models for AI Planning | [
"Turgay Caglar",
"Sirine Belhaj",
"Tathagata Chakraborti",
"Michael Katz",
"Sarath Sreedharan"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=eDZJTdUsfe | @inproceedings{
kirsch2023towards,
title={Towards General-Purpose In-Context Learning Agents},
author={Louis Kirsch and James Harrison and C. Freeman and Jascha Sohl-Dickstein and J{\"u}rgen Schmidhuber},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=eDZJTdUsfe}
} | Reinforcement Learning (RL) algorithms are usually hand-crafted, driven by the research and engineering of humans. An alternative approach is to automate this research process via meta-learning. A particularly ambitious objective is to automatically discover new RL algorithms from scratch that use in-context learning to learn-how-to-learn entirely from data while also generalizing to a wide range of environments. Those RL algorithms are implemented entirely in neural networks, by conditioning on previous experience from the environment, without any explicit optimization-based routine at meta-test time. To achieve generalization, this requires a broad task distribution of diverse and challenging environments. Our Transformer-based Generally Learning Agents (GLAs) are an important first step in this direction. Our GLAs are meta-trained using supervised learning techniques on an offline dataset with experiences from RL environments that is augmented with random projections to generate task diversity. During meta-testing our agents perform in-context meta-RL on entirely different robotic control problems such as Reacher, Cartpole, or HalfCheetah that were not in the meta-training distribution. | Towards General-Purpose In-Context Learning Agents | [
"Louis Kirsch",
"James Harrison",
"C. Freeman",
"Jascha Sohl-Dickstein",
"Jürgen Schmidhuber"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=cTtMNEn2Kr | @inproceedings{
chen2023goose,
title={{GOOSE}: Learning Domain-Independent Heuristics},
author={Dillon Ze Chen and Sylvie Thiebaux and Felipe Trevizan},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=cTtMNEn2Kr}
} | We present three novel graph representations of planning tasks suitable for learning domain-independent heuristics using Graph Neural Networks (GNNs) to guide search. In particular, to mitigate the issues caused by large grounded GNNs we present the first method for learning domain-independent heuristics with only the lifted representation of a planning task. We also provide a theoretical analysis of the expressiveness of our models, showing that some are more powerful than STRIPS-HGN, the only other existing model for learning domain-independent heuristics. Our experiments show that our heuristics generalise to much larger problems than those in the training set, vastly surpassing STRIPS-HGN heuristics. | GOOSE: Learning Domain-Independent Heuristics | [
"Dillon Ze Chen",
"Sylvie Thiebaux",
"Felipe Trevizan"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=boub8VqmZu | @inproceedings{
verma2023learning,
title={Learning {AI}-System Capabilities under Stochasticity},
author={Pulkit Verma and Rushang Karia and Gaurav Vipat and Anmol Gupta and Siddharth Srivastava},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=boub8VqmZu}
} | Learning interpretable generalizable models of sequential decision-making agents is essential for user-driven assessment as well as for continual agent-design processes in several AI applications. Discovering an agent's broad capabilities in terms of concepts a user understands and summarizing them for a user is a comparatively new solution approach for agent assessment. Prior work on this topic focuses on deterministic settings, or settings where the names of the agent's capabilities are already known, or situations where the learning system has access to only passively collected data regarding the agent's behavior. These settings result in a limited scope and/or accuracy of the learned models. This paper presents an approach for discovering a black-box sequential decision making agent's capabilities and interactively learning an interpretable model of the agent in stochastic settings. Our approach uses an initial set of observations to discover the agent's capabilities and a hierarchical querying process to learn a probability distribution of the discovered stochastic capabilities. Our evaluation demonstrates that our method learns lifted SDM models with complex capabilities accurately. | Learning AI-System Capabilities under Stochasticity | [
"Pulkit Verma",
"Rushang Karia",
"Gaurav Vipat",
"Anmol Gupta",
"Siddharth Srivastava"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=bFPbN1nWgU | @inproceedings{
karia2023epistemic,
title={Epistemic Exploration for Generalizable Planning and Learning in Non-Stationary Stochastic Settings},
author={Rushang Karia and Pulkit Verma and Gaurav Vipat and Siddharth Srivastava},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=bFPbN1nWgU}
} | Reinforcement Learning (RL) provides a convenient framework for sequential decision making when closed-form transition dynamics are unavailable and can frequently change. However, the high sample complexity of RL approaches limits their utility in the real-world. This paper presents an approach that performs meta-level exploration in the space of models and uses the learned models to compute policies. Our approach interleaves learning and planning allowing data-efficient, task-focused sample collection in the presence of non-stationarity. We conduct an empirical evaluation on benchmark domains and show that our approach significantly outperforms baselines in sample complexity and easily adapts to changing transition systems across tasks. | Epistemic Exploration for Generalizable Planning and Learning in Non-Stationary Stochastic Settings | [
"Rushang Karia",
"Pulkit Verma",
"Gaurav Vipat",
"Siddharth Srivastava"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ZGCNLgxqaU | @inproceedings{
azran2023contextual,
title={Contextual Pre-Planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning},
author={Guy Azran and Mohamad Hosein Danesh and Stefano Albrecht and Sarah Keren},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=ZGCNLgxqaU}
} | Recent studies show that deep reinforcement learning (DRL) agents tend to overfit to the task on which they were trained and fail to adapt to minor environment changes. To expedite learning when transferring to unseen tasks, we propose a novel approach to representing the current task using reward machines (RM), state machine abstractions that induce subtasks based on the current task’s rewards and dynamics. Our method provides agents with symbolic representations of optimal transitions from their current abstract state and rewards them for achieving these transitions. These representations are shared across tasks, allowing agents to exploit knowledge of previously encountered symbols and transitions, thus enhancing transfer. Our empirical evaluation shows that our representations improve sample efficiency and few-shot transfer in a variety of domains. | Contextual Pre-Planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning | [
"Guy Azran",
"Mohamad Hosein Danesh",
"Stefano Albrecht",
"Sarah Keren"
] | Workshop/GenPlan | 2307.05209 | [
"https://github.com/CLAIR-LAB-TECHNION/multi_taxi"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=XwJ44iYC7A | @inproceedings{
quartey2023exploiting,
title={Exploiting Contextual Structure to Generate Useful Auxiliary Tasks},
author={Benedict Quartey and Ankit Shah and George Konidaris},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=XwJ44iYC7A}
} | Reinforcement learning requires interaction with environments, which can be prohibitively expensive, especially in robotics. This constraint necessitates approaches that work with limited environmental interaction by maximizing the reuse of previous experiences. We propose an approach that maximizes experience reuse while learning to solve a given task by generating and simultaneously learning useful auxiliary tasks. To generate these tasks, we construct an abstract temporal logic representation of the given task and leverage large language models to generate context-aware object embeddings that facilitate object replacements. Counterfactual reasoning and off-policy methods allow us to simultaneously learn these auxiliary tasks while solving the given target task. We combine these insights into a novel framework for multitask reinforcement learning and experimentally show that our generated auxiliary tasks share similar underlying exploration requirements as the given task, thereby maximizing the utility of directed exploration. Our approach allows agents to automatically learn additional useful policies without extra environment interaction. | Exploiting Contextual Structure to Generate Useful Auxiliary Tasks | [
"Benedict Quartey",
"Ankit Shah",
"George Konidaris"
] | Workshop/GenPlan | 2303.05038 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=XXcVWRtfBn | @inproceedings{
zisselman2023explore,
title={Explore to Generalize in Zero-Shot {RL}},
author={Ev Zisselman and Itai Lavie and Daniel Soudry and Aviv Tamar},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=XXcVWRtfBn}
} | We study zero-shot generalization in reinforcement learning---optimizing a policy on a set of training tasks to perform well on a similar but unseen test task.
To mitigate overfitting, previous work explored different notions of invariance to the task. However, on problems such as the ProcGen Maze, an adequate solution that is invariant to the task visualization does not exist, and therefore invariance-based approaches fail.
Our insight is that learning a policy that effectively $\textit{explores}$ the domain is harder to memorize than a policy that maximizes reward for a specific task, and therefore we expect such learned behavior to generalize well; we indeed demonstrate this empirically on several domains that are difficult for invariance-based approaches. Our $\textit{Explore to Generalize}$ algorithm (ExpGen) builds on this insight: we train an additional ensemble of agents that optimize reward. At test time, either the ensemble agrees on an action, and we generalize well, or we take exploratory actions, which generalize well and drive us to a novel part of the state space, where the ensemble may potentially agree again. We show that our approach is the state-of-the-art on tasks of the ProcGen challenge that have thus far eluded effective generalization, yielding a success rate of 83% on the Maze task and 74% on Heist with $200$ training levels. ExpGen can also be combined with an invariance based approach to gain the best of both worlds, setting new state-of-the-art results on ProcGen. | Explore to Generalize in Zero-Shot RL | [
"Ev Zisselman",
"Itai Lavie",
"Daniel Soudry",
"Aviv Tamar"
] | Workshop/GenPlan | 2306.03072 | [
"https://github.com/evzissel/expgen"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=X2JKKLrAmO | @inproceedings{
li2023normalization,
title={Normalization Enhances Generalization in Visual Reinforcement Learning},
author={Lu Li and Jiafei Lyu and Guozheng Ma and Zilin Wang and Zhenjie Yang and Xiu Li and Zhiheng Li},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=X2JKKLrAmO}
} | Recent advances in visual reinforcement learning (RL) have led to impressive success in handling complex tasks. However, these methods have demonstrated limited generalization capability to visual disturbances, which poses a significant challenge for their real-world application and adaptability. Though normalization techniques have demonstrated huge success in supervised and unsupervised learning, their applications in visual RL are still scarce. In this paper, we explore the potential benefits of integrating normalization into visual RL methods with respect to generalization performance. We find that, perhaps surprisingly, incorporating suitable normalization techniques is sufficient to enhance the generalization capabilities, without any additional special design. We utilize the combination of two normalization techniques, CrossNorm and SelfNorm, for generalizable visual RL. Extensive experiments are conducted on DMControl Generalization Benchmark and CARLA to validate the effectiveness of our method. We show that our method significantly improves generalization capability while only marginally affecting sample efficiency. In particular, when integrated with DrQ-v2, our method enhances the test performance of DrQ-v2 on CARLA across various scenarios, from 14% of the training performance to 97%. | Normalization Enhances Generalization in Visual Reinforcement Learning | [
"Lu Li",
"Jiafei Lyu",
"Guozheng Ma",
"Zilin Wang",
"Zhenjie Yang",
"Xiu Li",
"Zhiheng Li"
] | Workshop/GenPlan | 2306.00656 | [
"https://github.com/lilucse/Normalization-Enhances-Generalization-in-Visual-Reinforcement-Learning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=W1fyxrFx90 | @inproceedings{
yunis2023subwords,
title={Subwords as Skills: Tokenization for Sparse-Reward Reinforcement Learning},
author={David Yunis and Justin Jung and Falcon Dai and Matthew Walter},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=W1fyxrFx90}
} | Exploration in sparse-reward reinforcement learning (RL) is difficult due to the need for long, coordinated sequences of actions in order to achieve any reward. Moreover, in continuous action spaces there are an infinite number of possible actions, which only increases the difficulty of exploration. One class of methods designed to address these issues forms temporally extended actions, often called skills, from interaction data collected in the same domain, and optimizes a policy on top of this new action space. Such methods require a lengthy pretraining phase in order to form the skills before reinforcement learning can begin. Given prior evidence that the full range of the continuous action space is not required in such tasks, we propose a novel approach to skill-generation with two components. First we discretize the action space through clustering, and second we leverage a tokenization technique borrowed from natural language processing to generate temporally extended actions. Using this as an action-space for RL outperforms comparable skill-based approaches in several challenging sparse-reward domains, and requires orders-of-magnitude less computation. | Subwords as Skills: Tokenization for Sparse-Reward Reinforcement Learning | [
"David Yunis",
"Justin Jung",
"Falcon Dai",
"Matthew Walter"
] | Workshop/GenPlan | 2309.04459 | [
"https://github.com/dyunis/subwords_as_skills"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=W0bhHvQK60 | @inproceedings{
eysenbach2023contrastive,
title={Contrastive Representations Make Planning Easy},
author={Benjamin Eysenbach and Vivek Myers and Sergey Levine and Ruslan Salakhutdinov},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=W0bhHvQK60}
} | Probabilistic inference over time series data is challenging when observations are high-dimensional. In this paper, we show how inference questions relating to prediction and planning can have compact, closed form solutions in terms of learned representations. The key idea is to apply a variant of contrastive learning to time series data. Prior work already shows that the representations learned by contrastive learning encode a probability ratio. By first extending this analysis to show that the marginal distribution over representations is Gaussian, we can then prove that conditional distribution of future representations is also Gaussian. Taken together, these results show that a variant of temporal contrastive learning results in representations distributed according to a Gaussian Markov chain, a graphical model where inference (e.g., filtering, smoothing) has closed form solutions. For example, in one special case the problem of trajectory inference simply corresponds to linear interpolation of the initial and final state representations. We provide brief empirical results validating our theory. | Contrastive Representations Make Planning Easy | [
"Benjamin Eysenbach",
"Vivek Myers",
"Sergey Levine",
"Ruslan Salakhutdinov"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RaHA0fJMbL | @inproceedings{
yao2023inverse,
title={Inverse Reinforcement Learning with Multiple Planning Horizons},
author={Jiayu Yao and Finale Doshi-Velez and Barbara Engelhardt},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=RaHA0fJMbL}
} | In this work, we study an inverse reinforcement learning (IRL) problem where the experts are planning \textit{under a shared reward function but with different, unknown planning horizons}. Without the knowledge of discount factors, the reward function has a larger feasible solution set, which makes it harder to identify a reward function. To overcome this challenge, we develop an algorithm that, in practice, can learn a reward function similar to the true reward function. We give an empirical characterization of the identifiability and generalizability of the feasible set of the reward function. | Inverse Reinforcement Learning with Multiple Planning Horizons | [
"Jiayu Yao",
"Finale Doshi-Velez",
"Barbara Engelhardt"
] | Workshop/GenPlan | 2409.18051 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RN7IqUMlOq | @inproceedings{
deng2023stochastic,
title={Stochastic Safe Action Model Learning},
author={Zihao Deng and Brendan Juba},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=RN7IqUMlOq}
} | Hand-crafting models of interactive domains is challenging, especially when the dynamics of the domain are stochastic. Therefore, it's useful to be able to automatically learn such models instead. In this work, we propose an algorithm to learn stochastic planning models where the distribution over the sets of effects for each action has a small support, but the sets may set values to an arbitrary number of state attributes (a.k.a. fluents). This class captures the benchmark domains used in stochastic planning, in contrast to the prior work that assumed independence of the effects on individual fluents. Our algorithm has polynomial time and sample complexity when the size of the support is bounded by a constant. Importantly, our learning is safe in that we learn offline from example trajectories and we guarantee that actions are only permitted in states where our model of the dynamics is guaranteed to be accurate. Moreover, we guarantee approximate completeness of the model, in the sense that if the examples are achieving goals from some distribution, then with high probability there will exist plans in our learned model that achieve goals from the same distribution. | Stochastic Safe Action Model Learning | [
"Zihao Deng",
"Brendan Juba"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=QM9fIbFCW6 | @inproceedings{
agostinelli2023learning,
title={Learning Discrete Models for Classical Planning Problems},
author={Forest Agostinelli and Misagh Soltani},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=QM9fIbFCW6}
} | For many sequential decision making domains, planning is often necessary to solve problems. However, for domains such as those encountered in robotics, the transition function, also known as the world model, is often unknown and coding such a model by hand is often impractical. While planning could be done with a world model trained from observed transitions, such approaches are limited by errors accumulating when the model is applied across many timesteps as well as the inability to re-identify states. Furthermore, even given an accurate world model, domain-independent planning methods may not be able to reliably solve problems while domain-specific information required to construct informative heuristics may not be readily available. While methods exist that can learn domain-specific heuristic functions in a largely domain-independent fashion, such as DeepCubeA, these methods assume a given world model and may also assume that the goal is predetermined. To solve these problems, we introduce DeepCubeAI, a domain-independent algorithm that learns a world model that represents states in a discrete latent space, learns a heuristic function that generalizes over start and goal states using this learned model, and combines the learned model and learned heuristic function with search to solve problems. Since the latent space is discrete, we can prevent the accumulation of small errors by rounding and we can re-identify states by simply comparing two binary vectors. In our experiments on a pixel representation of the Rubik's cube and Sokoban, we find that DeepCubeAI is able to apply the model for thousands of steps without accumulating any error. Furthermore, DeepCubeAI solves over 99% of test instances in all domains and generalizes across goal states. | Learning Discrete World Models for Classical Planning Problems | [
"Forest Agostinelli",
"Misagh Soltani"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=PgDbFx9bk8 | @inproceedings{
rodriguez-sanchez2023learning,
title={Learning Abstract World Models for Value-preserving Planning with Options},
author={Rafael Rodriguez-Sanchez and George Konidaris},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=PgDbFx9bk8}
} | General-purpose agents require fine-grained controls and rich sensory inputs to perform a wide range of tasks. However, this complexity often leads to intractable decision-making. Traditionally, agents are provided with task-specific action and observation spaces to mitigate this challenge, but this reduces autonomy.
Instead, agents must be capable of building state-action spaces at the correct abstraction level from their sensorimotor experiences. We leverage the structure of a given set of temporally-extended actions to learn abstract Markov decision processes (MDPs) that operate at a higher level of temporal and state granularity. We characterize state abstractions necessary to ensure that planning with these skills, by simulating trajectories in the abstract MDP, results in policies with bounded value loss in the original MDP.
We evaluate our approach in goal-based navigation environments that require continuous abstract states to plan successfully and show that abstract model learning improves the sample efficiency of planning and learning. | Learning Abstract World Models for Value-preserving Planning with Options | [
"Rafael Rodriguez-Sanchez",
"George Konidaris"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=P6ancmHdwH | @inproceedings{
shelke2023multiagent,
title={Multi-Agent Learning of Efficient Fulfilment and Routing Strategies in E-Commerce},
author={Omkar Shelke and Pranavi Pathakota and Anandsingh Chauhan and Hardik Meisheri and Harshad Khadilkar and Balaraman Ravindran},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=P6ancmHdwH}
} | This paper presents an integrated algorithmic framework for minimising product delivery costs in e-commerce (known as the cost-to-serve or C2S). One of the major challenges in e-commerce is the large volume of spatio-temporally diverse orders from multiple customers, each of which has to be fulfilled from one of several warehouses using a fleet of vehicles. This results in two levels of decision-making: (i) selection of a fulfillment node for each order (including the option of deferral to a future time), and then (ii) routing of vehicles (each of which can carry multiple orders originating from the same warehouse). We propose an approach that combines graph neural networks and reinforcement learning to train the node selection and vehicle routing agents. We include real-world constraints such as warehouse inventory capacity, vehicle characteristics such as travel times, service times, carrying capacity, and customer constraints including time windows for delivery. The complexity of this problem arises from the fact that outcomes (rewards) are driven both by the fulfillment node mapping as well as the routing algorithms, and are spatio-temporally distributed. Our experiments show that this algorithmic pipeline outperforms pure heuristic policies. | Multi-Agent Learning of Efficient Fulfilment and Routing Strategies in E-Commerce | [
"Omkar Shelke",
"Pranavi Pathakota",
"Anandsingh Chauhan",
"Hardik Meisheri",
"Harshad Khadilkar",
"Balaraman Ravindran"
] | Workshop/GenPlan | 2311.16171 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NqvB1U0HSJ | @inproceedings{
liu2023integrating,
title={Integrating Planning and Deep Reinforcement Learning via Automatic Induction of Task Substructures},
author={Jung-Chun Liu and Chi-Hsien Chang and Shao-Hua Sun and Tian-Li Yu},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=NqvB1U0HSJ}
} | Despite recent advancements, deep reinforcement learning (DRL) still struggles at learning sparse-reward goal-directed tasks, while classical planning excels at addressing hierarchical tasks, yet most of the methods rely on assumptions about pre-defined subtasks. To bridge the best of both worlds, we propose a framework that integrates DRL with classical planning by automatically inducing task structures and substructures from a few demonstrations. Specifically, we adopt abstraction mapping formulation and define critical actions that lead to the transition at the abstraction level. The framework induces critical action schemata regarded as subtasks to solve the problems. Symbolic regression is used for substructure induction by employing genetic programming where the program model reflects prior domain knowledge of effect rules. We compare the proposed framework to state-of-the-art DRL algorithms, imitation learning methods, and an exploration approach in various domains. Experimental results on various tasks show that our proposed framework outperforms all the abovementioned algorithms in terms of sample efficiency and task performance. Moreover, our framework achieves strong generalization performance by effectively inducing new rules and composing task structures. Ablation studies justify the design of our induction module and the proposed genetic programming procedure. | Integrating Planning and Deep Reinforcement Learning via Automatic Induction of Task Substructures | [
"Jung-Chun Liu",
"Chi-Hsien Chang",
"Shao-Hua Sun",
"Tian-Li Yu"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MP1CXDOgAF | @inproceedings{
laidlaw2023a,
title={A Theoretical Explanation of Deep {RL} Performance in Stochastic Environments},
author={Cassidy Laidlaw and Banghua Zhu and Stuart Russell and Anca Dragan},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=MP1CXDOgAF}
} | Reinforcement learning (RL) theory has largely focused on proving minimax sample complexity bounds. These require *strategic* exploration algorithms that use relatively limited function classes for representing the policy or value function. Our goal is to explain why deep RL algorithms often perform well in practice, despite using *random* exploration and much more expressive function classes like neural networks. Our work arrives at an explanation by showing that many stochastic MDPs can be solved by performing only a few steps of value iteration on the random policy's Q function and then acting greedily. When this is true, we find that it is possible to separate the *exploration* and *learning* components of RL, making it much easier to analyze. We introduce a new RL algorithm, SQIRL, that iteratively learns a near-optimal policy by exploring randomly to collect rollouts and then performing a limited number of steps of fitted-Q iteration over those rollouts. We find that any regression algorithm that satisfies basic in-distribution generalization properties can be used in SQIRL to efficiently solve common MDPs. This can explain why deep RL works with complex function approximators like neural networks, since it is empirically established that neural networks generalize well in-distribution. Furthermore, SQIRL explains why random exploration works well in practice, since we show many environments can be solved by effectively estimating the random policy's Q-function and then applying zero or a few steps of value iteration. We leverage SQIRL to derive instance-dependent sample complexity bounds for RL that are exponential only in an "effective horizon" of lookahead—which is typically much smaller than the full horizon—and on the complexity of the class used for function approximation. Empirically, we also find that SQIRL performance strongly correlates with PPO and DQN performance in a variety of stochastic environments, supporting that our theoretical analysis is predictive of practical performance. | A Theoretical Explanation of Deep RL Performance in Stochastic Environments | [
"Cassidy Laidlaw",
"Banghua Zhu",
"Stuart Russell",
"Anca Dragan"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=LZjV4rHbzB | @inproceedings{
nayyar2023learning,
title={Learning Generalizable Symbolic Options for Transfer in Reinforcement Learning},
author={Rashmeet Kaur Nayyar and Shivanshu Verma and Siddharth Srivastava},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=LZjV4rHbzB}
} | This paper presents a new approach for Transfer Reinforcement Learning (RL) for Stochastic Shortest Path (SSP) problems in factored domains with unknown transition functions. We take as input a set of problem instances with sparse reward functions. The presented approach first learns a semantically well-defined state abstraction and then uses this abstraction to invent high-level options, to learn abstract policies for executing them, as well as to create abstract symbolic representations for representing them. Given a new problem instance, our overall approach conducts a novel bi-directional search over the learned option representations while also inventing new options as needed. Our main contributions are approaches for continually learning transferable, generalizable knowledge in the form of symbolically represented options, as well as for integrating search techniques with RL to solve new problems by efficiently composing the learned options. Empirical results show that the resulting approach effectively transfers learned knowledge and achieves superior sample efficiency compared to SOTA methods. | Learning Generalizable Symbolic Options for Transfer in Reinforcement Learning | [
"Rashmeet Kaur Nayyar",
"Shivanshu Verma",
"Siddharth Srivastava"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=KulxfrmzDD | @inproceedings{
kushwah2023inductive,
title={Inductive Generalization in Reinforcement Learning from Specifications},
author={Rohit kushwah and Vignesh Subramanian and Suguman Bansal and Subhajit Roy},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=KulxfrmzDD}
} | Reinforcement Learning (RL) from logical specifications is a promising approach to learning control policies for complex long-horizon tasks. While these algorithms showcase remarkable scalability and efficiency in learning, a persistent hurdle lies in their limited ability to generalize the policies they generate. In this work, we present an inductive framework to improve policy generalization from logical specifications. We observe that logical specifications can be used to define a class of inductive tasks known as repeated tasks. These are tasks with similar overarching goals but differing inductively in low-level predicates and distributions. Hence, policies for repeated tasks should also be inductive. To this end, we present a compositional approach that learns policies for unseen repeated tasks by training on few repeated tasks only. Our approach is evaluated on challenging control benchmarks with continuous state and action spaces, showing promising results in handling long-horizon tasks with improved generalization. | Inductive Generalization in Reinforcement Learning from Specifications | [
"Rohit kushwah",
"Vignesh Subramanian",
"Suguman Bansal",
"Subhajit Roy"
] | Workshop/GenPlan | 2406.03651 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=JNIv4ghbGy | @inproceedings{
banerjee2023mermaide,
title={{MERMAIDE}: Learning to Align Learners using Model-Based Meta-Learning},
author={Arundhati Banerjee and Soham Phade and Stefano Ermon and Stephan Zheng},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=JNIv4ghbGy}
} | We study how a principal can efficiently and effectively intervene on the rewards of a previously unseen *learning* agent in order to induce desirable outcomes. This is relevant to many real-world settings like auctions or taxation, where the principal may not know the learning behavior nor the rewards of real people. Moreover, the principal should be few-shot adaptable and minimize the number of interventions, because interventions are often costly. We introduce MERMAIDE, a model-based meta-learning framework to train a principal that can quickly adapt to out-of-distribution agents with different learning strategies and reward functions. We validate this approach step-by-step. First, in a Stackelberg setting with a best-response agent, we show that meta-learning enables quick convergence to the theoretically known Stackelberg equilibrium at test time, although noisy observations severely increase the sample complexity. We then show that our model-based meta-learning approach is cost-effective in intervening on bandit agents with unseen explore-exploit strategies. Finally, we outperform baselines that use either meta-learning or agent behavior modeling, in both $0$-shot and $1$-shot settings with partial agent information. | MERMAIDE: Learning to Align Learners using Model-Based Meta-Learning | [
"Arundhati Banerjee",
"Soham Phade",
"Stefano Ermon",
"Stephan Zheng"
] | Workshop/GenPlan | 2304.04668 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=IpdZ6UX6jd | @inproceedings{
jacob2023modeling,
title={Modeling Boundedly Rational Agents with Latent Inference Budgets},
author={Athul Jacob and Abhishek Gupta and Jacob Andreas},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=IpdZ6UX6jd}
} | We study the problem of modeling a population of agents pursuing unknown goals subject to unknown computational constraints. In standard models of bounded rationality, sub-optimal decision-making is simulated by adding homoscedastic noise to optimal decisions rather than actually simulating constrained inference. In this work, we introduce a latent inference budget model (L-IBM) that models these constraints explicitly, via a latent variable (inferred jointly with a model of agents’ goals) that controls the runtime of an iterative inference algorithm. L-IBMs make it possible to learn agent models using data from diverse populations of suboptimal actors. In three modeling tasks—inferring navigation goals from routes, inferring communicative intents from human utterances, and predicting next moves in human chess games—we show that L-IBMs match or outperforms Boltzmann models of decision-making under uncertainty. Moreover, the inferred inference budgets are themselves meaningful, efficient to compute, and correlated with measures of player skill, partner skill and task difficulty. | Modeling Boundedly Rational Agents with Latent Inference Budgets | [
"Athul Jacob",
"Abhishek Gupta",
"Jacob Andreas"
] | Workshop/GenPlan | 2312.04030 | [
""
] | https://huggingface.co/papers/2312.04030 | 2 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=HX3HZbUXrW | @inproceedings{
zhang2023paddle,
title={{PADDLE}: Logic Program Guided Policy Reuse in Deep Reinforcement Learning},
author={Hao Zhang and Tianpei Yang and YAN ZHENG and Jianye HAO and Matthew E. Taylor},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=HX3HZbUXrW}
} | Learning new skills through previous experience is common in human life, which is the core idea of Transfer Reinforcement Learning (TRL). This requires the agent to learn \emph{when} and \emph{which} source policy is the best to reuse as the target task's policy, and \emph{how} to reuse the source policy. Most TRL methods learn, transfer, and reuse black-box policies, which is hard to explain 1) when to reuse, 2) which source policy is effective, and 3) reduces transfer efficiency. In this paper, we propose a novel TRL method called \textbf{P}rogr\textbf{A}m gui\textbf{D}e\textbf{D} po\textbf{L}icy r\textbf{E}use (PADDLE) that can measure the logic similarities between tasks and transfer knowledge with interpretable cause-effect logic to the target task. To achieve this, we first propose a hybrid decision model that synthesizes high-level logic programs and learns low-level DRL policy to learn multiple source tasks. Second, we estimate the logic similarity between the target task and the source tasks and combine it with the low-level policy similarity to select the appropriate source policy as the guiding policy for the target task. Experimental results show that our method can effectively select the appropriate source tasks to guide learning on the target task, outperforming black-box TRL methods. | PADDLE: Logic Program Guided Policy Reuse in Deep Reinforcement Learning | [
"Hao Zhang",
"Tianpei Yang",
"YAN ZHENG",
"Jianye HAO",
"Matthew E. Taylor"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Ghl9pYaVh5 | @inproceedings{
jin2023minibehavior,
title={Mini-{BEHAVIOR}: A Procedurally Generated Benchmark for Long-horizon Decision-Making in Embodied {AI}},
author={Emily Jin and Jiaheng Hu and Zhuoyi Huang and Ruohan Zhang and Jiajun Wu and Li Fei-Fei and Roberto Mart{\'\i}n-Mart{\'\i}n},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=Ghl9pYaVh5}
} | We present Mini-BEHAVIOR, a novel benchmark for embodied AI that challenges agents to plan and solve complex activities resembling everyday human household tasks. The Mini-BEHAVIOR environment extends the widely used MiniGrid grid world with new modes of actuation, combining navigation and manipulation actions, multiple objects, states, scenes, and activities defined in first-order logic.
Mini-BEHAVIOR implements various household tasks from the original BEHAVIOR benchmark, along with starter code for data collection and reinforcement learning agent training. Together with Mini-BEHAVIOR, we also include a procedural generation mechanism to create countless variations of each task and support the study of plan generalization and open-ended learning. Mini-BEHAVIOR is fast and easy to use and extend, providing the benefits of rapid prototyping while striking a good balance between symbolic-level decision-making and physical realism, complexity, and embodiment constraints found in complex embodied AI benchmarks. Our goal with Mini-BEHAVIOR is to provide the community with a fast, easy-to-use and modify, open-ended benchmark for developing and evaluating decision-making and generalizing planning solutions for embodied AI. Code is available at https://github.com/StanfordVL/mini_behavior. | Mini-BEHAVIOR: A Procedurally Generated Benchmark for Long-horizon Decision-Making in Embodied AI | [
"Emily Jin",
"Jiaheng Hu",
"Zhuoyi Huang",
"Ruohan Zhang",
"Jiajun Wu",
"Li Fei-Fei",
"Roberto Martín-Martín"
] | Workshop/GenPlan | 2310.01824 | [
"https://github.com/stanfordvl/mini_behavior"
] | https://huggingface.co/papers/2310.01824 | 0 | 1 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=GFyCqKhFru | @inproceedings{
lee2023hierarchical,
title={Hierarchical Reinforcement Learning with {AI} Planning Models},
author={Junkyu Lee and Michael Katz and Don Joven Agravante and Miao Liu and Geraud Nangue Tasse and Tim Klinger and Shirin Sohrabi},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=GFyCqKhFru}
} | Deep Reinforcement Learning (DRL) has shown breakthroughs in solving challenging problems, such as pixel-based games and continuous control tasks. In complex environments, infusing prior domain knowledge is essential to achieve sample efficiency and generalization.
Neuro-symbolic AI seeks systematic domain knowledge infusion into neural network-based learning, and existing neuro-symbolic approaches for sequential decision-making leverage hierarchical reinforcement learning (HRL) by infusing symbolically specified prior knowledge on desired trajectories.
However, this requires finding symbolic solutions in RL environments before learning, and it is difficult to handle the divergence between unknown RL dynamics and prior knowledge.
Such shortcomings result in loose and manual neuro-symbolic integration and degrade the generalization capability.
In this paper, we integrate the options framework in HRL with an AI planning model to resolve the shortcomings in earlier approaches and generalize beyond RL environments where pre-specified partial solutions are valid. Our approach defines options from AI planning operators by establishing the connection between the two transition systems in the options framework and the AI planning task. Then, we show an option policy learning method that integrates an AI planner and model-free DRL algorithms with intrinsic rewards, encouraging consistency between the two transition systems.
We design a suite of MiniGrid environments that cover the increasing levels of difficulties in exploration, where our empirical evaluation clearly shows the advantage of HRL with AI planning models.
The code is available at https://github.com/IBM/parl_agents and
https://github.com/IBM/parl_annotations | Hierarchical Reinforcement Learning with AI Planning Models | [
"Junkyu Lee",
"Michael Katz",
"Don Joven Agravante",
"Miao Liu",
"Geraud Nangue Tasse",
"Tim Klinger",
"Shirin Sohrabi"
] | Workshop/GenPlan | 2203.00669 | [
"https://github.com/IBM/parl_agents"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=G8UH41ZzzA | @inproceedings{
juba2023learning,
title={Learning Safe Action Models with Partial Observability},
author={Brendan Juba and Hai S Le and Roni Stern},
booktitle={NeurIPS 2023 Workshop on Generalization in Planning},
year={2023},
url={https://openreview.net/forum?id=G8UH41ZzzA}
} | A common approach for solving planning problems is to model them in a formal language such as the Planning Domain Definition Language (PDDL), and then use an appropriate PDDL planner. Several algorithms for learning PDDL models from observations have been proposed but plans created with these learned models may not be sound. We propose two algorithms for learning PDDL models that are guaranteed to be safe to use even when given observations that include partially observable states. We analyze these algorithms theoretically, characterizing the sample complexity each algorithm requires to guarantee probabilistic completeness. We also show experimentally that our algorithms are often better than FAMA, a state-of-the-art PDDL learning algorithm. | Learning Safe Action Models with Partial Observability | [
"Brendan Juba",
"Hai S Le",
"Roni Stern"
] | Workshop/GenPlan | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |