bibtex_url (null) | proceedings (stringlengths 42–42) | bibtext (stringlengths 197–848) | abstract (stringlengths 303–3.45k) | title (stringlengths 10–159) | authors (sequencelengths 1–34, ⌀) | id (stringclasses, 44 values) | arxiv_id (stringlengths 0–10) | GitHub (sequencelengths 1–1) | paper_page (stringclasses, 899 values) | n_linked_authors (int64, -1 to 13) | upvotes (int64, -1 to 109) | num_comments (int64, -1 to 13) | n_authors (int64, -1 to 92) | Models (sequencelengths 0–100) | Datasets (sequencelengths 0–19) | Spaces (sequencelengths 0–100) | old_Models (sequencelengths 0–100) | old_Datasets (sequencelengths 0–19) | old_Spaces (sequencelengths 0–100) | paper_page_exists_pre_conf (int64, 0 to 1) | type (stringclasses, 2 values) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=RCiRtdERCW | @inproceedings{
krubi{\'n}ski2023basic,
title={Basic Arithmetic Properties in the Space of Language Model Prompts},
author={Mateusz Krubi{\'n}ski},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=RCiRtdERCW}
} | Large pre-trained neural Language Models (LLMs) that can effectively utilize enormous amounts of unlabeled textual data have recently changed the whole field of Natural Language Processing. By utilizing prompting techniques enabled by the in-context learning capabilities, LLMs have been shown to perform on par with dedicated models trained for downstream tasks. One such task is numerical reasoning and, in particular, the ability to conduct basic arithmetic operations. The question we wish to answer is whether the basic properties of arithmetic operations, such as the commutative property, hold in the space of LLM prompts – does asking the LLM to compute 13+37 vs 37+13 result, on average, in the same outcome? In contrast to most previous works, which reported Accuracy only, we take a closer look (MAE, Pearson's R) at the error distribution to better understand the performance with regard to prompt perturbations and scaling laws. | Basic Arithmetic Properties in the Space of Language Model Prompts | [
"Mateusz Krubiński"
] | Workshop/MATH-AI | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ODOJuAM4Qj | @inproceedings{
welleck2023llmstep,
title={llmstep: {LLM} proofstep suggestions in Lean},
author={Sean Welleck and Rahul Saha},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=ODOJuAM4Qj}
} | We present $\texttt{llmstep}$, a tool for suggesting proof steps with a language model in the Lean 4 proof assistant. $\texttt{llmstep}$ is a Lean 4 tactic that sends a user's proof state to a server hosting a language model. The language model generates suggestions, which are checked in Lean and displayed to a user in their development environment. We provide a baseline language model, along with code for fine-tuning and evaluation to support further development. We provide server implementations that run on CPU, a CUDA GPU, or a Google Colab notebook, as a step towards fast, effective language model suggestions for any user. | llmstep: LLM proofstep suggestions in Lean | [
"Sean Welleck",
"Rahul Saha"
] | Workshop/MATH-AI | 2310.18457 | [
"https://github.com/wellecks/llmstep"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NxHl2SPhyT | @inproceedings{
wu2023lemur,
title={Lemur: Integrating Large Language Models in Automated Program Verification},
author={Haoze Wu and Clark Barrett and Nina Narodytska},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=NxHl2SPhyT}
} | The demonstrated code-understanding capability of LLMs raises the question of whether they can be used for automated program verification, a task that typically demands high-level abstract reasoning about program properties that is challenging for verification tools. We propose a general methodology to combine the power of LLMs and automated reasoners for automated program verification. We formally describe this methodology as a set of derivation rules and prove its soundness. We instantiate the calculus as a sound automated verification procedure, which led to practical improvements on a set of synthetic and competition benchmarks. | Lemur: Integrating Large Language Models in Automated Program Verification | [
"Haoze Wu",
"Clark Barrett",
"Nina Narodytska"
] | Workshop/MATH-AI | [
"https://github.com/wu-haoze/lemur-program-verification"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Ni1KnzY5KN | @inproceedings{
wang2023learning,
title={Learning Multi-Step Reasoning by Solving Arithmetic Tasks},
author={Tianduo Wang and Wei Lu},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=Ni1KnzY5KN}
} | Mathematical reasoning is regarded as a necessary ability for Language Models (LMs). Recent works demonstrate large LMs’ impressive performance in solving math problems. The success is attributed to their Chain-of-Thought (CoT) reasoning abilities, i.e., the ability to decompose complex questions into step-by-step reasoning chains, but such ability seems only to emerge from models with abundant parameters. This work investigates how to incorporate relatively small LMs with the capabilities of multi-step reasoning. We propose to inject such abilities by continually pre-training LMs on a synthetic dataset MsAT which is composed of Multi-step Arithmetic Tasks. Our experiments on four math word problem datasets show the effectiveness of the proposed method in enhancing LMs’ math reasoning abilities. | Learning Multi-Step Reasoning by Solving Arithmetic Tasks | [
"Tianduo Wang",
"Wei Lu"
] | Workshop/MATH-AI | 2306.01707 | [
"https://github.com/TianduoWang/MsAT"
] | https://huggingface.co/papers/2306.01707 | 1 | 1 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=NdSGKZvX3z | @inproceedings{
abdool2023continual,
title={Continual Learning and Out of Distribution Generalization in a Systematic Reasoning Task},
author={Mustafa Abdool and Andrew Nam and James McClelland},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=NdSGKZvX3z}
} | Humans often learn new problem solving strategies from a narrow range of examples and generalize to examples out of the distribution (OOD) used in learning, but such generalization remains a challenge for neural networks. This impacts learning mathematical techniques, which can apply to unbounded problem spaces (e.g. all real numbers). We explore this limitation using neural networks trained on strategies for solving specified cells in $6\times6$ Sudoku puzzles using a novel curriculum, where models first learn two preliminary tasks, then we assess OOD generalization during training on a subset of the set of possible training examples of a more complex solution strategy. Baseline models master the training distribution, but fail to generalize OOD. However, we introduce a combination of extensions that is sufficient to support highly accurate and reliable OOD generalization. These results suggest directions for improving the robustness of models trained with the highly imbalanced data distributions in natural data sets. | Continual Learning and Out of Distribution Generalization in a Systematic Reasoning Task | [
"Mustafa Abdool",
"Andrew Nam",
"James McClelland"
] | Workshop/MATH-AI | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=NYO24rdxLA | @inproceedings{
xue2023vertical,
title={Vertical {AI}-driven Scientific Discovery},
author={Yexiang Xue},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=NYO24rdxLA}
} | Automating scientific discovery has been a grand goal of Artificial Intelligence (AI) and will bring tremendous societal impact if it succeeds. Despite exciting progress, most endeavor in learning scientific equations from experiment data focuses on the horizontal discovery paths, i.e., they directly search for the best equation in the full hypothesis space. Horizontal paths are challenging because of the associated exponentially large search space. Our work explores an alternative vertical path, which builds scientific equations in an incremental way, starting from one that models data in control variable experiments in which most variables are held as constants. It then extends expressions learned in previous generations via adding new independent variables, using new control variable experiments in which these variables are allowed to vary. This vertical path was motivated by human scientific discovery processes. Experimentally, we demonstrate that such vertical discovery paths expedite symbolic regression. It also improves learning physics models describing nano-structure evolution in computational materials science. | Vertical AI-driven Scientific Discovery | [
"Yexiang Xue"
] | Workshop/MATH-AI | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=HkpnfVmjaR | @inproceedings{
okur2023spoken,
title={Spoken Language Understanding Evaluations for Home-Based Basic Math Learning},
author={Eda Okur and Saurav Sahay and Lama Nachman},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=HkpnfVmjaR}
} | Enriching the quality of early childhood education with interactive math learning at home systems, empowered by recent advances in conversational AI technologies, is slowly becoming a reality. With this motivation, we implement a multimodal dialogue system to support play-based learning experiences at home, guiding kids to master basic math concepts. This work explores the Spoken Language Understanding (SLU) pipeline within a task-oriented dialogue system, with cascading Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU) components evaluated on our Kid Space home deployment data with children going through gamified math learning activities. We validate the advantages of a multi-task architecture for NLU and experiment with a diverse set of pretrained language representations for Intent Recognition and Entity Extraction in the math learning domain. To recognize kids' speech in realistic home environments, we investigate several ASR systems, including the Google Cloud and the recent open-source Whisper solutions with varying model sizes. We evaluate the SLU pipeline by testing our best-performing NLU models on noisy ASR output to inspect the challenges of understanding children's speech for math learning in authentic homes. | Spoken Language Understanding Evaluations for Home-Based Basic Math Learning | [
"Eda Okur",
"Saurav Sahay",
"Lama Nachman"
] | Workshop/MATH-AI | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=EUoe9ujR0C | @inproceedings{
frieder2023llms,
title={{LLM}s vs {ITP}s},
author={Simon Frieder and Rashid Alawadhi and Martin Trimmel and Klaus Gy},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=EUoe9ujR0C}
} | Wiedijk's list of 100 theorems provides a benchmark for comparing interactive theorem provers (ITPs) and their mathematics libraries. As shown by the GHOSTS dataset, large language models (LLMs) can also serve as searchable libraries of mathematics, given their capacity to ingest vast amounts of mathematical literature during their pre-training or finetuning phases. ITP libraries are the only other repositories of comparable size and range of mathematical intricacy. This paper presents the first comparison between these two unique mathematical resources, centered on Wiedijk's list. Beyond the intrinsic interest of such a comparison, we discuss the importance of analyzing whether knowledge contained in LLMs (represented by GPT-4 and Claude 2) matches that encoded in ITPs. This analysis contributes thus further to advance the intersection between LLM and ITP technology (examples being tasks like autoformalization, LLM-guided proof generation, or proof completion) by ensuring LLMs possess, beyond ITP code generation capabilities, sufficient mathematical knowledge to carry out the desired formalization. The dataset with our findings, called "LLMKnow", is made available to the public. | LLMs vs ITPs | [
"Simon Frieder",
"Martin Trimmel",
"Rashid Alawadhi",
"Klaus Gy"
] | Workshop/MATH-AI | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=C9X5sXa2k1 | @inproceedings{
song2023towards,
title={Towards Large Language Models as Copilots for Theorem Proving in Lean},
author={Peiyang Song and Kaiyu Yang and Anima Anandkumar},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=C9X5sXa2k1}
} | Theorem proving is an important challenge for large language models (LLMs), as formal proofs can be checked rigorously by proof assistants such as Lean, leaving no room for hallucination. Existing LLM-based provers try to prove theorems in a fully autonomous mode without human intervention. In this mode, they struggle with novel and challenging theorems, for which human insights may be critical. In this paper, we explore LLMs as copilots that assist humans in proving theorems. We introduce Lean Copilot, a framework for running neural network inference in Lean. It enables programmers to build various LLM-based proof automation tools that integrate seamlessly into the workflow of Lean users. Using Lean Copilot, we build tools for suggesting proof steps and completing intermediate proof goals using LLMs. Experimental results demonstrate the effectiveness of our method in assisting humans compared to existing rule-based proof automation in Lean. | Towards Large Language Models as Copilots for Theorem Proving in Lean | [
"Peiyang Song",
"Kaiyu Yang",
"Anima Anandkumar"
] | Workshop/MATH-AI | 2404.12534 | [
"https://github.com/lean-dojo/leancopilot"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BENuWCJuTU | @inproceedings{
upadhyay2023cnn,
title={{CNN} models' sensitivity to numerosity concepts},
author={Neha Upadhyay and Sashank Varma},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=BENuWCJuTU}
} | The nature of number is a classic question in the philosophy of mathematics. Cognitive scientists have shown that numbers are mentally represented as magnitudes organized as a mental number line (MNL). Here we ask whether CNN models, in learning to classify images, also learn about number and numerosity ‘for free’. This was the case. A representative model showed the distance, size, and ratio effects that are the signatures of magnitude representations in humans. An MDS analysis of their latent representations found a close resemblance to the MNL documented in people. These findings challenge the developmental science proposal that numbers are part of the ‘core knowledge’ that all human infants possess, and instead serve as an existence proof of the learnability of numerical concepts. | CNN models' sensitivity to numerosity concepts | [
"Neha Upadhyay",
"Sashank Varma"
] | Workshop/MATH-AI | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Aca2csudEX | @inproceedings{
lu2023chameleon,
title={Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models},
author={Pan Lu and Baolin Peng and Hao Cheng and Michel Galley and Kai-Wei Chang and Ying Nian Wu and Song-Chun Zhu and Jianfeng Gao},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=Aca2csudEX}
} | Large language models (LLMs) have achieved remarkable progress in solving various natural language processing tasks due to emergent reasoning abilities. However, LLMs have inherent limitations as they are incapable of accessing up-to-date information (stored on the Web or in task-specific knowledge bases), using external tools, and performing precise mathematical and logical reasoning. In this paper, we present Chameleon, an AI system that mitigates these limitations by augmenting LLMs with plug-and-play modules for compositional reasoning. Chameleon synthesizes programs by composing various tools (e.g., LLMs, off-the-shelf vision models, web search engines, Python functions, and heuristic-based modules) for accomplishing complex reasoning tasks. At the heart of Chameleon is an LLM-based planner that assembles a sequence of tools to execute to generate the final response. We showcase the effectiveness of Chameleon on two multi-modal knowledge-intensive reasoning tasks: ScienceQA and TabMWP. Chameleon, powered by GPT-4, achieves an 86.54% overall accuracy on ScienceQA, improving the best published few-shot result by 11.37%. On TabMWP, GPT-4-powered Chameleon improves the accuracy by 17.0%, lifting the state of the art to 98.78%. Our analysis also shows that the GPT-4-powered planner exhibits more consistent and rational tool selection via inferring potential constraints from instructions, compared to a ChatGPT-powered planner. | Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models | [
"Pan Lu",
"Baolin Peng",
"Hao Cheng",
"Michel Galley",
"Kai-Wei Chang",
"Ying Nian Wu",
"Song-Chun Zhu",
"Jianfeng Gao"
] | Workshop/MATH-AI | 2304.09842 | [
""
] | https://huggingface.co/papers/2304.09842 | 1 | 1 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=A3W864NIW2 | @inproceedings{
wang2023scibench,
title={{SCIBENCH}: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models},
author={Xiaoxuan Wang and Ziniu Hu and Pan Lu and Yanqiao Zhu and Jieyu Zhang and Satyen Subramaniam and Arjun Loomba and Shichang Zhang and Yizhou Sun and Wei Wang},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=A3W864NIW2}
} | Recent advances in Large Language Models (LLMs) have demonstrated notable progress on many mathematical benchmarks. However, most of these benchmarks only contain problems grounded in junior and senior high school subjects, contain only multiple-choice questions, and are confined to a limited scope of elementary arithmetic operations.
To address these issues, this paper introduces an expansive benchmark suite Scibench that aims to systematically examine the reasoning capabilities required for solving complex scientific problems. Scibench contains two datasets: an open set featuring a range of collegiate-level scientific problems, and a closed set comprising problems from undergraduate-level exams.
Based on the two datasets, we conduct an in-depth benchmarking study of five representative LLMs with various prompting strategies. Furthermore, through a detailed user study, we show that no single prompting strategy significantly outperforms the others and some strategies that demonstrate improvements in certain problem-solving skills could result in declines in other skills. | SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | [
"Xiaoxuan Wang",
"Ziniu Hu",
"Pan Lu",
"Yanqiao Zhu",
"Jieyu Zhang",
"Satyen Subramaniam",
"Arjun Loomba",
"Shichang Zhang",
"Yizhou Sun",
"Wei Wang"
] | Workshop/MATH-AI | 2307.10635 | [
"https://github.com/mandyyyyii/scibench"
] | https://huggingface.co/papers/2307.10635 | 3 | 8 | 0 | 10 | [] | [
"xw27/scibench"
] | [
"LLM360/de-arena",
"tsteffek/de-arena"
] | [] | [
"xw27/scibench"
] | [
"LLM360/de-arena",
"tsteffek/de-arena"
] | 1 | poster |
null | https://openreview.net/forum?id=8tt9KxyV2s | @inproceedings{
ye2023satlm,
title={Sat{LM}: Satisfiability-Aided Language Models Using Declarative Prompting},
author={Xi Ye and Qiaochu Chen and Isil Dillig and Greg Durrett},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=8tt9KxyV2s}
} | Prior work has combined chain-of-thought prompting in large language models (LLMs) with programmatic representations to perform reasoning. While such an approach works well for tasks that only require forward reasoning (e.g., straightforward arithmetic), it is less effective for problems that require more sophisticated planning and search. In this paper, we propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of LLMs. We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer.
By offloading the actual reasoning task to an automated theorem prover, our approach can guarantee the correctness of the answer with respect to the parsed specification and avoid planning errors in the solving process.
We evaluate SatLM on 6 datasets and show that it consistently outperforms program-aided LMs in an imperative paradigm.
In particular, SatLM outperforms program-aided LMs by more than 20% on a challenging subset of the GSM arithmetic reasoning dataset; SatLM also achieves a new SoTA on LSAT and BoardgameQA. | SatLM: Satisfiability-Aided Language Models Using Declarative Prompting | [
"Xi Ye",
"Qiaochu Chen",
"Isil Dillig",
"Greg Durrett"
] | Workshop/MATH-AI | 2305.09656 | [
"https://github.com/xiye17/sat-lm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=8r9HkIi4Rr | @inproceedings{
sawada2023arb,
title={{ARB}: Advanced Reasoning Benchmark for Large Language Models},
author={Tomohiro Sawada and Daniel Paleka and Alexander Havrilla and Pranav Tadepalli and Paula Vidas and Alexander Kranias and John Nay and Kshitij Gupta and Aran Komatsuzaki},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=8r9HkIi4Rr}
} | Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. We find promising agreement between annotators and GPT-4 rubric evaluation scores. | ARB: Advanced Reasoning Benchmark for Large Language Models | [
"Tomohiro Sawada",
"Daniel Paleka",
"Alexander Havrilla",
"Pranav Tadepalli",
"Paula Vidas",
"Alexander Kranias",
"John Nay",
"Kshitij Gupta",
"Aran Komatsuzaki"
] | Workshop/MATH-AI | 2307.13692 | [
""
] | https://huggingface.co/papers/2307.13692 | 5 | 16 | 0 | 9 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=8eeIlrtluJ | @inproceedings{
charton2023learning,
title={Learning the greatest divisor - Explainable predictions in transformers},
author={Francois Charton},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=8eeIlrtluJ}
} | We train small transformers to calculate the greatest common divisor (GCD) of two positive integers, and show that their predictions are fully explainable.
During training, models learn a list $\mathcal D$ of divisors, and predict the largest element of $\mathcal D$ that divides both inputs.
We also show that training distributions have a large impact on performance. Models trained from uniform operands only learn a handful of GCD (up to $38$ out of $100$).
Training from log-uniform operands boosts performance to $73$ correct GCD, and training from a log-uniform distribution of GCD to $91$. | Learning the greatest divisor - Explainable predictions in transformers | [
"Francois Charton"
] | Workshop/MATH-AI | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=5hZTBUtkeh | @inproceedings{
paster2023openwebmath,
title={OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text},
author={Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=5hZTBUtkeh}
} | There is growing evidence that pretraining on high quality, carefully thought-out tokens such as code or mathematics plays an important role in improving the reasoning abilities of large language models. For example, Minerva, a PaLM model finetuned on billions of tokens of mathematical documents from arXiv and the web, reported dramatically improved performance on problems that require quantitative reasoning. However, because all known open source web datasets employ preprocessing that does not faithfully preserve mathematical notation, the benefits of large scale training on quantitative web documents are unavailable to the research community. We introduce OpenWebMath, an open dataset inspired by these works containing 14.7B tokens of mathematical webpages from Common Crawl. We describe in detail our method for extracting text and LaTeX content and removing boilerplate from HTML documents, as well as our methods for quality filtering and deduplication. Additionally, we run small-scale experiments by training 1.4B parameter language models on OpenWebMath, showing that models trained on 14.7B tokens of our dataset surpass the performance of models trained on over 20x the amount of general language data. We hope that our dataset, openly released on the Hugging Face Hub, will help spur advances in the reasoning abilities of large language models. | OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text | [
"Keiran Paster",
"Marco Dos Santos",
"Zhangir Azerbayev",
"Jimmy Ba"
] | Workshop/MATH-AI | 2310.06786 | [
"https://github.com/keirp/OpenWebMath"
] | https://huggingface.co/papers/2310.06786 | 2 | 3 | 0 | 4 | [
"mllmTeam/PhoneLM-0.5B",
"mllmTeam/PhoneLM-1.5B"
] | [
"open-web-math/open-web-math",
"EleutherAI/proof-pile-2",
"SciPhi/AgentSearch-V1",
"xavierdurawa/proof-pile-2-streaming",
"Alignment-Lab-AI/Open-Web-Math",
"BEE-spoke-data/open-web-math-minhash"
] | [] | [
"mllmTeam/PhoneLM-0.5B",
"mllmTeam/PhoneLM-1.5B"
] | [
"open-web-math/open-web-math",
"EleutherAI/proof-pile-2",
"SciPhi/AgentSearch-V1",
"xavierdurawa/proof-pile-2-streaming",
"Alignment-Lab-AI/Open-Web-Math",
"BEE-spoke-data/open-web-math-minhash"
] | [] | 1 | poster |
null | https://openreview.net/forum?id=4c6s9is9DV | @inproceedings{
bin2023solving,
title={Solving Math Word Problems with Reexamination},
author={Yi Bin and WENHAO SHI and Yujuan Ding and Yang Yang and See-Kiong Ng},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=4c6s9is9DV}
} | Math word problem (MWP) solving aims to understand the descriptive math problem and calculate the result, for which previous efforts are mostly devoted to upgrade different technical modules. This paper brings a different and novel perspective of *reexamination process* during training by introducing a pseudo-dual task to enhance the MWP solving.
We propose a pseudo-dual (PseDual) learning scheme to model such process, which is model-agnostic thus can be adapted to any existing MWP solvers. The pseudo-dual task is specifically defined as filling the numbers in the expression back into the original word problem with numbers masked. To facilitate the effective joint learning of the two tasks, we further design a scheduled fusion strategy for the number infilling task, which smoothly switches the input from the ground-truth math expressions to the predicted ones. Our pseudo-dual learning scheme has been tested and proven effective when being equipped in several representative MWP solvers through empirical studies. *The codes and trained models are available at:* \url{https://github.com/steven640pixel/PsedualMWP}. | Solving Math Word Problems with Reexamination | [
"Yi Bin",
"WENHAO SHI",
"Yujuan Ding",
"Yang Yang",
"See-Kiong Ng"
] | Workshop/MATH-AI | 2310.09590 | [
"https://github.com/steven640pixel/psedualmwp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=1dD5cJCida | @inproceedings{
alfarano2023discovering,
title={Discovering Lyapunov functions with transformers},
author={Alberto Alfarano and Francois Charton and Amaury Hayat},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=1dD5cJCida}
} | We consider a long-standing open problem in mathematics: discovering the Lyapunov functions that control the global stability of dynamical systems. We propose a method for generating training data, and train sequence-to-sequence transformers to predict the Lyapunov functions of polynomial and non-polynomial systems with high accuracy. We also introduce a new baseline for this problem, and show that our models achieve state-of-the-art results, and outperform approximation based techniques and sum-of-square algorithmic routines. | Discovering Lyapunov functions with transformers | [
"Alberto Alfarano",
"Francois Charton",
"Amaury Hayat"
] | Workshop/MATH-AI | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=0QHZrCWCH0 | @inproceedings{
azerbayev2023llemma,
title={Llemma: An Open Language Model For Mathematics},
author={Zhangir Azerbayev and Hailey Schoelkopf and Keiran Paster and Marco Dos Santos and Stephen McAleer and Albert Jiang and Jia Deng and Stella Biderman and Sean Welleck},
booktitle={The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=0QHZrCWCH0}
} | We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark Llemma outperforms all known openly released models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments. | Llemma: An Open Language Model For Mathematics | [
"Zhangir Azerbayev",
"Hailey Schoelkopf",
"Keiran Paster",
"Marco Dos Santos",
"Stephen McAleer",
"Albert Jiang",
"Jia Deng",
"Stella Biderman",
"Sean Welleck"
] | Workshop/MATH-AI | 2310.10631 | [
"https://github.com/EleutherAI/math-lm"
] | https://huggingface.co/papers/2310.10631 | 7 | 50 | 6 | 9 | [
"stabilityai/stable-code-3b",
"EleutherAI/llemma_7b",
"EleutherAI/llemma_34b",
"TheBloke/stable-code-3b-GGUF",
"TheBloke/llemma_7b-GGUF",
"meta-math/MetaMath-Llemma-7B",
"TheBloke/stable-code-3b-GPTQ",
"TheBloke/llemma_34b-GGUF",
"TheBloke/llemma_34b-AWQ",
"TheBloke/llemma_7b-AWQ",
"TheBloke/llemma_34b-GPTQ",
"osanseviero/stable-code-3b-Q2_K-GGUF",
"piercemaloney/llemma_7b",
"QuantFactory/stable-code-3b-GGUF",
"TheBloke/llemma_7b-GPTQ",
"TechxGenus/stable-code-3b-GPTQ",
"TechxGenus/stable-code-3b-AWQ",
"RichardErkhov/stabilityai_-_stable-code-3b-8bits",
"RichardErkhov/stabilityai_-_stable-code-3b-4bits",
"akswelh/NEOX"
] | [
"EleutherAI/proof-pile-2",
"xavierdurawa/proof-pile-2-streaming",
"xu3kev/proof-pile-2-proofsteps"
] | [
"bigcode/bigcode-models-leaderboard",
"prometheus-eval/BiGGen-Bench-Leaderboard",
"pdehaye/EleutherAI-llemma_34b",
"YANGSongsong/StableCodeDemo",
"alKoGolik/codellama-CodeLlama-7b-hf",
"Tonic/stablecode2",
"Tomoniai/Stablecode-Chat",
"CHLOzzz/EleutherAI-llemma_34b",
"Ralake/EleutherAI-llemma_34b",
"Brad212/EleutherAI-llemma_34b",
"razivo/EleutherAI-llemma_34b",
"R0KG/EleutherAI-llemma_34b",
"arkorerk/EleutherAI-llemma_34b",
"Soleup/EleutherAI-llemma_34b",
"hxllvh/EleutherAI-llemma_34b",
"MagnusHedegaard/EleutherAI-llemma_34b",
"mhovd/LLEMMA34B",
"algorithm6174/EleutherAI-llemma_34b",
"FreddieSpaghetti/EleutherAI-llemma_34b",
"mzlam/EleutherAI-llemma_34b",
"svsvs/EleutherAI-llemma_34b",
"Ralake/EleutherAI-llemma_7b",
"phiarchitect/EleutherAI-llemma_7b",
"splhadi/EleutherAI-llemma_7b",
"32133as/EleutherAI-llemma_7b",
"shekharp77/neural-sphere",
"Phoenixml/EleutherAI-llemma_7b",
"abdelaty/EleutherAI-llemma_7b",
"arkorerk/EleutherAI-llemma_7b",
"usercdp/EleutherAI-llemma_7b",
"deepbrain/EleutherAI-llemma_7b",
"bhaon/EleutherAI-llemma_7b",
"LuckyTheBest/EleutherAI-llemma_7b",
"madewithstone/EleutherAI-llemma_7b",
"KKrampis/MetaMath-Llemma-7B",
"sombochea/stcode-demo",
"HansenYan/stabilityai-stable-code-3b",
"Chris4K/stcode-demo",
"ColeGuion/myspaceee",
"zumwaltboi/stabilityai-stable-code-3b",
"spencert/stabilityai-stable-code-3b",
"Wakarimashita01/stabilityai-stable-code-3b",
"reshinthadith/CodeGen-Diversity",
"alKoGolik/asd",
"tanghe168/stabilityai-stable-code-3b",
"NotHeso/stabilityai-stable-code-3b",
"vemas/stabilityai-stable-code-3b",
"K00B404/codellama-CodeLlama-7b-hf"
] | [
"stabilityai/stable-code-3b",
"EleutherAI/llemma_7b",
"EleutherAI/llemma_34b",
"TheBloke/stable-code-3b-GGUF",
"TheBloke/llemma_7b-GGUF",
"meta-math/MetaMath-Llemma-7B",
"TheBloke/stable-code-3b-GPTQ",
"TheBloke/llemma_34b-GGUF",
"TheBloke/llemma_34b-AWQ",
"TheBloke/llemma_7b-AWQ",
"TheBloke/llemma_34b-GPTQ",
"osanseviero/stable-code-3b-Q2_K-GGUF",
"piercemaloney/llemma_7b",
"QuantFactory/stable-code-3b-GGUF",
"TheBloke/llemma_7b-GPTQ",
"TechxGenus/stable-code-3b-GPTQ",
"TechxGenus/stable-code-3b-AWQ",
"RichardErkhov/stabilityai_-_stable-code-3b-8bits",
"RichardErkhov/stabilityai_-_stable-code-3b-4bits",
"akswelh/NEOX"
] | [
"EleutherAI/proof-pile-2",
"xavierdurawa/proof-pile-2-streaming",
"xu3kev/proof-pile-2-proofsteps"
] | [
"bigcode/bigcode-models-leaderboard",
"prometheus-eval/BiGGen-Bench-Leaderboard",
"pdehaye/EleutherAI-llemma_34b",
"YANGSongsong/StableCodeDemo",
"alKoGolik/codellama-CodeLlama-7b-hf",
"Tonic/stablecode2",
"Tomoniai/Stablecode-Chat",
"CHLOzzz/EleutherAI-llemma_34b",
"Ralake/EleutherAI-llemma_34b",
"Brad212/EleutherAI-llemma_34b",
"razivo/EleutherAI-llemma_34b",
"R0KG/EleutherAI-llemma_34b",
"arkorerk/EleutherAI-llemma_34b",
"Soleup/EleutherAI-llemma_34b",
"hxllvh/EleutherAI-llemma_34b",
"MagnusHedegaard/EleutherAI-llemma_34b",
"mhovd/LLEMMA34B",
"algorithm6174/EleutherAI-llemma_34b",
"FreddieSpaghetti/EleutherAI-llemma_34b",
"mzlam/EleutherAI-llemma_34b",
"svsvs/EleutherAI-llemma_34b",
"Ralake/EleutherAI-llemma_7b",
"phiarchitect/EleutherAI-llemma_7b",
"splhadi/EleutherAI-llemma_7b",
"32133as/EleutherAI-llemma_7b",
"shekharp77/neural-sphere",
"Phoenixml/EleutherAI-llemma_7b",
"abdelaty/EleutherAI-llemma_7b",
"arkorerk/EleutherAI-llemma_7b",
"usercdp/EleutherAI-llemma_7b",
"deepbrain/EleutherAI-llemma_7b",
"bhaon/EleutherAI-llemma_7b",
"LuckyTheBest/EleutherAI-llemma_7b",
"madewithstone/EleutherAI-llemma_7b",
"KKrampis/MetaMath-Llemma-7B",
"sombochea/stcode-demo",
"HansenYan/stabilityai-stable-code-3b",
"Chris4K/stcode-demo",
"ColeGuion/myspaceee",
"zumwaltboi/stabilityai-stable-code-3b",
"spencert/stabilityai-stable-code-3b",
"Wakarimashita01/stabilityai-stable-code-3b",
"reshinthadith/CodeGen-Diversity",
"alKoGolik/asd",
"tanghe168/stabilityai-stable-code-3b",
"NotHeso/stabilityai-stable-code-3b",
"vemas/stabilityai-stable-code-3b",
"K00B404/codellama-CodeLlama-7b-hf"
] | 1 | poster |
null | https://openreview.net/forum?id=zN0XIPH1WG | @inproceedings{
celik2023reinforcement,
title={Reinforcement Learning of Diverse Skills using Mixture of Deep Experts},
author={Onur Celik and Aleksandar Taranovic and Gerhard Neumann},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=zN0XIPH1WG}
} | Agents that can acquire diverse skills to solve the same task have a benefit over other agents if e.g. unexpected environmental changes occur.
However, Reinforcement Learning (RL) policies mainly rely on Gaussian parameterization, preventing them from learning multi-modal, diverse skills. In this work, we propose a novel RL approach for training policies that exhibit diverse behavior. To this end, we propose a highly non-linear Mixture of Experts (MoE) as the policy representation, where each expert formalizes a skill as a contextual motion primitive. The context defines the task, which can be for instance the goal reaching position of the agent, or changing physical parameters like friction. Given a context, our trained policy first selects an expert out of the repertoire of skills and subsequently adapts the parameters of the contextual motion primitive.
To incentivize our policy to learn diverse skills, we leverage a maximum entropy objective combined with a per-expert context distribution that we optimize alongside each expert. The per-expert context distribution allows each expert to focus on a context sub-space and boost learning speed. However, these distributions need to be able to represent multi-modality and hard discontinuities in the environment's context probability space. We solve these requirements by leveraging energy-based models to represent the per-expert context distributions and show how we can efficiently train them using the standard policy gradient objective. | Reinforcement Learning of Diverse Skills using Mixture of Deep Experts | [
"Onur Celik",
"Aleksandar Taranovic",
"Gerhard Neumann"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=xALDC4aHGz | @inproceedings{
nikulin2023xlandminigrid,
title={{XL}and-MiniGrid: Scalable Meta-Reinforcement Learning Environments in {JAX}},
author={Alexander Nikulin and Vladislav Kurenkov and Ilya Zisman and Viacheslav Sinii and Artem Agarkov and Sergey Kolesnikov},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=xALDC4aHGz}
} | We present XLand-Minigrid, a suite of tools and grid-world environments for meta-reinforcement learning research inspired by the diversity and depth of XLand and the simplicity and minimalism of MiniGrid. XLand-Minigrid is written in JAX, designed to be highly scalable, and can potentially run on GPU or TPU accelerators, democratizing large-scale experimentation with limited resources. To demonstrate the generality of our library, we have implemented some well-known single-task environments as well as new meta-learning environments capable of generating $10^8$ distinct tasks. We have empirically shown that the proposed environments can scale up to $2^{13}$ parallel instances on the GPU, reaching tens of millions of steps per second. | XLand-MiniGrid: Scalable Meta-Reinforcement Learning Environments in JAX | [
"Alexander Nikulin",
"Vladislav Kurenkov",
"Ilya Zisman",
"Viacheslav Sinii",
"Artem Agarkov",
"Sergey Kolesnikov"
] | Workshop/IMOL | 2312.12044 | [
"https://github.com/corl-team/xland-minigrid"
] | https://huggingface.co/papers/2312.12044 | 4 | 4 | 1 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=tf8R7qX46e | @inproceedings{
nguyen2023progressively,
title={Progressively Efficient Communication},
author={Khanh Nguyen and Ruijie Zheng and Hal Daum{\'e} III and Furong Huang and Karthik Narasimhan},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=tf8R7qX46e}
} | Assistant AI agents should be capable of rapidly acquiring novel skills and adapting to new user preferences. Traditional frameworks like imitation learning and reinforcement learning do not facilitate this capability because they support only low-level, inefficient forms of communication. In contrast, humans communicate with progressive efficiency by defining and sharing abstract intentions. Reproducing similar capability in AI agents, we develop a novel learning framework named Communication-Efficient Interactive Learning (CEIL). By equipping a learning agent with an abstract, dynamic language and an intrinsic motivation to learn with minimal communication effort, CEIL leads to emergence of a human-like pattern where the learner and the teacher communicate progressively efficiently by exchanging increasingly more abstract intentions. CEIL demonstrates impressive performance and communication efficiency in a 2D MineCraft domain featuring long-horizon decision-making tasks. Agents trained with CEIL quickly master new tasks, outperforming non-hierarchical and hierarchical imitation learning by up to 50% and 20% in absolute success rate, respectively, given the same number of interactions with the teacher. Especially, the framework performs robustly with teachers modeled after human pragmatic communication behavior. | Progressively Efficient Communication | [
"Khanh Nguyen",
"Ruijie Zheng",
"Hal Daumé III",
"Furong Huang",
"Karthik Narasimhan"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=scD9otIPoF | @inproceedings{
wang2023towards,
title={Towards a General Framework for Continual Learning with Pre-training},
author={Liyuan Wang and Jingyi Xie and Xingxing Zhang and Hang Su and Jun Zhu},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=scD9otIPoF}
} | In this work, we present a general framework for continual learning of sequentially arrived tasks with the use of pre-training, which has emerged as a promising direction for artificial intelligence systems to accommodate real-world dynamics.
From a theoretical perspective, we decompose its objective into three hierarchical components, including within-task prediction, task-identity inference, and task-adaptive prediction. Then we propose an innovative approach to explicitly optimize these components with parameter-efficient fine-tuning (PEFT) techniques and representation statistics. We empirically demonstrate the superiority and generality of our approach in downstream continual learning, and further explore the applicability of PEFT techniques in upstream continual learning. We expect this to provide an important technical foundation for intrinsically motivated open-ended learning. | Towards a General Framework for Continual Learning with Pre-training | [
"Liyuan Wang",
"Jingyi Xie",
"Xingxing Zhang",
"Hang Su",
"Jun Zhu"
] | Workshop/IMOL | 2310.13888 | [
"https://github.com/thu-ml/hide-prompt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=qminWfnGaA | @inproceedings{
doyle2023intrinsically,
title={Intrinsically Motivated Social Play in Virtual Infants},
author={Chris Doyle and Sarah Shader and Michelle Lau and Megumi Sano and Daniel Yamins and Nick Haber},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=qminWfnGaA}
} | Infants explore their complex physical and social environment in an organized way. To gain insight into what intrinsic motivations may help structure this exploration, we create a virtual infant agent and place it in a developmentally-inspired 3D environment with no external rewards. The environment has a virtual caregiver agent with the capability to interact contingently with the infant agent in ways that resemble play. We test intrinsic reward functions that are similar to motivations that have been proposed to drive exploration in humans: surprise, uncertainty, novelty, and learning progress. The reward functions that are proxies for novelty and uncertainty are the most successful in generating diverse experiences and activating the environment contingencies. We also find that learning a world model in the presence of an attentive caregiver helps the infant agent learn how to predict scenarios with challenging social and physical dynamics. Our findings provide insight into how curiosity-like intrinsic rewards and contingent social interaction lead to social behavior and the creation of a robust predictive world model. | Intrinsically Motivated Social Play in Virtual Infants | [
"Chris Doyle",
"Sarah Shader",
"Michelle Lau",
"Megumi Sano",
"Daniel Yamins",
"Nick Haber"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=nfx5IutEed | @inproceedings{
wang2023voyager,
title={Voyager: An Open-Ended Embodied Agent with Large Language Models},
author={Guanzhi Wang and Yuqi Xie and Yunfan Jiang and Ajay Mandlekar and Chaowei Xiao and Yuke Zhu and Linxi Fan and Anima Anandkumar},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=nfx5IutEed}
} | We introduce Voyager, the first LLM-powered embodied lifelong learning agent in an open-ended world that continuously explores, acquires diverse skills, and makes novel discoveries without human intervention in Minecraft. Voyager consists of three key components: 1) an automatic curriculum that maximizes exploration, 2) an ever-growing skill library of executable code for storing and retrieving complex behaviors, and 3) a new iterative prompting mechanism that incorporates environment feedback, execution errors, and self-verification for program improvement. Voyager interacts with GPT-4 via blackbox queries, which bypasses the need for model parameter fine-tuning. The skills developed by Voyager are temporally extended, interpretable, and compositional, which compounds the agent’s capability rapidly and alleviates catastrophic forgetting. Empirically, Voyager demonstrates strong in-context lifelong learning capabilities. It outperforms prior SOTA by obtaining 3.1x more unique items, unlocking tech tree milestones up to 15.3x faster, and traveling 2.3x longer distances. Voyager is able to utilize the learned skill library in a new Minecraft world to solve novel tasks from scratch, while other techniques struggle to generalize. | Voyager: An Open-Ended Embodied Agent with Large Language Models | [
"Guanzhi Wang",
"Yuqi Xie",
"Yunfan Jiang",
"Ajay Mandlekar",
"Chaowei Xiao",
"Yuke Zhu",
"Linxi Fan",
"Anima Anandkumar"
] | Workshop/IMOL | 2305.16291 | [
"https://github.com/MineDojo/Voyager"
] | https://huggingface.co/papers/2305.16291 | 4 | 9 | 4 | 8 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=iaib9N3iB8 | @inproceedings{
freire2023highfidelity,
title={High-fidelity social learning via shared episodic memories can improve collaborative foraging},
author={Ismael Freire and Paul Verschure},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=iaib9N3iB8}
} | Social learning, a cornerstone of cultural evolution, allows individuals to acquire knowledge by observing and imitating others. Central to its efficacy is episodic memory, which records specific behavioral sequences to facilitate learning. This study examines the interrelation between social learning and episodic memory in the context of collaborative foraging. Specifically, we examine how variations in the frequency and fidelity of social learning impact collaborative foraging, and how the length of behavioral sequences preserved in agents’ episodic memory modulates these factors. To this end, we deploy Sequential Episodic Control agents capable of sharing among them behavioral sequences stored in their episodic memories. Our findings indicate that high-frequency, high-fidelity social learning promotes more distributed and efficient resource collection, a benefit that remains consistent regardless of the length of the shared episodic memories. In contrast, low-fidelity social learning shows no advantages over non-social learning in terms of resource acquisition. In addition, storing and disseminating longer episodic memories contribute to enhanced performance up to a certain threshold, beyond which increased memory capacity does not yield further benefits. Our findings emphasize the crucial role of high-fidelity social learning in collaborative foraging, and illuminate the intricate relationship between episodic memory capacity and the quality and frequency of social learning. This work aims to highlight the potential of neuro-computational models like episodic control algorithms in understanding social learning and offers a new perspective for investigating the cognitive mechanisms underlying open-ended cultural evolution. | High-fidelity social learning via shared episodic memories can improve collaborative foraging | [
"Ismael Freire",
"Paul Verschure"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=YYndPojV26 | @inproceedings{
lee2023imprinting,
title={Imprinting in autonomous artificial agents using deep reinforcement learning},
author={Donsuk Lee and Samantha Wood and Justin Wood},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=YYndPojV26}
} | Imprinting is a common survival strategy in which an animal learns a lasting preference for its parents and siblings early in life. To date, however, the origins and computational foundations of imprinting have not been formally established. What learning mechanisms generate imprinting behavior in newborn animals? Here, we used deep reinforcement learning and intrinsic motivation (curiosity), two learning mechanisms deeply rooted in psychology and neuroscience, to build autonomous artificial agents that imprint. When we raised our artificial agents together in the same environment, akin to the early social experiences of newborn animals, the agents spontaneously developed imprinting behavior. Our results provide a pixels-to-actions computational model of animal imprinting. We show that domain-general learning mechanisms—deep reinforcement learning and intrinsic motivation—are sufficient for embodied agents to rapidly learn core social behaviors from unsupervised natural experience. | Imprinting in autonomous artificial agents using deep reinforcement learning | [
"Donsuk Lee",
"Samantha Wood",
"Justin Wood"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=WX0vdLEG0q | @inproceedings{
kauvar2023neurobehavior,
title={Neurobehavior of exploring {AI} agents},
author={Isaac Kauvar and Chris Doyle and Nick Haber},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=WX0vdLEG0q}
} | We study intrinsically motivated exploration by artificially intelligent (AI) agents in animal-inspired settings. We construct virtual environments that are 3D, vision-based, physics-simulated, and based on two established animal assays: labyrinth exploration, and novel object interaction. We assess Plan2Explore (P2E), a leading model-based, intrinsically motivated deep reinforcement learning agent, in these environments. We characterize and compare the behavior of the AI agents to animal behavior, using measures devised for animal neuroethology. P2E exhibits some similarities to animal behavior, but is dramatically less efficient than mice at labyrinth exploration. We further characterize the neural dynamics associated with world modeling in the novel-object assay. We identify latent neural population activity axes linearly associated with representing object proximity. These results identify areas of improvement for existing AI agents, and make strides toward understanding the learned neural dynamics that guide their behavior. | Neurobehavior of exploring AI agents | [
"Isaac Kauvar",
"Chris Doyle",
"Nick Haber"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=UawMOafW3f | @inproceedings{
zadem2023emergence,
title={Emergence of a Symbolic Goal Representation with an Intelligent Tutoring System based on Intrinsic Motivation},
author={Mehdi Zadem and Sergio Mover and Sao Mai Nguyen},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=UawMOafW3f}
} | Goal representation affects the performance of Hierarchical Reinforcement Learning (HRL) algorithms by decomposing complex problems into easier subtasks. Recent studies show that representations that preserve temporally abstract environment dynamics are successful in solving difficult problems with theoretical guarantees for optimality. These methods however cannot scale to tasks where environment dynamics increase in complexity. On the other hand, other efforts have tried to use spatial abstraction to mitigate the previous issues. Their limitations include scalability to high dimensional environments and dependency on prior knowledge.
In this work, we propose a novel three-layer HRL algorithm that introduces, at different levels of the hierarchy, both a spatial and a temporal goal abstraction. We provide a theoretical study of the regret bounds of the learned policies. We evaluate the approach on complex continuous control tasks, demonstrating the effectiveness of spatial and temporal abstractions learned by this approach. | Emergence of a Symbolic Goal Representation with an Intelligent Tutoring System based on Intrinsic Motivation | [
"Mehdi Zadem",
"Sergio Mover",
"Sao Mai Nguyen"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=UKb6aHxs1f | @inproceedings{
du2023what,
title={What can {AI} Learn from Human Exploration? Intrinsically-Motivated Humans and Agents in Open-World Exploration},
author={Yuqing Du and Eliza Kosoy and Alyssa Dayan and Maria Rufova and Pieter Abbeel and Alison Gopnik},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=UKb6aHxs1f}
} | What drives exploration? Understanding intrinsic motivation is a long-standing question in both cognitive science and artificial intelligence (AI); numerous exploration objectives have been proposed and tested in human experiments and used to train reinforcement learning (RL) agents. However, experiments in the former are often in simplistic environments that do not capture the complexity of real world exploration. On the other hand, experiments in the latter use more complex environments, yet the trained RL agents fail to come close to human exploration efficiency. To study this gap, we propose a framework for directly comparing human and agent exploration in an open-ended environment, Crafter. We study how well commonly-proposed information theoretic objectives for intrinsic motivation relate to actual human and agent behaviours, finding that human exploration consistently shows a significant positive correlation with Entropy, Information Gain, and Empowerment. Surprisingly, we find that intrinsically-motivated RL agent exploration does not show the same significant correlation consistently, despite being designed to optimize objectives that approximate Entropy or Information Gain. In a preliminary analysis of verbalizations, we find that children's verbalizations of goals positively correlates strongly with Empowerment, suggesting that goal-setting may be an important aspect of efficient exploration. | What can AI Learn from Human Exploration? Intrinsically-Motivated Humans and Agents in Open-World Exploration | [
"Yuqing Du",
"Eliza Kosoy",
"Alyssa Dayan",
"Maria Rufova",
"Pieter Abbeel",
"Alison Gopnik"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=SvPeUEf67f | @inproceedings{
hwang2023neuroinspired,
title={Neuro-Inspired Fragmentation and Recall to Overcome Catastrophic Forgetting in Curiosity},
author={Jaedong Hwang and Zhang-Wei Hong and Eric Chen and Akhilan Boopathy and Pulkit Agrawal and Ila Fiete},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=SvPeUEf67f}
} | Intrinsic reward functions are widely used to improve exploration in reinforcement learning. We first examine the conditions and causes of catastrophic forgetting of the intrinsic reward function, and propose a new method, FARCuriosity, inspired by how humans and non-human animals learn. The method depends on fragmentation and recall: an agent fragments an environment based on surprisal signals, and uses different local curiosity modules (prediction-based intrinsic reward functions) for each fragment so that modules are not trained on the entire environment. At each fragmentation event, the agent stores the current module in long-term memory (LTM) and either initializes a new module or recalls a previously stored module based on its match with the current state. With fragmentation and recall, FARCuriosity achieves less forgetting and better overall performance in games with varied and heterogeneous environments in the Atari benchmark suite of tasks. Thus, this work highlights the problem of catastrophic forgetting in prediction-based curiosity methods and proposes a first solution. | Neuro-Inspired Fragmentation and Recall to Overcome Catastrophic Forgetting in Curiosity | [
"Jaedong Hwang",
"Zhang-Wei Hong",
"Eric Chen",
"Akhilan Boopathy",
"Pulkit Agrawal",
"Ila Fiete"
] | Workshop/IMOL | 2310.17537 | [
"https://github.com/fietelab/farcuriosity"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RoQbZRv1zw | @inproceedings{
ferraro2023focus,
title={{FOCUS}: Object-Centric World Models for Robotic Manipulation},
author={Stefano Ferraro and Pietro Mazzaglia and Tim Verbelen and Bart Dhoedt},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=RoQbZRv1zw}
} | Understanding the world in terms of objects and the possible interactions with them is an important cognitive ability, especially in robotic manipulation. However, learning a structured world model that allows controlling the agent accurately remains a challenge. To address this, we propose FOCUS, a model-based agent that learns an object-centric world model. The learned representation makes it possible to provide the agent with an object-centric exploration mechanism, which encourages the agent to interact with objects and discover useful interactions. We apply FOCUS in several robotic manipulation settings where we show how our method fosters interactions such as reaching, moving, and rotating the objects in the environment. We further show how this ability to autonomously interact with objects can be used to quickly solve a given task using reinforcement learning with sparse rewards. | FOCUS: Object-Centric World Models for Robotic Manipulation | [
"Stefano Ferraro",
"Pietro Mazzaglia",
"Tim Verbelen",
"Bart Dhoedt"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RKGLAGayrt | @inproceedings{
oliveira2023deepthought,
title={DeepThought: an architecture for autonomous self-motivated systems},
author={Arlindo Oliveira and Tiago Domingos and Mario Figueiredo and Pedro Lima},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=RKGLAGayrt}
} | The ability of large language models (LLMs) to engage in credible dialogues with humans, taking into account the training data and the context of the conversation, has raised discussions about their ability to exhibit intrinsic motivations, agency, or even some degree of consciousness. We argue that the internal architecture of LLMs and their finite and volatile state cannot support any of these properties. By combining insights from complementary learning systems, global neuronal workspace, and attention schema theories, we propose to integrate LLMs and other deep learning systems into an architecture for cognitive language agents able to exhibit properties akin to agency, self-motivation, and even some features of meta-cognition. | DeepThought: an architecture for autonomous self-motivated systems | [
"Arlindo Oliveira",
"Tiago Domingos",
"Mario Figueiredo",
"Pedro Lima"
] | Workshop/IMOL | 2311.08547 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Pqx2EED04F | @inproceedings{
sancaktar2023regularity,
title={Regularity as Intrinsic Reward for Free Play},
author={Cansu Sancaktar and Justus Piater and Georg Martius},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=Pqx2EED04F}
} | We propose regularity as a novel reward signal for intrinsically-motivated reinforcement learning. Taking inspiration from child development, we postulate that striving for structure and order helps guide exploration towards a subspace of tasks that are not favored by naive uncertainty-based intrinsic rewards. Our generalized formulation of Regularity as Intrinsic Reward (RaIR) allows us to operationalize it within model-based reinforcement learning. In a synthetic environment, we showcase the plethora of structured patterns that can emerge from pursuing this regularity objective. We also demonstrate the strength of our method in a multi-object robotic manipulation environment. We incorporate RaIR into free play and use it to complement the model’s epistemic uncertainty as an intrinsic reward. Doing so, we witness the autonomous construction of towers and other regular structures during free play, which leads to a substantial improvement in zero-shot downstream task performance on assembly tasks. | Regularity as Intrinsic Reward for Free Play | [
"Cansu Sancaktar",
"Justus Piater",
"Georg Martius"
] | Workshop/IMOL | 2312.01473 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=LhbsXSEfGQ | @inproceedings{
yiu2023children,
title={Children prioritize purely exploratory actions in observe-vs.-bet tasks},
author={Eunice Yiu and Kai Sandbrink and Alison Gopnik},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=LhbsXSEfGQ}
} | In reinforcement learning, agents often need to decide between selecting actions that are familiar and have previously yielded positive results (exploitation), and seeking new information that could allow them to uncover more effective actions (exploration). Understanding the specific kinds of heuristics and strategies that humans employ to solve this problem over the course of their development remains an open question in cognitive science and AI. In this study we develop an "observe or bet" task that separates "pure exploration" from "pure exploitation." Participants have the option to either observe an instance of an outcome and receive no reward, or to bet on an action that is eventually rewarding, but offers no immediate feedback. We collected data from 56 five-to-seven-year-old children who completed the task at one of three different probability levels. We compared how children performed against both approximate solutions to the partially-observable Markov decision process and meta-RL models that were meta trained on the same decision making task across different probability levels. We found that the children observe significantly more than the two classes of algorithms. We then quantified how children's policies differ between the different probability levels by fitting probabilistic programming models and by calculating the likelihood of the children's actions under the task-driven model. The fitted parameters of the behavioral model, as well as the direction of the deviation from neural network policies, demonstrate that children primarily change the frequency with which they bet on the door for which they have less evidence. This suggests both that children model the causal structure of the environment and that they produce a "hedging behavior" that would be impossible to detect in standard bandit tasks, and that reduces variance in overall rewards. The results shed light on how children reason about reward and information, providing a developmental benchmark that can help shape our understanding of both human behavior and RL neural network models. | Children prioritize purely exploratory actions in observe-vs.-bet tasks | [
"Eunice Yiu",
"Kai Sandbrink",
"Alison Gopnik"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Lf8rlcpfFV | @inproceedings{
adeniji2023skillbased,
title={Skill-Based Reinforcement Learning with Intrinsic Reward Matching},
author={Ademi Adeniji and Amber Xie and Pieter Abbeel},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=Lf8rlcpfFV}
} | While unsupervised skill discovery has shown promise in autonomously acquiring behavioral primitives, there is still a large methodological disconnect between task-agnostic skill pretraining and downstream, task-aware finetuning. We present Intrinsic Reward Matching (IRM), which unifies these two phases of learning via the $\textit{skill discriminator}$, a pretraining model component often discarded during finetuning. Conventional approaches finetune pretrained agents directly at the policy level, often relying on expensive environment rollouts to empirically determine the optimal skill. However, often the most concise yet complete description of a task is the reward function itself, and skill learning methods learn an $\textit{intrinsic}$ reward function via the discriminator that corresponds to the skill policy. We propose to leverage the skill discriminator to $\textit{match}$ the intrinsic and downstream task rewards and determine the optimal skill for an unseen task without environment samples on a Fetch tabletop manipulation task suite. | Skill-Based Reinforcement Learning with Intrinsic Reward Matching | [
"Ademi Adeniji",
"Amber Xie",
"Pieter Abbeel"
] | Workshop/IMOL | 2210.07426 | [
"https://github.com/ademiadeniji/irm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=JYHcnyipNG | @inproceedings{
dahmani2023from,
title={From Child's Play to {AI}: Insights into Automated Causal Curriculum Learning},
author={Annya Dahmani and Eunice Yiu and Tabitha Lee and Nan Ke and Oliver Kroemer and Alison Gopnik},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=JYHcnyipNG}
} | We study how reinforcement learning algorithms and children develop a causal curriculum to achieve a challenging goal that is not solvable at first. Adopting the Procgen environments that include various challenging tasks, we found that 5- to 7-year-old children actively used their current level competence to determine their next step in the curriculum and made improvements to their performance during this process as a result. This suggests that children treat their level competence as an intrinsic reward, and are motivated to master easier levels in order to do better at the more difficult one, even without explicit reward. To evaluate RL agents, we exposed them to the same demanding Procgen environments as children and employed several curriculum learning methodologies. Our results demonstrate that RL agents that emulate children by incorporating level competence as an additional reward signal exhibit greater stability and are more likely to converge during training, compared to RL agents that are solely reliant on extrinsic reward signals for game-solving. Curriculum learning may also offer a significant reduction in the number of frames needed to solve a target environment. Taken together, our human-inspired findings suggest a potential path forward for addressing catastrophic forgetting or domain shift during curriculum learning in RL agents. | From Child's Play to AI: Insights into Automated Causal Curriculum Learning | [
"Annya Dahmani",
"Eunice Yiu",
"Tabitha Lee",
"Nan Ke",
"Oliver Kroemer",
"Alison Gopnik"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=IAX12qUjYk | @inproceedings{
zheng2023stabilizing,
title={Stabilizing Contrastive {RL}: Techniques for Robotic Goal Reaching from Offline Data},
author={Chongyi Zheng and Benjamin Eysenbach and Homer Walke and Patrick Yin and Kuan Fang and Ruslan Salakhutdinov and Sergey Levine},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=IAX12qUjYk}
} | Robotic systems that rely primarily on self-supervised learning have the potential to decrease the amount of human annotation and engineering effort required to learn control strategies. In the same way that prior robotic systems have leveraged self-supervised techniques from computer vision (CV) and natural language processing (NLP), our work builds on prior work showing that the reinforcement learning (RL) itself can be cast as a self-supervised problem: learning to reach any goal without human-specified rewards or labels. Despite the seeming appeal, little (if any) prior work has demonstrated how self-supervised RL methods can be practically deployed on robotic systems. By first studying a challenging simulated version of this task, we discover design decisions about architectures and hyperparameters that increase the success rate by $2 \times$. These findings lay the groundwork for our main result: we demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks, with tasks being specified by a single goal image provided after training. | Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data | [
"Chongyi Zheng",
"Benjamin Eysenbach",
"Homer Walke",
"Patrick Yin",
"Kuan Fang",
"Ruslan Salakhutdinov",
"Sergey Levine"
] | Workshop/IMOL | 2306.03346 | [
"https://github.com/chongyi-zheng/stable_contrastive_rl"
] | https://huggingface.co/papers/2306.03346 | 0 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=HD18lHluSR | @inproceedings{
ge2023enhancing,
title={Enhancing Understanding in Generative Agents through Active Inquiring},
author={Jiaxin Ge and Kaiya Zhao and Manuel Cortes and Jovana Kondic and Shuying Luo and Michelangelo Naim and Andrew Ahn and Guangyu Robert Yang},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=HD18lHluSR}
} | As artificial intelligence advances, Large Language Models (LLMs) have evolved beyond being just tools, becoming more like human-like agents that can converse, reflect, plan, and set goals. However, these models still struggle with open-ended question answering and often fail to understand unfamiliar scenarios quickly. To address this, we ask: how do humans manage strange situations so effectively? We believe it’s largely due to our natural instinct for curiosity and a built-in desire to predict the future and seek explanations when those predictions don’t align with reality. Unlike humans, LLMs typically accept information passively without an inherent desire to question or doubt, which could be why they struggle to understand new situations.
Focusing on this, our study explores the possibility of equipping LLM-agents with human-like curiosity. Can these models move from being passive processors to active seekers of understanding, reflecting human behaviors? And can this adaptation benefit them as it does humans? To explore this, we introduce an innovative experimental framework where generative agents navigate through strange and unfamiliar situations, and their understanding is then assessed through interview questions about those situations. Initial results show notable improvements when models are equipped with traits of surprise and inquiry compared to those without. This research is a step towards creating more human-like agents and highlights the potential benefits of integrating human-like traits in models. | Enhancing Understanding in Generative Agents through Active Inquiring | [
"Jiaxin Ge",
"Kaiya Zhao",
"Manuel Cortes",
"Jovana Kondic",
"Shuying Luo",
"Michelangelo Naim",
"Andrew Ahn",
"Guangyu Robert Yang"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GgzfIBxa18 | @inproceedings{
teodorescu2023codeplay,
title={Codeplay: Autotelic Learning through Collaborative Self-Play in Programming Environments},
author={Laetitia Teodorescu and C{\'e}dric Colas and Matthew Bowers and Thomas Carta and Pierre-Yves Oudeyer},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=GgzfIBxa18}
} | Autotelic learning is the training setup where agents learn by setting their own goals and trying to achieve them. However, creatively generating freeform goals is challenging for autotelic agents. We present Codeplay, an algorithm casting autotelic learning as a game between a Setter agent and a Solver agent, where the Setter generates programming puzzles of appropriate difficulty and novelty for the solver and the Solver learns to achieve them. Early experiments with the Setter demonstrates one can effectively control the tradeoff between difficulty of a puzzle and its novelty by tuning the reward of the Setter, a code language model finetuned with deep reinforcement learning. | Codeplay: Autotelic Learning through Collaborative Self-Play in Programming Environments | [
"Laetitia Teodorescu",
"Cédric Colas",
"Matthew Bowers",
"Thomas Carta",
"Pierre-Yves Oudeyer"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GGCb8ZA9Za | @inproceedings{
cheng2023learning,
title={Learning Diverse Skills for Local Navigation under Multi-constraint Optimality},
author={Jin Cheng and Marin Vlastelica and Pavel Kolev and Chenhao Li and Georg Martius},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=GGCb8ZA9Za}
} | Despite many successful applications of data-driven control in robotics, extracting meaningful diverse behaviors remains a challenge. Typically, task performance needs to be compromised in order to achieve diversity. In many scenarios, task requirements are specified as a multitude of reward terms, each requiring a different trade-off. In this work, we take a constrained optimization viewpoint on the quality-diversity trade-off and show that we can obtain diverse policies while imposing constraints on their value functions which are defined through distinct rewards. In line with previous work, further control of the diversity level can be achieved through an attract-repel reward term motivated by the Van der Waals force. We demonstrate the effectiveness of our method on a local navigation task where a quadruped robot needs to reach the target within a finite horizon. Finally, our trained policies transfer well to the real 12-DoF quadruped robot, Solo12, and exhibit diverse agile behaviors with successful obstacle traversal. | Learning Diverse Skills for Local Navigation under Multi-constraint Optimality | [
"Jin Cheng",
"Marin Vlastelica",
"Pavel Kolev",
"Chenhao Li",
"Georg Martius"
] | Workshop/IMOL | 2310.02440 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=E0LWTN1xPX | @inproceedings{
hugessen2023surpriseadaptive,
title={Surprise-Adaptive Intrinsic Motivation for Unsupervised Reinforcement Learning},
author={Adriana Hugessen and Roger Creus Castanyer and Glen Berseth},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=E0LWTN1xPX}
} | Both surprise-minimizing and surprise-maximizing (curiosity) objectives for unsupervised reinforcement learning (RL) have been shown to be effective in different environments, depending on the environment's level of natural entropy. However, neither method can perform well across all entropy regimes. In an effort to find a single surprise-based method that will encourage emergent behaviors in any environment, we propose an agent that can adapt its objective depending on the entropy conditions in its environment by framing the choice as a multi-armed bandit problem. We devise a novel intrinsic feedback signal for the bandit, which captures the agent's ability to control the entropy in its environment. We demonstrate that such agents can learn to control entropy and exhibit emergent behaviors in both high- and low-entropy regimes. | Surprise-Adaptive Intrinsic Motivation for Unsupervised Reinforcement Learning | [
"Adriana Hugessen",
"Roger Creus Castanyer",
"Glen Berseth"
] | Workshop/IMOL | 2405.17243 | [
"https://github.com/roger-creus/surprise-adaptive-agents"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=DY5RK2ZsPn | @inproceedings{
raz2023modeling,
title={Modeling habituation in infants and adults using rational curiosity over perceptual embeddings},
author={Gal Raz and Anjie Cao and Rebecca Saxe and Michael Frank},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=DY5RK2ZsPn}
} | From birth, human infants engage in intrinsically motivated, open-ended learning, mainly by deciding what to attend to and for how long. Yet, existing formal models of the drivers of looking are very limited in scope. To address this, we present a new version of the Rational Action, Noisy Choice for Habituation (RANCH) model. This version of RANCH is a stimulus-computable, rational learning model that decides how long to look at sequences of stimuli based on expected information gain (EIG). The model captures key patterns of looking time documented in the literature, habituation and dishabituation. We evaluate RANCH quantitatively using large datasets from adult and infant looking time experiments. We argue that looking time in our experiments is well described by RANCH, and that RANCH is a general, interpretable and modifiable framework for the rational analyses of intrinsically motivated learning by looking. | Modeling habituation in infants and adults using rational curiosity over perceptual embeddings | [
"Gal Raz",
"Anjie Cao",
"Rebecca Saxe",
"Michael Frank"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=80AIhHwAdw | @inproceedings{
davidson2023generating,
title={Generating Human-Like Goals by Synthesizing Reward-Producing Programs},
author={Guy Davidson and Graham Todd and Todd Gureckis and Julian Togelius and Brenden Lake},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=80AIhHwAdw}
} | Humans show a remarkable capacity to generate novel goals, for learning and play alike, and modeling this human capacity would be a valuable step toward more generally-capable artificial agents. We describe a computational model for generating novel human-like goals represented in a domain-specific language (DSL). We learn a ‘human-likeness’ fitness function over expressions in this DSL from a small (<100 game) human dataset collected in an online experiment. We then use a Quality-Diversity (QD) approach to generate a variety of human-like games with different characteristics and high fitness. We demonstrate that our method can generate synthetic games that are syntactically coherent under the DSL, semantically sensible with respect to environmental objects and their affordances, but distinct from human games in the training set. We discuss key components of our model and its current shortcomings, in the hope that this work helps inspire progress toward self-directed agents with human-like goals. | Generating Human-Like Goals by Synthesizing Reward-Producing Programs | [
"Guy Davidson",
"Graham Todd",
"Todd Gureckis",
"Julian Togelius",
"Brenden Lake"
] | Workshop/IMOL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=4w6a2QrGws | @inproceedings{
ma2023generative,
title={Generative Intrinsic Optimization: Intrinsic Control with Model Learning},
author={Jianfei Ma},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=4w6a2QrGws}
} | Future sequence represents the outcome after executing the action into the environment (i.e. the trajectory onwards). When driven by the information-theoretic concept of mutual information, it seeks maximally informative consequences. Explicit outcomes may vary across state, return, or trajectory serving different purposes such as credit assignment or imitation learning. However, the inherent nature of incorporating intrinsic motivation with reward maximization is often neglected. In this work, we propose a policy iteration scheme that seamlessly incorporates the mutual information, ensuring convergence to the optimal policy. Concurrently, a variational approach is introduced, which jointly learns the necessary quantity for estimating the mutual information and the dynamics model, providing a general framework for incorporating different forms of outcomes of interest. While we mainly focus on theoretical analysis, our approach opens the possibilities of leveraging intrinsic control with model learning to enhance sample efficiency and incorporate uncertainty of the environment into decision-making. | Generative Intrinsic Optimization: Intrinsic Control with Model Learning | [
"Jianfei Ma"
] | Workshop/IMOL | 2310.08100 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=4gYLottfsf | @inproceedings{
grand2023learning,
title={Learning Interpretable Libraries by Compressing and Documenting Code},
author={Gabriel Grand and Lionel Wong and Matthew Bowers and Theo X. Olausson and Muxin Liu and Joshua B. Tenenbaum and Jacob Andreas},
booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023},
year={2023},
url={https://openreview.net/forum?id=4gYLottfsf}
} | While large language models (LLMs) now excel at code generation, a key aspect of software development is the art of refactoring: consolidating code into libraries of reusable and readable programs. In this paper, we introduce LILO, a neurosymbolic framework that iteratively synthesizes, compresses, and documents code to build libraries tailored to particular problem domains. LILO combines LLM-guided program synthesis with recent algorithmic advances in automated refactoring from Stitch: a symbolic compression system that efficiently identifies optimal lambda abstractions across large code corpora. To make these abstractions interpretable, we introduce an auto-documentation (AutoDoc) procedure that infers natural language names and docstrings based on contextual examples of usage. In addition to improving human readability, we find that AutoDoc boosts performance by helping LILO's synthesizer to interpret and deploy learned abstractions. We evaluate LILO on three inductive program synthesis benchmarks for string editing, scene reasoning, and graphics composition. Compared to existing neural and symbolic methods—including the state-of-the-art library learning algorithm DreamCoder—LILO solves more complex tasks and learns richer libraries that are grounded in linguistic knowledge. | Learning Interpretable Libraries by Compressing and Documenting Code | [
"Gabriel Grand",
"Lionel Wong",
"Matthew Bowers",
"Theo X. Olausson",
"Muxin Liu",
"Joshua B. Tenenbaum",
"Jacob Andreas"
] | Workshop/IMOL | [
"https://github.com/gabegrand/lilo"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=vC2lHVahks | @inproceedings{
leshkowitz2023an,
title={An Information-Theoretic Approach to Cognitive Dimension Reduction},
author={Maya Leshkowitz},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=vC2lHVahks}
} | We introduce Cognitive Dimension Reduction (CDR), a model that sheds light on how individuals simplify the multidimensional world to guide decision-making and comprehension. Our proposal posits that cognitive limitations prompt the adoption of simplified models, reducing the environment to a subset of dimensions. Within these limitations, we propose that individuals exploit both environment structure and goal relevance. Employing Information Theory, we formalize these principles and develop a model that explains how environmental and cognitive factors influence dimension reduction. Furthermore, we present an experimental method for CDR assessment and initial findings that support it. | An Information-Theoretic Approach to Cognitive Dimension Reduction | [
"Maya Leshkowitz"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=v6UKrdxk5n | @inproceedings{
taylor-davies2023balancing,
title={Balancing utility and cognitive cost in social representation},
author={Max Taylor-Davies and Christopher Lucas},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=v6UKrdxk5n}
} | To successfully navigate its environment, an agent must construct and maintain representations of the other agents that it encounters. Such representations are useful for many tasks, but they are not without cost. As a result, agents must make decisions regarding how much information they choose to store about the agents in their environment. Using selective social learning as an example task, we motivate the problem of finding agent representations that optimally trade off between downstream utility and information cost, and illustrate two example approaches to resource-constrained social representation. | Balancing utility and cognitive cost in social representation | [
"Max Taylor-Davies",
"Christopher Lucas"
] | Workshop/InfoCog | 2310.04852 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=udEjq72DFO | @inproceedings{
he2023informationtheoretic,
title={Information-Theoretic Generalization Bounds for Deep Neural Networks},
author={Haiyun He and Christina Yu and Ziv Goldfeld},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=udEjq72DFO}
} | Deep neural networks (DNNs) exhibit an exceptional capacity for generalization in practical applications. This work aims to capture the effect and benefits of depth for learning within the paradigm of information-theoretic generalization bounds. We derive two novel hierarchical bounds on the generalization error that capture the effect of the internal representations within each layer. The first bound demonstrates that the generalization bound shrinks as the layer index of the internal representation increases. The second bound aims to quantify the contraction of the relevant information measures when moving deeper into the network. To achieve this, we leverage the strong data processing inequality (SDPI) and employ a stochastic approximation of the DNN model we can explicitly control the SDPI coefficient. These results provide a new perspective for understanding generalization in deep models. | Information-Theoretic Generalization Bounds for Deep Neural Networks | [
"Haiyun He",
"Christina Yu",
"Ziv Goldfeld"
] | Workshop/InfoCog | 2404.03176 | [
""
] | https://huggingface.co/papers/2404.03176 | 0 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=sE5sFNgVxe | @inproceedings{
weingarten2023a,
title={A Work in Progress: Tighter Bounds on the Information Bottleneck for Deep Learning},
author={Nir Weingarten and Moshe Butman and Ran Gilad-Bachrach},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=sE5sFNgVxe}
} | The field of Deep Neural Nets (DNNs) is still evolving and new architectures are emerging to better extract information from available data. The Information Bottleneck, IB, offers an optimal information theoretic framework for data modeling. However, IB is intractable in most settings. In recent years attempts were made to combine deep learning with IB both for optimization and to explain the inner workings of deep neural nets. VAE inspired variational approximations such as VIB became a popular method to approximate bounds on the required mutual information computations. This work continues this direction by introducing a new tractable variational upper bound for the IB functional which is empirically tighter than previous bounds. When used as an objective function it enhances the performance of previous IB-inspired DNNs in terms of test accuracy and robustness to adversarial attacks across several challenging tasks. Furthermore, the utilization of information theoretic tools allows us to analyze experiments and confirm theoretical predictions in real world problems. | A Work in Progress: Tighter Bounds on the Information Bottleneck for Deep Learning | [
"Nir Weingarten",
"Moshe Butman",
"Ran Gilad-Bachrach"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=r2CDuDonZY | @inproceedings{
hecht2023finding,
title={Finding Relevant Information in Saliency Related Neural Networks},
author={Ron Moshe Hecht and Gershon Celniker and Ronit Bustin and Dan Levi and Ariel Telpaz and Omer Tsimhoni and Ke Liu},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=r2CDuDonZY}
} | Over the last few years, many saliency models have shifted to using Deep Learning (DL). DL models can be viewed in this context as a double-edged sword. On the one hand, they boost estimation performance and on the other hand have less explanatory power than more explicit models. This drop in explanatory power is why DL models are often dubbed implicit models. Explainable AI (XAI) techniques have been formulated to address this shortfall. They work by extracting information from the network and explaining it. Here, we demonstrate the effectiveness of the Relevant Information Approach in accounting for saliency networks. We apply this approach to saliency models based on explicit algorithms when represented as neural networks. These networks are known to contain relevant information in their neurons. We estimate the relevant information of each neuron by capturing the relevant information with respect to first layer features (intensity, red, blue) and its higher-level manipulations. We measure relevant information by using Mutual Information (MI) between quantified features and the label. These experiments were conducted on a subset of the CAT2000 dataset. | Finding Relevant Information in Saliency Related Neural Networks | [
"Ron Moshe Hecht",
"Gershon Celniker",
"Ronit Bustin",
"Dan Levi",
"Ariel Telpaz",
"Omer Tsimhoni",
"Ke Liu"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=msQGnw5mtw | @inproceedings{
bonnasse-gahot2023information,
title={Information theoretic study of the neural geometry induced by category learning},
author={Laurent Bonnasse-Gahot and Jean-Pierre Nadal},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=msQGnw5mtw}
} | Categorization is an important topic both for biological and artificial neural networks. Here, we take an information theoretic approach to assess the efficiency of the representations induced by category learning. We show that one can decompose the relevant Bayesian cost into two components, one for the coding part and one for the decoding part. Minimizing the coding cost implies maximizing the mutual information between the set of categories and the neural activities. We analytically show that this mutual information can be written as the sum of two terms that can be interpreted as (i) finding an appropriate representation space, and, (ii) building a representation with the appropriate metrics, based on the neural Fisher information on this space. One main consequence is that category learning induces an expansion of neural space near decision boundaries. Finally, we provide numerical illustrations that show how Fisher information of the coding neural population aligns with the boundaries between categories. | Information theoretic study of the neural geometry induced by category learning | [
"Laurent Bonnasse-Gahot",
"Jean-Pierre Nadal"
] | Workshop/InfoCog | 2311.15682 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=iPdTZGVNkd | @inproceedings{
kinney2023lossy,
title={Lossy Compression and the Granularity of Causal Representation},
author={David Kinney and Tania Lombrozo},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=iPdTZGVNkd}
} | A given causal system can be represented in a variety of ways. How do agents determine which variables to include in their causal representations, and at what level of granularity? Using techniques from information theory, we develop a formal theory according to which causal representations reflect a trade-off between compression and informativeness. We then show, across three studies (N=1,391), that participants’ choices over causal models demonstrate a preference for more compressed causal models when all other factors are held fixed, with some further tolerance for lossy compressions. | Lossy Compression and the Granularity of Causal Representation | [
"David Kinney",
"Tania Lombrozo"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=hRr0fNqnAP | @inproceedings{
xu2023one,
title={One if by land, two if by sea, three if by four seas, and more to come: values of perception, prediction, communication, and common sense in decision making},
author={Aolin Xu},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=hRr0fNqnAP}
} | This work is about rigorously defining the values of perception, prediction, communication, and common sense in decision making. The defined quantities are decision-theoretic, but have information-theoretic analogues, e.g., they share some simple but key mathematical properties with Shannon entropy and mutual information, and can reduce to these quantities in particular settings. One interesting observation is that, the value of perception without prediction can be negative, while the value of perception together with prediction and the value of prediction alone are always nonnegative. The defined quantities suggest answers to practical questions arising in the design of autonomous decision-making systems. Example questions include: Do we need to observe and predict the behavior of a particular agent? How important is it? What is the best order to observe and predict the agents? The defined quantities may also provide insights to cognitive science and neural science, toward the understanding of how natural decision makers make use of information gained from different sources and operations. | One if by land, two if by sea, three if by four seas, and more to come: values of perception, prediction, communication, and common sense in decision making | [
"Aolin Xu"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=fpmnptci7d | @inproceedings{
mahon2023information,
title={Information Flows Reveal Computational Mechanisms of {RNN}s in Contextual Decision-making},
author={Miles Mahon and Praveen Venkatesh},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=fpmnptci7d}
} | Understanding the information flow of different task-relevant messages within recurrent circuits is crucial to comprehending how the brain works, and in turn, for diagnosing and treating brain disorders.
While several information flow methods have focused on functional connectivity and modalities of communication, we do not yet have a principled approach for understanding what information flows can tell us about the effects of causal interventions.
In this paper, we consider a measure called $M$-information flow, proposed by Venkatesh et al. (2020), within an artificial recurrent network trained on a contextual decision-making task studied by Mante et al. (2013).
We show that $M$-information flow recapitulates the dynamics of information integration, showing specialization of individual units, and revealing how context information is incorporated to select the appropriate response without affecting the underlying circuit dynamics.
We also show how $M$-information flow predicts the ``behavioral outcome'' of causal interventions within the network.
This leads us to believe that understanding $M$-information flow within a recurrent network can inform the design of intervention studies, and in future, of stimulation-based treatments for brain disorders. | Information Flows Reveal Computational Mechanisms of RNNs in Contextual Decision-making | [
"Miles Mahon",
"Praveen Venkatesh"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=euzT4OcP2m | @inproceedings{
freirich2023the,
title={The Distortion-Perception Tradeoff in Finite Channels with Arbitrary Distortion Measures},
author={Dror Freirich and Nir Weinberger and Ron Meir},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=euzT4OcP2m}
} | Whenever inspected by humans, reconstructed signals should be indistinguishable from real ones. Typically, such a high perceptual quality comes at the price of high reconstruction error.
We study this distortion-perception (DP) tradeoff over finite-alphabet channels, for the Wasserstein-$1$ distance as the perception index, and an arbitrary distortion matrix. We show that computing the DP function and the optimal
reconstructions is equivalent to solving a set of linear programming problems. We prove that the DP curve is a piecewise linear function of the perception index, and derive a closed-form expression for the case of binary sources. | The Distortion-Perception Tradeoff in Finite Channels with Arbitrary Distortion Measures | [
"Dror Freirich",
"Nir Weinberger",
"Ron Meir"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=eAE32Quqe7 | @inproceedings{
johnson2023decision,
title={Decision confidence reflects maximum entropy reinforcement learning},
author={Amelia Johnson and Michael Buice and Koosha Khalvati},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=eAE32Quqe7}
} | Current computational models have not been able to account for the effect of reward in confidence reports among humans. Here we propose a mathematical framework of confidence that is able to generalize across various decision making tasks involving varying prior and reward distributions. This framework proposes a formal definition of "decision confidence" through the concept of soft optimality. We further show that the objective function in this framework is jointly maximising the reward and information entropy of the policy. We confirm the validity of our framework by testing it on a data gathered under various task conditions. | Decision confidence reflects maximum entropy reinforcement learning | [
"Amelia Johnson",
"Michael Buice",
"Koosha Khalvati"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=cFzQdE6DA3 | @inproceedings{
nomura2023optimum,
title={Optimum Self-Random Number Generation Rate and Its Application to the Rate-Distortion-Perception-Problem},
author={Ryo Nomura},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=cFzQdE6DA3}
} | In this paper, we consider the rate-distortion-perception (RDP) problem with respect to $f$-divergences from the viewpoint of information-theoretic random number generation.
First, we address the self-random number generation problem, which is a subproblem of the RDP problem, and derive the general formula for the optimum achievable rate.
Then, we apply our findings to the RDP problem. | Optimum Self-Random Number Generation Rate and Its Application to the Rate-Distortion-Perception-Problem | [
"Ryo Nomura"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ZgMRaX02ck | @inproceedings{
li2023aberrant,
title={Aberrant High-Order Dependencies in Schizophrenia Resting-State Functional {MRI} Networks},
author={Qiang LI and Vince Calhoun and Adithya Ram Ballem and Shujian Yu and Jesus Malo and Armin Iraji},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=ZgMRaX02ck}
} | The human brain has a complex, intricate functional architecture. While many studies primarily emphasize pairwise interactions, delving into high-order associations is crucial for a comprehensive understanding of how functional brain networks intricately interact beyond simple pairwise connections. Analyzing high-order statistics allows us to explore the nuanced and complex relationships across the brain, unraveling the heterogeneity and uncovering patterns of multilevel overlap on the psychosis continuum. Here, we employed high-order independent component analysis (ICA) plus multivariate information-theoretical metrics ($O$-information and $S$-information) to estimate high-order interaction to examine schizophrenia using resting-state fMRI. The results show that multiple brain regions networks may be altered in schizophrenia, such as temporal, subcortical, and higher-cognitive brain regions, and meanwhile, it also shows that revealed synergy gives more information than redundancy in diagnosing schizophrenia. All in all, we showed that high-order dependencies were altered in schizophrenia. Identification of these aberrant patterns will give us a new window to diagnose schizophrenia. | Aberrant High-Order Dependencies in Schizophrenia Resting-State Functional MRI Networks | [
"Qiang LI",
"Vince Calhoun",
"Adithya Ram Ballem",
"Shujian Yu",
"Jesus Malo",
"Armin Iraji"
] | Workshop/InfoCog | 2310.17445 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=UULIa4QFii | @inproceedings{
cohen2023the,
title={The Perception-Uncertainty Tradeoff in Generative Restoration Models},
author={Regev Cohen and Ehud Rivlin and Daniel Freedman},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=UULIa4QFii}
} | Generative models have achieved remarkable performance in restoration tasks, producing results nearly indistinguishable from real data. However, they are prone to generating artifacts or hallucinations not present in the original input, inducing estimation uncertainty. Notably, the extent of hallucination seems to increase with the perceptual quality of the generative model. This paper explores this phenomenon using information-theoretic tools to uncover an inherent tradeoff between perception and uncertainty. Our mathematical analysis shows that the uncertainty of the restoration algorithm, as measured by error entropy, grows in tandem with the improvement in perceptual quality. Employing Rényi divergence as a perception measure, we derive lower and upper bounds for the tradeoff, locating estimators into distinct performance categories. Furthermore, we establish a relationship between estimation distortion and uncertainty, through which we provide a fresh perspective on the perception-distortion tradeoff. Our work presents a principled analysis of uncertainty, emphasizing its interplay with perception and distortion, and the limitations of generative models in restoration tasks. | The Perception-Uncertainty Tradeoff in Generative Restoration Models | [
"Regev Cohen",
"Ehud Rivlin",
"Daniel Freedman"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=Sr2mVydu4r | @inproceedings{
sergeant-perthuis2023influence,
title={Influence of the geometry of the feature space on curiosity based exploration},
author={Gr{\'e}goire Sergeant-Perthuis and Nils Ruet and David Rudrauf and Dimitri Ognibene and Yvain Tisserand},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=Sr2mVydu4r}
} | In human spatial awareness, information appears to be represented according to 3-D projective geometry. It structures information integration and action planning within an internal representation space.
The way different first-person perspectives of an agent relate to each other, through transformations of a world model, defines a specific perception scheme for the agent. This collection of transformations forms a group, and it characterizes a geometric space by acting on it. We propose that imbuing world models with a 'geometric' structure, given by a group acting on the space, is one way to capture different perception schemes of agents.
We explore how changing the geometric structure of a world model impacts the behavior of an agent. In particular, we focus on how such geometrical operations transform the formal expression of epistemic value (mutual information), a quantity known in active inference for driving an agent's curiosity about its environment, and the impact on exploration behaviors accordingly. We used group action as a special class of policies for perspective-dependent control. We compared the Euclidean versus projective groups. We formally demonstrate that the groups induce distinct behaviors. | Influence of the geometry of the feature space on curiosity based exploration | [
"Grégoire Sergeant-Perthuis",
"Nils Ruet",
"David Rudrauf",
"Dimitri Ognibene",
"Yvain Tisserand"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=SosbRhZLBV | @inproceedings{
carenini2023large,
title={Large Language Models Behave (Almost) As Rational Speech Actors: Insights From Metaphor Understanding},
author={Gaia Carenini and Louis Bodot and Luca Bischetti and Walter Schaeken and Valentina Bambini},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=SosbRhZLBV}
} | What are the inner workings of large language models? Can they perform pragmatic inference? This paper attempts to characterize from a mathematical angle the processes of large language models involved in metaphor understanding. Specifically, we show that GPT2-XL model’s reasoning mechanisms can be well predicted within the Rational Speech Act framework for metaphor understanding, which has
already been used to grasp the principles of human pragmatic inference in dealing with figurative language. Our research contributes to the field of explainability and interpretability of large language models and highlights the usefulness of adopting a Bayesian model of human cognition to gain insights into the pragmatics of conversational agents. | Large Language Models Behave (Almost) As Rational Speech Actors: Insights From Metaphor Understanding | [
"Gaia Carenini",
"Louis Bodot",
"Luca Bischetti",
"Walter Schaeken",
"Valentina Bambini"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=SeQG0GCtBh | @inproceedings{
bucher2023cognitive,
title={Cognitive Information Filters: Algorithmic Choice Architecture for Boundedly Rational Choosers},
author={Stefan Bucher and Peter Dayan},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=SeQG0GCtBh}
} | We introduce cognitive information filters as an algorithmic approach to mitigating information overload using choice architecture: We develop a rational inattention model of boundedly rational multi-attribute choice and leverage it to programmatically select information that is effective in inducing desirable behavioral outcomes. By inferring preferences and cognitive constraints from boundedly rational behavior, our methodology can optimize for revealed welfare and hence promises better alignment with boundedly rational users than recommender systems optimizing for imperfect welfare proxies such as engagement. This has implications beyond economics, for example for alignment research in artificial intelligence. | Cognitive Information Filters: Algorithmic Choice Architecture for Boundedly Rational Choosers | [
"Stefan Bucher",
"Peter Dayan"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=QuVQS62tCp | @inproceedings{
nam2023discrete,
title={Discrete, compositional, and symbolic representations through attractor dynamics},
author={Andrew Nam and Eric Elmoznino and Nikolay Malkin and Chen Sun and Yoshua Bengio and Guillaume Lajoie},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=QuVQS62tCp}
} | Compositionality is an important feature of discrete symbolic systems, such as language and programs, as it enables them to have infinite capacity despite a finite symbol set. It serves as a useful abstraction for reasoning in both cognitive science and in AI, yet the interface between continuous and symbolic processing is often imposed by fiat at the algorithmic level, such as by means of quantization or a softmax sampling step. In this work, we explore how discretization could be implemented in a more neurally plausible manner through the modeling of attractor dynamics that partition the continuous representation space into basins that correspond to sequences of symbols. Building on established work in attractor networks and introducing novel training methods, we show that imposing structure in the symbolic space can produce compositionality in the attractor-supported representation space of rich sensory inputs. Lastly, we argue that our model exhibits the process of an information bottleneck that is thought to play a role in conscious experience, decomposing the rich information of a sensory input into stable components encoding symbolic information. | Discrete, compositional, and symbolic representations through attractor dynamics | [
"Andrew Nam",
"Eric Elmoznino",
"Nikolay Malkin",
"Chen Sun",
"Yoshua Bengio",
"Guillaume Lajoie"
] | Workshop/InfoCog | 2310.01807 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=PfdpAPRxf5 | @inproceedings{
sharafeldin2023active,
title={Active Vision with Predictive Coding and Uncertainty Minimization},
author={Abdelrahman Sharafeldin and Nabil Imam and Hannah Choi},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=PfdpAPRxf5}
} | We present an end-to-end procedure for embodied visual exploration based on two biologically inspired computations: predictive coding and uncertainty minimization. The procedure can be applied in a task-independent and intrinsically driven manner. We evaluate our approach on an active vision task, where an agent actively samples its visual environment to gather information. We show that our model builds unsupervised representations through exploration that allow it to efficiently categorize visual scenes. We further show that using these representations for downstream classification leads to superior data efficiency and learning speed compared to other baselines while maintaining lower parameter complexity. Finally, the modularity of our model allows us to probe its internal mechanisms and analyze the interaction between perception and action during exploratory behavior. | Active Vision with Predictive Coding and Uncertainty Minimization | [
"Abdelrahman Sharafeldin",
"Nabil Imam",
"Hannah Choi"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=OcHrsQox0Z | @inproceedings{
moreno-bote2023empowerment,
title={Empowerment, Free Energy Principle and Maximum Occupancy Principle Compared},
author={Rub{\'e}n Moreno-Bote and Jorge Ramirez-Ruiz},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=OcHrsQox0Z}
} | While the objective of reward maximization in reinforcement learning has led to impressive achievements in several games and artificial environments, animals seem to be driven by intrinsic signals, such as curiosity, that are not purely extrinsic.
Several reward-free approaches have emerged in the fields of cognitive neuroscience and artificial intelligence that primarily make use of signals different from extrinsic rewards to guide exploration and ultimately drive behavior, but a comparison between these approaches is lacking.
Here we focus on two popular reward-free approaches, known as empowerment (MPOW) and the free energy principle (FEP), and a recently developed one, the maximum occupancy principle (MOP), and compare them in sequential problems and fully observable environments.
We find that MPOW shows a preference for unstable fixed points of the dynamical system that defines the agent and environment.
FEP is shown to be equivalent to reward maximization in certain cases.
Neither of these two principles of behavior seems to consistently generate variable behavior: behavior collapses into a small repertoire of possible action-state trajectories or fixed points. Collapse to an optimal deterministic policy can be proved in specific, recent implementations of FEP, with the only exception of policy degeneracy due to ties.
In contrast, MOP consistently generates variable action-state trajectories.
In two simple environments, a balancing cartpole and a grid world, we find that both MPOW and FEP agents stick to a relatively small set of states and actions, while MOP agents generate exploratory, dancing-like motions. | Empowerment, Free Energy Principle and Maximum Occupancy Principle Compared | [
"Rubén Moreno-Bote",
"Jorge Ramirez-Ruiz"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=KeWNER68iP | @inproceedings{
schaeffer2023an,
title={An Information-Theoretic Understanding of Maximum Manifold Capacity Representations},
author={Rylan Schaeffer and Berivan Isik and Victor Lecomte and Mikail Khona and Yann LeCun and Andrey Gromov and Ravid Shwartz-Ziv and Sanmi Koyejo},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=KeWNER68iP}
} | Maximum Manifold Capacity Representations (MMCR) is a recent multi-view self-supervised learning (MVSSL) method that matches or surpasses other leading MVSSL methods. MMCR is interesting for at least two reasons. Firstly, MMCR is an oddity in the zoo of MVSSL methods: it is not (explicitly) contrastive, applies no masking, performs no clustering, leverages no distillation, and does not (explicitly) reduce redundancy. Secondly, while many self-supervised learning (SSL) methods originate in information theory, MMCR distinguishes itself by claiming a different origin: a statistical mechanical characterization of the geometry of linear separability of data manifolds. However, given the rich connections between statistical mechanics and information theory, and given recent work showing how many SSL methods can be understood from an information-theoretic perspective, we conjecture that MMCR can be similarly understood from an information-theoretic perspective. In this paper, we leverage tools from high dimensional probability and information theory to demonstrate that an optimal solution to MMCR's nuclear norm-based objective function is the same optimal solution that maximizes a well-known lower bound on mutual information between views. | An Information-Theoretic Understanding of Maximum Manifold Capacity Representations | [
"Rylan Schaeffer",
"Berivan Isik",
"Victor Lecomte",
"Mikail Khona",
"Yann LeCun",
"Andrey Gromov",
"Ravid Shwartz-Ziv",
"Sanmi Koyejo"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=JAueaO7x1y | @inproceedings{
haber2023unsupervised,
title={Unsupervised estimation of ensemble accuracy},
author={Simi Haber and Yonatan Wexler},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=JAueaO7x1y}
} | Ensemble learning combines several individual models to obtain better generalization performance. In this work we present a practical method for estimating the joint power of several classifiers. It differs from existing approaches, which focus on "diversity" measures, by not relying on labels. This makes it both accurate and practical in the modern setting of unsupervised learning with huge datasets.
The heart of the method is a combinatorial bound on the number of mistakes the ensemble is likely to make. The bound can be efficiently approximated in time linear in the number of samples. We relate the bound to actual misclassifications, hence its usefulness as a predictor of performance.
We demonstrate the method on popular large-scale face recognition datasets which provide a useful playground for fine-grain classification tasks using noisy data over many classes. | Unsupervised estimation of ensemble accuracy | [
"Simi Haber",
"Yonatan Wexler"
] | Workshop/InfoCog | 2311.10940 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=I4XOc9n4E2 | @inproceedings{
liu2023attention,
title={Attention Schema in Neural Agents},
author={Dianbo Liu and Samuele Bolotta and Mike He Zhu and Zahra Sheikhbahaee and Yoshua Bengio and Guillaume Dumas},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=I4XOc9n4E2}
} | Attention has become a common ingredient in deep learning architectures. It adds a dynamical selection of information on top of the static selection of information supported by weights. In the same way, we can imagine a higher-order informational filter built on top of attention: an Attention Schema (AS), namely, a descriptive and predictive model of attention. In cognitive neuroscience, Attention Schema Theory (AST) supports this idea of distinguishing attention from AS. A strong prediction of this theory is that an agent can use its own AS to also infer the states of other agents' attention and consequently enhance coordination with other agents. As such, multi-agent reinforcement learning would be an ideal setting to experimentally test the validity of AST. We explore different ways in which attention and AS interact with each other. Our preliminary results indicate that agents that implement the AS as a recurrent internal control achieve the best performance. In general, these exploratory experiments suggest that equipping artificial agents with a model of attention can enhance their social intelligence. | Attention Schema in Neural Agents | [
"Dianbo Liu",
"Samuele Bolotta",
"Mike He Zhu",
"Zahra Sheikhbahaee",
"Yoshua Bengio",
"Guillaume Dumas"
] | Workshop/InfoCog | 2305.17375 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=DERzRHLO5l | @inproceedings{
amir2023states,
title={States as goal-directed concepts: an epistemic approach to state-representation learning},
author={Nadav Amir and Yael Niv and Angela Langdon},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=DERzRHLO5l}
} | Our goals fundamentally shape how we experience the world. For example, when we are hungry, we tend to view objects in our environment according to whether or not they are edible (or tasty). Alternatively, when we are cold, we may view the very same objects according to their ability to produce heat. Computational theories of learning in cognitive systems, such as reinforcement learning, use the notion of "state-representation" to describe how agents decide which features of their environment are behaviorally-relevant and which can be ignored. However, these approaches typically assume "ground-truth" state representations that are known by the agent, and reward functions that need to be learned. Here we suggest an alternative approach in which state-representations are not assumed veridical, or even pre-defined, but rather emerge from the agent's goals through interaction with its environment. We illustrate this novel perspective by inferring the goals driving rat behavior in an odor-guided choice task and discuss its implications for developing, from first principles, an information-theoretic account of goal-directed state representation learning and behavior. | States as goal-directed concepts: an epistemic approach to state-representation learning | [
"Nadav Amir",
"Yael Niv",
"Angela Langdon"
] | Workshop/InfoCog | 2312.02367 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=9oEXdXmMNr | @inproceedings{
imel2023noisy,
title={Noisy Population Dynamics Lead to Efficiently Compressed Semantic Systems},
author={Nathaniel Imel and Richard Futrell and Michael Franke and Noga Zaslavsky},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=9oEXdXmMNr}
} | Converging cross-linguistic evidence suggests that human vocabularies are shaped for efficient communication, but we know little about the agent-based dynamics that could explain their evolution. In this paper, we show that very general population dynamics of signaling games lead to the emergence of information-theoretically efficient meaning systems. In numerical simulations, we observe that noisy perception of meaning can result in evolved systems with higher efficiency. | Noisy Population Dynamics Lead to Efficiently Compressed Semantic Systems | [
"Nathaniel Imel",
"Richard Futrell",
"Michael Franke",
"Noga Zaslavsky"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=6nszHbR0Qh | @inproceedings{
khajehnejad2023on,
title={On Complex Network Dynamics of an In-Vitro Neuronal System during Rest and Gameplay},
author={Moein Khajehnejad and Forough Habibollahi and Alon Loeffler and Brett Kagan and Adeel Razi},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=6nszHbR0Qh}
} | In this study, we characterize complex network dynamics in live in vitro neuronal systems during two distinct activity states: spontaneous rest state and engagement in a real-time (closed-loop) game environment using the DishBrain system.
First, we embed the spiking activity of these channels in a lower-dimensional space using various representation learning methods and then extract a subset of representative channels.
Next, by analyzing these low-dimensional representations, we explore the patterns of macroscopic neuronal network dynamics during learning. Remarkably, our findings indicate that just using the low-dimensional embedding of representative channels is sufficient to differentiate the neuronal culture's activity between the Rest and Gameplay states.
Notably, our investigation shows dynamic changes in the connectivity patterns within the same region and across multiple regions on the multi-electrode array only during Gameplay. These findings underscore the plasticity of neuronal networks in response to external stimuli and highlight the potential for modulating connectivity in a controlled environment.
The ability to distinguish between neuronal states using reduced-dimensional representations points to the presence of underlying patterns that could be pivotal for real-time monitoring and manipulation of neuronal cultures.
Additionally, this provides insight into how biologically based information-processing systems rapidly adapt and learn, and may lead to new, improved algorithms. | On Complex Network Dynamics of an In-Vitro Neuronal System during Rest and Gameplay | [
"Moein Khajehnejad",
"Forough Habibollahi",
"Alon Loeffler",
"Brett Kagan",
"Adeel Razi"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=6K8c90L2mM | @inproceedings{
schweighofer2023introducing,
title={Introducing an Improved Information-Theoretic Measure of Predictive Uncertainty},
author={Kajetan Schweighofer and Lukas Aichberger and Mykyta Ielanskyi and Sepp Hochreiter},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=6K8c90L2mM}
} | Applying a machine learning model for decision-making in the real world requires distinguishing what the model knows from what it does not. A critical factor in assessing the knowledge of a model is to quantify its predictive uncertainty. Predictive uncertainty is commonly measured by the entropy of the Bayesian model average (BMA) predictive distribution. Yet, the properness of this current measure of predictive uncertainty was recently questioned. We provide new insights regarding those limitations. Our analyses show that the current measure erroneously assumes that the BMA predictive distribution is equivalent to the predictive distribution of the true model that generated the dataset. Consequently, we introduce a theoretically grounded measure to overcome these limitations. We experimentally verify the benefits of our introduced measure of predictive uncertainty. We find that our introduced measure behaves more reasonably in controlled synthetic tasks. Moreover, our evaluations on ImageNet demonstrate that our introduced measure is advantageous in real-world applications utilizing predictive uncertainty. | Introducing an Improved Information-Theoretic Measure of Predictive Uncertainty | [
"Kajetan Schweighofer",
"Lukas Aichberger",
"Mykyta Ielanskyi",
"Sepp Hochreiter"
] | Workshop/InfoCog | 2311.08309 | [
""
] | https://huggingface.co/papers/2311.08309 | 1 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=5uU9TY2oME | @inproceedings{
du2023what,
title={What can {AI} Learn from Human Exploration? Intrinsically-Motivated Humans and Agents in Open-World Exploration},
author={Yuqing Du and Eliza Kosoy and Alyssa Dayan and Maria Rufova and Pieter Abbeel and Alison Gopnik},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=5uU9TY2oME}
} | What drives exploration? Understanding intrinsic motivation is a long-standing question in both cognitive science and artificial intelligence (AI); numerous exploration objectives have been proposed and tested in human experiments and used to train reinforcement learning (RL) agents. However, experiments in the former are often in simplistic environments that do not capture the complexity of real world exploration. On the other hand, experiments in the latter use more complex environments, yet the trained RL agents fail to come close to human exploration efficiency. To study this gap, we propose a framework for directly comparing human and agent exploration in an open-ended environment, Crafter. We study how well commonly-proposed information theoretic intrinsic objectives relate to actual human and agent behaviors, finding that human and intrinsically-motivated RL agent exploration success consistently show positive correlation with Entropy and Empowerment. However, only human exploration shows significant correlation with Information Gain. In a preliminary analysis of verbalizations, we find that children's verbalizations of goals show a strong positive correlation with Empowerment, suggesting that goal-setting may be an important aspect of efficient exploration. | What can AI Learn from Human Exploration? Intrinsically-Motivated Humans and Agents in Open-World Exploration | [
"Yuqing Du",
"Eliza Kosoy",
"Alyssa Dayan",
"Maria Rufova",
"Pieter Abbeel",
"Alison Gopnik"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=52jORfVm1S | @inproceedings{
futrell2023natural,
title={Natural Language Systematicity from a Constraint on Excess Entropy},
author={Richard Futrell},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=52jORfVm1S}
} | Natural language is systematic: utterances are composed of individually meaningful parts which are typically concatenated together. I argue that natural-language-like systematicity arises in codes when they are constrained by excess entropy, the mutual information between the past and the future of a process. In three examples, I show that codes with natural-language-like systematicity have lower excess entropy than matched alternatives. | Natural Language Systematicity from a Constraint on Excess Entropy | [
"Richard Futrell"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=4kkYR1kklR | @inproceedings{
kaplanis2023learning,
title={Learning Causally Emergent Representations},
author={Christos Kaplanis and Pedro Mediano and Fernando Rosas},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=4kkYR1kklR}
} | Cognitive processes usually take place at a macroscopic scale in systems characterised by emergent properties, which make the whole `more than the sum of its parts.' While recent proposals have provided quantitative, information-theoretic metrics to detect emergence in time series data, it is often highly non-trivial to identify the relevant macroscopic variables a priori. In this work we leverage recent advances in representation learning and differentiable information estimators to put forward a data-driven method to find emergent variables. The proposed method successfully detects emergent variables and recovers the ground-truth emergence values in a synthetic dataset. This proof-of-concept paves the way for future analyses uncovering the emergent structure of cognitive representations in biological and artificial intelligence systems. | Learning Causally Emergent Representations | [
"Christos Kaplanis",
"Pedro Mediano",
"Fernando Rosas"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=20z7qjzqJs | @inproceedings{
amand2023variable,
title={Variable Selection in {GPDM}s Using the Information Bottleneck Method},
author={Jesse St. Amand and Martin Giese},
booktitle={NeurIPS 2023 workshop: Information-Theoretic Principles in Cognitive Systems},
year={2023},
url={https://openreview.net/forum?id=20z7qjzqJs}
} | Accurate real-time models of human motion are important for applications in areas such as cognitive science and robotics. Neural networks are often the favored choice, yet their generalization properties are limited, particularly on small data sets. This paper utilizes the Gaussian process dynamical model (GPDM) as an alternative. Despite their successes in various motion tasks, GPDMs face challenges like high computational complexity and the need for many hyperparameters. This work addresses these issues by integrating the information bottleneck (IB) framework with GPDMs. The IB approach aims to optimally balance data fit and generalization through measures of mutual information. Our technique uses IB variable selection as a component of GPLVM back-constraints to reduce parameter count and to select features for latent space optimization, resulting in improved model accuracy. | Variable Selection in GPDMs Using the Information Bottleneck Method | [
"Jesse St. Amand",
"Martin Giese"
] | Workshop/InfoCog | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=zhES2B5mdv | @inproceedings{
anwar2023noisy,
title={Noisy {ZSC}: Breaking The Common Knowledge Assumption In Zero-Shot Coordination Games},
author={Usman Anwar and Jia Wan and David Krueger and Jakob Nicolaus Foerster},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=zhES2B5mdv}
} | Zero-shot coordination (ZSC) is a popular setting for studying the ability of AI agents to coordinate with novel partners. Prior formulations of ZSC make the assumption that the problem setting is common knowledge i.e. each agent has the knowledge of the underlying Dec-POMDP, every agent knows the others have this knowledge, and so on ad infinitum. However, in most real-world situations, different agents are likely to have different models of the (real world) environment, thus breaking this assumption. To address this limitation, we formulate the _noisy zero-shot coordination_ (NZSC) problem, where agents observe different noisy versions of the ground truth Dec-POMDP generated by passing the true Dec-POMDP through a noise model. Only the distribution of the ground truth Dec-POMDPs and the noise model are common knowledge. We show that any noisy ZSC problem can be reformulated as a ZSC problem by designing a meta-Dec-POMDP with an augmented state space consisting of both the ground truth Dec-POMDP and its corresponding state. In our experiments, we analyze various aspects of NZSC and show that achieving good performance in NZSC requires agents to make use of both the noisy observations of ground truth Dec-POMDP, knowledge of each other's noise models and their interactions with the ground truth Dec-POMDP. Through experimental results, we further establish that ignoring the noise in problem specification can result in sub-par ZSC coordination performance, especially in iterated scenarios. On the whole, our work highlights that NZSC adds an orthogonal challenge to traditional ZSC in tackling the uncertainty about the true problem. | Noisy ZSC: Breaking The Common Knowledge Assumption In Zero-Shot Coordination Games | [
"Usman Anwar",
"Jia Wan",
"David Krueger",
"Jakob Nicolaus Foerster"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=zaCQjtvSGG | @inproceedings{
niu2023stackelberg,
title={Stackelberg Driver Model for Continual Policy Improvement in Scenario-Based Closed-Loop Autonomous Driving},
author={Haoyi Niu and Qimao Chen and Yingyue Li and Yi ZHANG and Jianming HU},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=zaCQjtvSGG}
} | The deployment of autonomous vehicles (AVs) has faced hurdles due to the dominance of rare but critical corner cases within the long-tail distribution of driving scenarios, which negatively affects their overall performance. To address this challenge, adversarial generation methods have emerged as a class of efficient approaches to synthesize safety-critical scenarios for AV testing. However, these generated scenarios are often underutilized for AV training, resulting in the potential for continual AV policy improvement remaining untapped, along with a deficiency in the closed-loop design needed to achieve it. Therefore, we tailor the Stackelberg Driver Model (SDM) to accurately characterize the hierarchical nature of vehicle interaction dynamics, facilitating iterative improvement by engaging background vehicles (BVs) and AV in a sequential game-like interaction paradigm. With AV acting as the leader and BVs as followers, this leader-follower modeling ensures that AV would consistently refine its policy, always taking into account the additional information that BVs play the best response to challenge AV. Extensive experiments have shown that our algorithm exhibits superior performance compared to several baselines especially in higher dimensional scenarios, leading to substantial advancements in AV capabilities while continually generating progressively challenging scenarios. | Stackelberg Driver Model for Continual Policy Improvement in Scenario-Based Closed-Loop Autonomous Driving | [
"Haoyi Niu",
"Qimao Chen",
"Yingyue Li",
"Yi ZHANG",
"Jianming HU"
] | Workshop/ALOE | 2309.14235 | [
"https://github.com/BlueCat-de/SDM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=zHlQXFPUUy | @inproceedings{
sullivan2023syllabus,
title={Syllabus: Curriculum Learning Made Easy},
author={Ryan Sullivan},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=zHlQXFPUUy}
} | Curriculum learning has been a quiet yet crucial component of many of the high-profile successes of reinforcement learning. Despite this, none of the major reinforcement learning libraries support curriculum learning or include curriculum learning algorithms. Curriculum learning methods can provide general and complementary improvements to RL algorithms, but they often require significant, complex changes to agent training code. We introduce Syllabus, a library for training RL agents with curriculum learning, as a solution to this problem. Syllabus provides a universal API for implementing curriculum learning algorithms, a collection of implementations of popular curriculum learning methods, and infrastructure for easily integrating them into existing distributed RL code. Syllabus provides a clean API for each of the complex components of these methods, dramatically simplifying the process for designing new algorithms or applying existing algorithms to new environments. Syllabus also manages the multiprocessing communication required for curriculum learning, alleviating one of the key practical challenges of using these algorithms. We hope Syllabus will improve the process of developing and applying curriculum learning algorithms, and encourage widespread adoption of curriculum learning. | Syllabus: Curriculum Learning Made Easy | [
"Ryan Sullivan"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=yrsADvEk89 | @inproceedings{
diaz2023rethinking,
title={Rethinking Teacher-Student Curriculum Learning under the Cooperative Mechanics of Experience},
author={Manfred Diaz and Liam Paull and Andrea Tacchetti},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=yrsADvEk89}
} | Teacher-Student Curriculum Learning (TSCL) is a curriculum learning framework that draws inspiration from human cultural transmission and learning. It involves a teacher algorithm shaping the learning process of a learner algorithm by exposing it to controlled experiences. Despite its success, understanding the conditions under which TSCL is effective remains challenging. In this paper, we propose a data-centric perspective to analyze the underlying mechanics of the teacher-student interactions in TSCL. We leverage cooperative game theory to describe how the composition of the set of experiences presented by the teacher to the learner, as well as their order, influences the performance of the curricula found by TSCL approaches. To do so, we demonstrate that for every TSCL problem, there exists an equivalent cooperative game, and several key components of the TSCL framework can be reinterpreted using game-theoretic principles. Through experiments covering supervised learning, reinforcement learning, and classical games, we estimate the cooperative values of experiences and use value-proportional curriculum mechanisms to construct curricula, even in cases where TSCL struggles. The framework and experimental setup we present in this work represent a foundation that can be used for a deeper exploration of TSCL, shedding light on its underlying mechanisms and providing insights into its broader applicability in machine learning. | Rethinking Teacher-Student Curriculum Learning under the Cooperative Mechanics of Experience | [
"Manfred Diaz",
"Liam Paull",
"Andrea Tacchetti"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=yJRQXHpwd4 | @inproceedings{
samvelyan2023multiagent,
title={Multi-Agent Diagnostics for Robustness via Illuminated Diversity},
author={Mikayel Samvelyan and Davide Paglieri and Minqi Jiang and Jack Parker-Holder and Tim Rockt{\"a}schel},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=yJRQXHpwd4}
} | In the rapidly advancing field of multi-agent systems, ensuring robustness in unfamiliar and adversarial settings is crucial, particularly for those systems deployed in real-world scenarios. Notwithstanding their outstanding performance in familiar environments, these systems often falter in new situations due to overfitting during the training phase. This is especially pronounced in settings where both cooperative and competitive behaviours are present, encapsulating a dual nature of overfitting and generalisation challenges. To address this issue, we present Multi-Agent Diagnostics for Robustness via Illuminated Diversity (MADRID), a novel approach for systematically generating diverse adversarial scenarios that expose strategic vulnerabilities in pre-trained multi-agent policies. Leveraging the concepts from open-ended learning, MADRID navigates the vast space of adversarial settings, employing a target policy's regret to gauge the vulnerabilities of these settings. We evaluate the effectiveness of MADRID on the 11 vs 11 version of Google Research Football, one of the most complex environments for multi-agent reinforcement learning. Specifically, we employ MADRID for generating a diverse array of adversarial settings for TiZero, the state-of-the-art approach which "masters" the game through 45 days of training on a large-scale distributed infrastructure. Using MADRID, we expose key shortcomings in TiZero's tactical decision-making, underlining the crucial importance of rigorous evaluation in multi-agent systems. | Multi-Agent Diagnostics for Robustness via Illuminated Diversity | [
"Mikayel Samvelyan",
"Davide Paglieri",
"Minqi Jiang",
"Jack Parker-Holder",
"Tim Rocktäschel"
] | Workshop/ALOE | 2401.13460 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xzPkZyHlOW | @inproceedings{
wang2023jarvis,
title={{JARVIS}-1: Open-world Multi-task Agents with Memory-Augmented Multimodal Language Models},
author={Zihao Wang and Shaofei Cai and Anji Liu and Xiaojian Ma and Yitao Liang},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=xzPkZyHlOW}
} | Achieving human-like planning and control with multimodal observations in an open world is a key milestone for more functional generalist agents. Existing approaches can handle certain long-horizon tasks in an open world. However, they still struggle when the number of open-world tasks could potentially be infinite and lack the capability to progressively enhance task completion as game time progresses. We introduce JARVIS-1, an open-world agent that can perceive multimodal input (visual observations and human instructions), generate sophisticated plans, and perform embodied control, all within the popular yet challenging open-world Minecraft universe. Specifically, we develop JARVIS-1 on top of pre-trained multimodal language models, which map visual observations and textual instructions to plans. The plans will be ultimately dispatched to the goal-conditioned controllers. We outfit JARVIS-1 with a multimodal memory, which facilitates planning using both pre-trained knowledge and its actual game survival experiences. In our experiments, JARVIS-1 exhibits nearly perfect performances across over 200 varying tasks from the Minecraft Universe Benchmark, ranging from entry to intermediate levels. JARVIS-1 has achieved a completion rate of 12.5% in the long-horizon diamond pickaxe task. This represents a significant increase up to 5 times compared to previous records. Furthermore, we show that JARVIS-1 is able to self-improve following a life-long learning paradigm thanks to multimodal memory, sparking more general intelligence and improved autonomy. The project page is available at https://craftjarvis-jarvis1.github.io. | JARVIS-1: Open-world Multi-task Agents with Memory-Augmented Multimodal Language Models | [
"Zihao Wang",
"Shaofei Cai",
"Anji Liu",
"Xiaojian Ma",
"Yitao Liang"
] | Workshop/ALOE | 2311.05997 | [
""
] | https://huggingface.co/papers/2311.05997 | 7 | 36 | 1 | 12 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=vxZgTbmC4L | @inproceedings{
jiang2023minimax,
title={minimax: Efficient Baselines for Autocurricula in {JAX}},
author={Minqi Jiang and Michael D Dennis and Edward Grefenstette and Tim Rockt{\"a}schel},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=vxZgTbmC4L}
} | Unsupervised environment design (UED) is a form of automatic curriculum learning for training robust decision-making agents to zero-shot transfer into unseen environments. Such autocurricula have received much interest from the RL community. However, UED experiments, based on CPU rollouts and GPU model updates, have often required several weeks of training. This compute requirement is a major obstacle to rapid innovation for the field. This work introduces the minimax library for UED training on accelerated hardware. Using JAX to implement fully-tensorized environments and autocurriculum algorithms, minimax allows the entire training loop to be compiled for hardware acceleration. To provide a petri dish for rapid experimentation, minimax includes a tensorized grid-world based on MiniGrid, in addition to reusable abstractions for conducting autocurricula in procedurally-generated environments. With these components, minimax provides strong UED baselines, including new parallelized variants, which achieve over 120$\times$ speedups in wall time compared to previous implementations when training with equal batch sizes. The minimax library is available under the Apache 2.0 license at https://github.com/facebookresearch/minimax. | minimax: Efficient Baselines for Autocurricula in JAX | [
"Minqi Jiang",
"Michael D Dennis",
"Edward Grefenstette",
"Tim Rocktäschel"
] | Workshop/ALOE | 2311.12716 | [
"https://github.com/facebookresearch/minimax"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=uEdE2AJn2D | @inproceedings{
pourcel2023aces,
title={{ACES}: generating diverse programming puzzles with autotelic language models and semantic descriptors},
author={Julien Pourcel and C{\'e}dric Colas and Pierre-Yves Oudeyer and Laetitia Teodorescu},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=uEdE2AJn2D}
} | Finding and selecting new and interesting problems to solve is at the heart of curiosity, science and innovation. We here study automated problem generation in the context of the open-ended space of python programming puzzles. Existing generative models often aim at modeling a reference distribution without any explicit diversity optimization. Other methods explicitly optimizing for diversity do so either in limited hand-coded representation spaces or in uninterpretable learned embedding spaces that may not align with human perceptions of interesting variations. With ACES (Autotelic Code Exploration via Semantic descriptors), we introduce a new autotelic generation method that leverages semantic descriptors produced by a large language model (LLM) to directly optimize for interesting diversity, as well as few-shot-based generation. Each puzzle is labeled along 10 dimensions, each capturing a programming skill required to solve it. ACES generates and pursues novel and feasible goals to explore that abstract semantic space, slowly discovering a diversity of solvable programming puzzles in any given run. Across a set of experiments, we show that ACES discovers a richer diversity of puzzles than existing diversity-maximizing algorithms as measured across a range of diversity metrics. We further study whether and in which conditions this diversity can translate into the successful training of puzzle solving models. | ACES: generating diverse programming puzzles with autotelic language models and semantic descriptors | [
"Julien Pourcel",
"Cédric Colas",
"Pierre-Yves Oudeyer",
"Laetitia Teodorescu"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=tWYUpG6aFE | @inproceedings{
jacq2023on,
title={On the importance of data collection for training general goal-reaching policies.},
author={Alexis D. Jacq and Manu Orsini and Gabriel Dulac-Arnold and Olivier Pietquin and Matthieu Geist and Olivier Bachem},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=tWYUpG6aFE}
} | Recent advances in ML suggest that the quantity of data available to a model is one of the primary bottlenecks to high performance. Although for language-based tasks there exist almost unlimited amounts of reasonably coherent data to train from, this is generally not the case for Reinforcement Learning, especially when dealing with a novel environment. In effect, even a relatively trivial continuous environment has an almost limitless number of states, but simply sampling random states and actions will likely not provide transitions that are interesting or useful for any potential downstream task. \textit{How should one generate massive amounts of useful data given only an MDP with no indication of downstream tasks? Are the quantity and quality of data truly transformative to the performance of a general controller?} We propose to answer both of these questions. First, we introduce a principled unsupervised exploration method, ChronoGEM, which aims to achieve uniform coverage over the manifold of achievable states, which we believe is the most reasonable goal given no prior task information. Secondly, we investigate the effects of both data quantity and data quality on the training of a downstream goal-achievement policy, and show that both large quantities and high-quality of data are essential to train a general controller: a high-precision pose-achievement policy capable of attaining a large number of poses over numerous continuous control embodiments including humanoid. | On the importance of data collection for training general goal-reaching policies. | [
"Alexis D. Jacq",
"Manu Orsini",
"Gabriel Dulac-Arnold",
"Olivier Pietquin",
"Matthieu Geist",
"Olivier Bachem"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=sFF9UIh3iK | @inproceedings{
nam2023lift,
title={Li{FT}: Unsupervised Reinforcement Learning with Foundation Models as Teachers},
author={Taewook Nam and Juyong Lee and Jesse Zhang and Sung Ju Hwang and Joseph J Lim and Karl Pertsch},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=sFF9UIh3iK}
} | We propose a framework that leverages foundation models as teachers, guiding a reinforcement learning agent to acquire semantically meaningful behavior without human intervention.
In our framework, the agent receives task instructions grounded in a training environment from large language models.
Then, a vision-language model guides the agent in learning the tasks by providing reward feedback.
We demonstrate that our method can learn semantically meaningful skills in the challenging open-ended MineDojo environment, where prior unsupervised skill discovery methods struggle.
Additionally, we discuss the observed challenges of using off-the-shelf foundation models as teachers and our efforts to address them. | LiFT: Unsupervised Reinforcement Learning with Foundation Models as Teachers | [
"Taewook Nam",
"Juyong Lee",
"Jesse Zhang",
"Sung Ju Hwang",
"Joseph J Lim",
"Karl Pertsch"
] | Workshop/ALOE | 2312.08958 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=rmiwIL98uQ | @inproceedings{
zhou2023webarena,
title={WebArena: A Realistic Web Environment for Building Autonomous Agents},
author={Shuyan Zhou and Frank F. Xu and Hao Zhu and Xuhui Zhou and Robert Lo and Abishek Sridhar and Xianyi Cheng and Tianyue Ou and Yonatan Bisk and Daniel Fried and Uri Alon and Graham Neubig},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=rmiwIL98uQ}
} | With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress. | WebArena: A Realistic Web Environment for Building Autonomous Agents | [
"Shuyan Zhou",
"Frank F. Xu",
"Hao Zhu",
"Xuhui Zhou",
"Robert Lo",
"Abishek Sridhar",
"Xianyi Cheng",
"Tianyue Ou",
"Yonatan Bisk",
"Daniel Fried",
"Uri Alon",
"Graham Neubig"
] | Workshop/ALOE | 2307.13854 | [
"https://github.com/web-arena-x/webarena"
] | https://huggingface.co/papers/2307.13854 | 7 | 23 | 4 | 11 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=reHXok5jiQ | @inproceedings{
klissarov2023motif,
title={Motif: Intrinsic Motivation from Artificial Intelligence Feedback},
author={Martin Klissarov and Pierluca D'Oro and Shagun Sodhani and Roberta Raileanu and Pierre-Luc Bacon and Pascal Vincent and Amy Zhang and Mikael Henaff},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=reHXok5jiQ}
} | Exploring rich environments and evaluating one's actions without prior knowledge is immensely challenging. In this paper, we propose Motif, a general method to interface such prior knowledge from a Large Language Model (LLM) with an agent. Motif is based on the idea of grounding LLMs for decision-making without requiring them to interact with the environment: it elicits preferences from an LLM over pairs of captions to construct an intrinsic reward, which is then used to train agents with reinforcement learning. We evaluate Motif's performance and behavior on the challenging, open-ended and procedurally-generated NetHack game. Surprisingly, by only learning to maximize its intrinsic reward, Motif achieves a higher game score than an algorithm directly trained to maximize the score itself. When combining Motif's intrinsic reward with the environment reward, our method significantly outperforms existing approaches and makes progress on tasks where no advancements have ever been made without demonstrations. Finally, we show that Motif mostly generates intuitive human-aligned behaviors which can be steered easily through prompt modifications, while scaling well with the LLM size and the amount of information given in the prompt. | Motif: Intrinsic Motivation from Artificial Intelligence Feedback | [
"Martin Klissarov",
"Pierluca D'Oro",
"Shagun Sodhani",
"Roberta Raileanu",
"Pierre-Luc Bacon",
"Pascal Vincent",
"Amy Zhang",
"Mikael Henaff"
] | Workshop/ALOE | 2310.00166 | [
"https://github.com/facebookresearch/motif"
] | https://huggingface.co/papers/2310.00166 | 0 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=qiKqsqwYXm | @inproceedings{
fan2023doge,
title={{DOGE}: Domain Reweighting with Generalization Estimation},
author={Simin Fan and Matteo Pagliardini and Martin Jaggi},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=qiKqsqwYXm}
} | The coverage and composition of the pretraining data corpus significantly impact the generalization ability of large language models. Conventionally, the pretraining corpus is composed of various source domains (e.g. CommonCrawl, Wikipedia, Github etc.) according to certain sampling probabilities (domain weights). However, current methods lack a principled way to optimize domain weights for the ultimate goal of generalization.
We propose \textsc{DO}main reweighting with \textsc{G}eneralization \textsc{E}stimation (DoGE), where we reweigh the sampling probability from each domain based on its contribution to the final generalization objective assessed by a gradient-based generalization estimation function.
First, we train a small-scale proxy model with a min-max optimization to obtain the reweighted domain weights. At each step, the domain weights are updated to maximize the overall generalization gain by mirror descent. Finally we use the obtained domain weights to train a larger scale full-size language model. On SlimPajama-6B dataset, with universal generalization objective, DoGE achieves better average perplexity and zero-shot reasoning accuracy. On out-of-domain generalization tasks, DoGE reduces perplexity on the target domain by a large margin. We further apply a parameter-selection scheme which improves the efficiency of generalization estimation. | DOGE: Domain Reweighting with Generalization Estimation | [
"Simin Fan",
"Matteo Pagliardini",
"Martin Jaggi"
] | Workshop/ALOE | 2310.15393 | [
""
] | https://huggingface.co/papers/2310.15393 | 2 | 1 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=pAMNKGwja6 | @inproceedings{
wang2023voyager,
title={Voyager: An Open-Ended Embodied Agent with Large Language Models},
author={Guanzhi Wang and Yuqi Xie and Yunfan Jiang and Ajay Mandlekar and Chaowei Xiao and Yuke Zhu and Linxi Fan and Anima Anandkumar},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=pAMNKGwja6}
} | We introduce Voyager, the first LLM-powered embodied lifelong learning agent in an open-ended world that continuously explores, acquires diverse skills, and makes novel discoveries without human intervention in Minecraft. Voyager consists of three key components: 1) an automatic curriculum that maximizes exploration, 2) an ever-growing skill library of executable code for storing and retrieving complex behaviors, and 3) a new iterative prompting mechanism that incorporates environment feedback, execution errors, and self-verification for program improvement. Voyager interacts with GPT-4 via blackbox queries, which bypasses the need for model parameter fine-tuning. The skills developed by Voyager are temporally extended, interpretable, and compositional, which compounds the agent’s capability rapidly and alleviates catastrophic forgetting. Empirically, Voyager demonstrates strong in-context lifelong learning capabilities. It outperforms prior SOTA by obtaining 3.1x more unique items, unlocking tech tree milestones up to 15.3x faster, and traveling 2.3x longer distances. Voyager is able to utilize the learned skill library in a new Minecraft world to solve novel tasks from scratch, while other techniques struggle to generalize. | Voyager: An Open-Ended Embodied Agent with Large Language Models | [
"Guanzhi Wang",
"Yuqi Xie",
"Yunfan Jiang",
"Ajay Mandlekar",
"Chaowei Xiao",
"Yuke Zhu",
"Linxi Fan",
"Anima Anandkumar"
] | Workshop/ALOE | 2305.16291 | [
"https://github.com/MineDojo/Voyager"
] | https://huggingface.co/papers/2305.16291 | 4 | 9 | 4 | 8 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=o32DXGiMoT | @inproceedings{
bhati2023curriculum,
title={Curriculum Learning for Cooperation in Multi-Agent Reinforcement Learning},
author={Rupali Bhati and SaiKrishna Gottipati and Clod{\'e}ric Mars and Matthew E. Taylor},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=o32DXGiMoT}
} | While there has been significant progress in curriculum learning and continuous learning for training agents to generalize across a wide variety of environments in the context of single-agent reinforcement learning, it is unclear if these algorithms would still be valid in a multi-agent setting. In a competitive setting, a learning agent can be trained by making it compete with a curriculum of increasingly skilled opponents. However, a general intelligent agent should also be able to learn to act around other agents and cooperate with them to achieve common goals. When cooperating with other agents, the learning agent must (a) learn how to perform the task (or subtask), and (b) increase the overall team reward. In this paper, we aim to answer the question of what kind of cooperative teammate, and a curriculum of teammates should a learning agent be trained with to achieve these two objectives. Our results on the game Overcooked show that a pre-trained teammate who is less skilled is the best teammate for overall team reward but the worst for the learning of the agent. Moreover, somewhat surprisingly, a curriculum of teammates with decreasing skill levels performs better than other types of curricula. | Curriculum Learning for Cooperation in Multi-Agent Reinforcement Learning | [
"Rupali Bhati",
"SaiKrishna Gottipati",
"Clodéric Mars",
"Matthew E. Taylor"
] | Workshop/ALOE | 2312.11768 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=nz2vqJI1fk | @inproceedings{
niu2023continual,
title={Continual Driving Policy Optimization with Closed-Loop Individualized Curricula},
author={Haoyi Niu and Yizhou Xu and Xingjian Jiang and Jianming HU},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=nz2vqJI1fk}
} | The safety of autonomous vehicles (AV) has been a long-standing top concern, stemming from the absence of rare and safety-critical scenarios in the long-tail naturalistic driving distribution. To tackle this challenge, a surge of research in scenario-based autonomous driving has emerged, with a focus on generating high-risk driving scenarios and applying them to conduct safety-critical testing of AV models. However, limited work has been explored on the reuse of these extensive scenarios to iteratively improve AV models. Moreover, it remains intractable and challenging to filter through gigantic scenario libraries collected from other AV models with distinct behaviors, attempting to extract transferable information for current AV improvement. Therefore, we develop a continual driving policy optimization framework featuring Closed-Loop Individualized Curricula (CLIC), which we factorize into a set of standardized sub-modules for flexible implementation choices: AV Evaluation, Scenario Selection, and AV Training. CLIC frames AV Evaluation as a collision prediction task, where it estimates the chance of AV failures in these scenarios at each iteration. Subsequently, by re-sampling from historical scenarios based on these failure probabilities, CLIC tailors individualized curricula for downstream training, aligning them with the evaluated capability of AV. Accordingly, CLIC not only maximizes the utilization of the vast pre-collected scenario library for closed-loop driving policy optimization but also facilitates AV improvement by individualizing its training with more challenging cases out of those poorly organized scenarios. Experimental results clearly indicate that CLIC surpasses other curriculum-based training strategies, showing substantial improvement in managing risky scenarios, while still maintaining proficiency in handling simpler cases. | Continual Driving Policy Optimization with Closed-Loop Individualized Curricula | [
"Haoyi Niu",
"Yizhou Xu",
"Xingjian Jiang",
"Jianming HU"
] | Workshop/ALOE | 2309.14209 | [
"https://github.com/YizhouXu-THU/CLIC"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=mJSQr1pciC | @inproceedings{
bornemann2023emergence,
title={Emergence of collective open-ended exploration from Decentralized Meta-Reinforcement learning},
author={Richard Bornemann and Gautier Hamon and Eleni Nisioti and Cl{\'e}ment Moulin-Frier},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=mJSQr1pciC}
} | Recent works have proven that intricate cooperative behaviors can emerge in agents trained using meta reinforcement learning on open-ended task distributions using self-play. While the results are impressive, we argue that self-play and other centralized training techniques do not accurately reflect how general collective exploration strategies emerge in the natural world: through decentralized training and over an open-ended distribution of tasks. In this work we therefore investigate the emergence of collective exploration strategies, where several agents meta-learn independent recurrent policies on an open-ended distribution of tasks. To this end we introduce a novel environment with an open-ended procedurally generated task space which dynamically combines multiple subtasks sampled from five diverse task types to form a vast distribution of task trees. We show that decentralized agents trained in our environment exhibit strong generalization abilities when confronted with novel objects at test time. Additionally, despite never being forced to cooperate during training, the agents learn collective exploration strategies which allow them to solve novel tasks never encountered during training. We further find that the agents' learned collective exploration strategies extend to an open-ended task setting, allowing them to solve task trees of twice the depth compared to the ones seen during training. Our open source code as well as videos of the agents can be found on \href{https://sites.google.com/view/collective-open-ended-explore}{our companion website} | Emergence of collective open-ended exploration from Decentralized Meta-Reinforcement learning | [
"Richard Bornemann",
"Gautier Hamon",
"Eleni Nisioti",
"Clément Moulin-Frier"
] | Workshop/ALOE | 2311.00651 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=mBA5gdd8Tg | @inproceedings{
kayal2023does,
title={Does behavioral diversity in intrinsic rewards help exploration?},
author={Aya Kayal and Eduardo Pignatelli and Laura Toni},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=mBA5gdd8Tg}
} | In recent years, intrinsic reward approaches have attracted the attention of the research community due to their ability to address various challenges in Reinforcement Learning, among them exploration and diversity. Nevertheless, the two areas of study have seldom met. Many intrinsic rewards have been proposed to address the hard exploration problem by reducing the uncertainty over states or the environment. Other intrinsic rewards were proposed to favor the agent's behavioral diversity, providing benefits of robustness, fast adaptation, and solving hierarchical tasks. We aim to investigate whether pushing for behavioral diversity can also be a way to favor exploration in sparse reward environments. The goal of this paper is to reinterpret the intrinsic reward approaches proposed in the literature, providing a new taxonomy based on the diversity level they impose on the exploration behavior, and complement it with an empirical study. Specifically, we define two main categories of exploration: "Where to explore" and "How to explore". The former favors exploration by imposing diversity on the states or state transitions (state and state + dynamics levels). The latter ("How to explore") rather pushes the agent to discover diverse policies that can elicit diverse behaviors (policy and skill levels). In the literature, it is unclear how the second category behaves compared to the first category. Thus, we conduct an initial study on the MiniGrid environment to compare the impact of selected intrinsic rewards imposing different diversity levels on a variety of tasks. | Does behavioral diversity in intrinsic rewards help exploration? | [
"Aya Kayal",
"Eduardo Pignatelli",
"Laura Toni"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=l92WEMwU8V | @inproceedings{
zhang2023omni,
title={{OMNI}: Open-endedness via Models of human Notions of Interestingness},
author={Jenny Zhang and Joel Lehman and Kenneth Stanley and Jeff Clune},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=l92WEMwU8V}
} | Open-ended algorithms aim to learn new, interesting behaviors forever. That requires a vast environment search space, but there are thus infinitely many possible tasks. Even after filtering for tasks the current agent can learn (i.e., learning progress), countless learnable yet uninteresting tasks remain (e.g., minor variations of previously learned tasks). An Achilles Heel of open-endedness research is the inability to quantify (and thus prioritize) tasks that are not just learnable, but also $\textit{interesting}$ (e.g., worthwhile and novel). We propose solving this problem by $\textit{Open-endedness via Models of human Notions of Interestingness}$ (OMNI). The insight is that we can utilize large (language) models (LMs) as a model of interestingness (MoI), because they $\textit{already}$ internalize human concepts of interestingness from training on vast amounts of human-generated data, where humans naturally write about what they find interesting or boring. We show that LM-based MoIs improve open-ended learning by focusing on tasks that are both learnable $\textit{and interesting}$, outperforming baselines based on uniform task sampling or learning progress alone. This approach has the potential to dramatically advance the ability to intelligently select which tasks to focus on next (i.e., auto-curricula), and could be seen as AI selecting its own next task to learn, facilitating self-improving AI and AI-Generating Algorithms. | OMNI: Open-endedness via Models of human Notions of Interestingness | [
"Jenny Zhang",
"Joel Lehman",
"Kenneth Stanley",
"Jeff Clune"
] | Workshop/ALOE | 2306.01711 | [
"https://github.com/jennyzzt/omni"
] | https://huggingface.co/papers/2306.01711 | 2 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | oral |
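A rough sketch of the selection mechanism the OMNI abstract describes: each candidate task is scored by a learning-progress estimate multiplied by an interestingness score, and the next training task is sampled in proportion to that product. `rate_interestingness` is a hypothetical stub standing in for a language-model call, and none of the names below come from the OMNI codebase.

```python
# Illustrative only: pick the next task by learning progress x interestingness.
import random

def rate_interestingness(task_description: str) -> float:
    # Hypothetical placeholder: in OMNI this judgment comes from a language model.
    boring_markers = ("again", "slightly", "variant of")
    return 0.1 if any(m in task_description for m in boring_markers) else 0.9

def learning_progress(success_history):
    # Crude estimate: absolute change between older and recent success rates.
    if len(success_history) < 4:
        return 0.0
    half = len(success_history) // 2
    older = sum(success_history[:half]) / half
    recent = sum(success_history[half:]) / (len(success_history) - half)
    return abs(recent - older)

def pick_next_task(tasks, histories):
    scores = [learning_progress(histories[t]) * rate_interestingness(t) for t in tasks]
    total = sum(scores)
    if total == 0:
        return random.choice(tasks)
    r, acc = random.uniform(0, total), 0.0
    for task, s in zip(tasks, scores):
        acc += s
        if r <= acc:
            return task
    return tasks[-1]

tasks = ["collect wood", "collect wood again", "craft a stone pickaxe"]
histories = {t: [0, 0, 1, 1] for t in tasks}
print(pick_next_task(tasks, histories))
```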
null | https://openreview.net/forum?id=jfGsAxBOq1 | @inproceedings{
boldi2023objectives,
title={Objectives Are All You Need: Solving Deceptive Problems Without Explicit Diversity Maintenance},
author={Ryan Boldi and Li Ding and Lee Spector},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=jfGsAxBOq1}
} | Navigating deceptive domains has often been a challenge in machine learning due to search algorithms getting stuck at sub-optimal local optima. Many algorithms have been proposed to navigate these domains by explicitly maintaining diversity or equivalently promoting exploration, such as Novelty Search or other so-called Quality Diversity algorithms. In this paper, we present an approach that shows promise for solving deceptive domains without explicit diversity maintenance by optimizing a potentially large set of defined objectives. These objectives can be extracted directly from the environment by sub-aggregating the raw performance of individuals in a variety of ways. We use lexicase selection to optimize for these objectives as it has been shown to implicitly maintain population diversity. We compare this technique with a varying number of objectives to a commonly used quality diversity algorithm, MAP-Elites, on a set of discrete optimization as well as reinforcement learning domains with varying degrees of deception. We find that decomposing objectives into many objectives and optimizing them outperforms MAP-Elites on the deceptive domains that we explore. Furthermore, we find that this technique results in competitive performance on the diversity-focused metrics of QD-Score and Coverage, without explicitly optimizing for them. Our ablation study shows that this technique is robust to different sub-aggregation techniques. However, when it comes to non-deceptive, or "illumination" domains, quality diversity techniques generally outperform our objective-based framework with respect to exploration (but not exploitation), hinting at potential directions for future work. | Objectives Are All You Need: Solving Deceptive Problems Without Explicit Diversity Maintenance | [
"Ryan Boldi",
"Li Ding",
"Lee Spector"
] | Workshop/ALOE | 2311.02283 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
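The paper above leans on lexicase selection to handle many sub-aggregated objectives at once. Below is a minimal, generic implementation of standard lexicase selection (higher scores are better); it is a textbook sketch rather than the authors' code, and it omits the epsilon-lexicase variant commonly used for continuous-valued objectives.

```python
# Standard lexicase selection: shuffle objectives, then repeatedly keep only the
# candidates that are best on the current objective until one (or a tie) remains.
import random

def lexicase_select(population, objective_scores):
    """population: list of candidates; objective_scores[i][j] is the score of
    candidate i on objective j (higher is better)."""
    candidates = list(range(len(population)))
    objectives = list(range(len(objective_scores[0])))
    random.shuffle(objectives)
    for obj in objectives:
        best = max(objective_scores[i][obj] for i in candidates)
        candidates = [i for i in candidates if objective_scores[i][obj] == best]
        if len(candidates) == 1:
            break
    return population[random.choice(candidates)]

population = ["a", "b", "c"]
scores = [[3, 1, 2],   # candidate "a"
          [3, 2, 0],   # candidate "b"
          [1, 2, 2]]   # candidate "c"
print(lexicase_select(population, scores))
```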
null | https://openreview.net/forum?id=h7obi6WSuK | @inproceedings{
wu2023smartplay,
title={SmartPlay : A Benchmark for {LLM}s as Intelligent Agents},
author={Yue Wu and Xuan Tang and Tom Mitchell and Yuanzhi Li},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=h7obi6WSuK}
} | Recent large language models (LLMs) have demonstrated great potential toward intelligent agents and next-gen automation, but there is currently no systematic benchmark for evaluating LLMs' abilities as agents. We introduce SmartPlay: both a challenging benchmark and a methodology for evaluating LLMs as agents. SmartPlay consists of 6 different games, including Rock-Paper-Scissors, Tower of Hanoi, and Minecraft. Each game features a unique setting, providing up to 20 evaluation settings and infinite environment variations. Each game in SmartPlay uniquely challenges a subset of 9 important capabilities of an intelligent LLM agent, including reasoning with object dependencies, planning ahead, spatial reasoning, learning from history, and understanding randomness. The distinction between the sets of capabilities each game tests allows us to analyze each capability separately.
SmartPlay serves not only as a rigorous testing ground for evaluating the overall performance of LLM agents but also as a road-map for identifying gaps in current methodologies.
We release our benchmark at github.com/LLMsmartplay/SmartPlay. | SmartPlay : A Benchmark for LLMs as Intelligent Agents | [
"Yue Wu",
"Xuan Tang",
"Tom Mitchell",
"Yuanzhi Li"
] | Workshop/ALOE | [
"https://github.com/microsoft/smartplay"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=gS4T75axwE | @inproceedings{
schmidt2023learning,
title={Learning to Act without Actions},
author={Dominik Schmidt and Minqi Jiang},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=gS4T75axwE}
} | Pre-training large models on vast amounts of web data has proven to be an effective approach for obtaining powerful, general models in several domains, including language and vision. However, this paradigm has not yet taken hold in deep reinforcement learning (RL). This gap is due to the fact that the most abundant form of embodied behavioral data on the web consists of videos, which do not include the action labels required by existing methods for training policies from offline data. We introduce Latent Action Policies from Observation (LAPO), a method to infer latent actions and, consequently, latent-action policies purely from action-free demonstrations. Our experiments on challenging procedurally-generated environments show that LAPO can act as an effective pre-training method to obtain RL policies that can then be rapidly fine-tuned to expert-level performance. Our approach serves as a key stepping stone to enabling the pre-training of powerful, generalist RL models on the vast amounts of action-free demonstrations readily available on the web. | Learning to Act without Actions | [
"Dominik Schmidt",
"Minqi Jiang"
] | Workshop/ALOE | 2312.10812 | [
"https://github.com/schmidtdominik/lapo"
] | https://huggingface.co/papers/2312.10812 | 1 | 2 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |
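A heavily simplified sketch of the idea summarized in the LAPO abstract above: an inverse dynamics model infers a latent action from consecutive observations, a forward model must reconstruct the next observation from that latent, and a policy is then behavior-cloned onto the inferred latents. The toy version below uses random vector observations and plain MLPs in PyTorch; the actual method (see the linked repository) differs in architecture and in how the latent is constrained, so treat this only as a conceptual outline.

```python
# Conceptual outline, not the LAPO implementation.
import torch
import torch.nn as nn

obs_dim, latent_dim = 16, 4

idm = nn.Sequential(nn.Linear(2 * obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))        # inverse dynamics
fdm = nn.Sequential(nn.Linear(obs_dim + latent_dim, 64), nn.ReLU(), nn.Linear(64, obs_dim))  # forward dynamics
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))         # latent-action policy

opt = torch.optim.Adam(list(idm.parameters()) + list(fdm.parameters()), lr=1e-3)

# Toy "action-free video": pairs of consecutive observations, no action labels.
obs_t = torch.randn(256, obs_dim)
obs_next = torch.randn(256, obs_dim)

for step in range(100):
    z = idm(torch.cat([obs_t, obs_next], dim=-1))     # infer a latent action
    pred_next = fdm(torch.cat([obs_t, z], dim=-1))    # reconstruct the next observation
    loss = ((pred_next - obs_next) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Behavior-clone the policy onto the inferred latent actions.
policy_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
with torch.no_grad():
    targets = idm(torch.cat([obs_t, obs_next], dim=-1))
for step in range(100):
    bc_loss = ((policy(obs_t) - targets) ** 2).mean()
    policy_opt.zero_grad(); bc_loss.backward(); policy_opt.step()
```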
null | https://openreview.net/forum?id=gCPSeal6G8 | @inproceedings{
cipolina-kun2023adaptive,
title={Adaptive Coalition Structure Generation},
author={Lucia Cipolina-Kun and Ignacio Carlucho and Kalesha Bullard},
booktitle={Second Agent Learning in Open-Endedness Workshop},
year={2023},
url={https://openreview.net/forum?id=gCPSeal6G8}
} | We introduce a Deep Reinforcement Learning (DRL) framework to form socially-optimal coalitions in an adaptive manner. In our approach, agents play a deal-or-no-deal game where each state represents a potential coalition to join. Agents learn to form coalitions that are mutually beneficial, without revealing the coalition value to each other. We conduct an empirical evaluation of our model's generalizability on a ridesharing spatial game. | Adaptive Coalition Structure Generation | [
"Lucia Cipolina-Kun",
"Ignacio Carlucho",
"Kalesha Bullard"
] | Workshop/ALOE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |