Dataset schema (one line per column: name, dtype, length/value range):
bibtex_url: null
proceedings: string, length 42
bibtext: string, length 197-848
abstract: string, length 303-3.45k
title: string, length 10-159
authors: sequence, length 1-34
id: string, 44 classes
arxiv_id: string, length 0-10
GitHub: sequence, length 1
paper_page: string, 899 classes
n_linked_authors: int64, -1 to 13
upvotes: int64, -1 to 109
num_comments: int64, -1 to 13
n_authors: int64, -1 to 92
Models: sequence, length 0-100
Datasets: sequence, length 0-19
Spaces: sequence, length 0-100
old_Models: sequence, length 0-100
old_Datasets: sequence, length 0-19
old_Spaces: sequence, length 0-100
paper_page_exists_pre_conf: int64, 0-1
type: string, 2 classes
null
https://openreview.net/forum?id=kLwwaBdWAJ
@inproceedings{ versteeg2023expressive, title={Expressive dynamics models with nonlinear injective readouts enable reliable recovery of latent features from neural activity}, author={Christopher Versteeg and Andrew Sedler and Jonathan McCart and Chethan Pandarinath}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=kLwwaBdWAJ} }
An emerging framework in neuroscience uses the rules that govern how a neural circuit's state evolves over time to understand the circuit's underlying computation. While these \textit{neural dynamics} cannot be directly measured, new techniques attempt to estimate them by modeling observed neural recordings as a low-dimensional latent dynamical system embedded into a higher-dimensional neural space. How these models represent the readout from latent space to neural space can affect the interpretability of the latent representation -- for example, a model with a linear readout could make simple, low-dimensional dynamics unfolding on a non-linear neural manifold appear excessively complex and high-dimensional. Additionally, standard readouts (both linear and non-linear) often lack injectivity, meaning that they don't obligate changes in latent state to directly affect activity in the neural space. During training, non-injective readouts incentivize the model to invent dynamics that misrepresent the underlying system and computation. To address the challenges presented by non-linearity and non-injectivity, we combined a custom readout with a previously developed low-dimensional latent dynamics model to create the Ordinary Differential equations autoencoder with Injective Nonlinear readout (ODIN). We generated a synthetic spiking dataset by non-linearly embedding activity from a low-dimensional dynamical system into higher-D neural activity. We show that, in contrast to alternative models, ODIN is able to recover ground-truth latent activity from these data even when the nature of the system and embedding are unknown. Additionally, we show that ODIN enables the unsupervised recovery of underlying dynamical features (e.g., fixed points) and embedding geometry (e.g., the neural manifold) over alternative models. Overall, ODIN's ability to recover ground-truth latent features with low dimensionality makes it a promising method for distilling interpretable dynamics that can explain neural computation.
Expressive dynamics models with nonlinear injective readouts enable reliable recovery of latent features from neural activity
[ "Christopher Versteeg", "Andrew Sedler", "Jonathan McCart", "Chethan Pandarinath" ]
Workshop/NeurReps
2309.06402
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=jdT7PuqdSt
@inproceedings{ shamsian2023data, title={Data Augmentations in Deep Weight Spaces}, author={Aviv Shamsian and David Zhang and Aviv Navon and Yan Zhang and Miltiadis Kofinas and Idan Achituve and Riccardo Valperga and Gertjan Burghouts and Efstratios Gavves and Cees Snoek and Ethan Fetaya and Gal Chechik and Haggai Maron}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=jdT7PuqdSt} }
Learning in weight spaces, where neural networks process the weights of other deep neural networks, has emerged as a promising research direction with applications in various fields, from analyzing and editing neural fields and implicit neural representations, to network pruning and quantization. Recent works have designed architectures for effective learning in that space that take into account its unique permutation-equivariant structure. Unfortunately, so far these architectures suffer from severe overfitting and were shown to benefit from large datasets. This poses a significant challenge because generating data for this learning setup is laborious and time-consuming, since each data sample is a full set of network weights that has to be trained. In this paper, we address this difficulty by investigating data augmentations for weight spaces, a set of techniques that enable generating new data examples on the fly without having to train additional input weight space elements. We first review several recently proposed data augmentation schemes and divide them into categories. We then introduce a novel augmentation scheme based on the Mixup method. We evaluate the performance of these techniques on existing benchmarks as well as new benchmarks we generate, which can be valuable for future studies.
Data Augmentations in Deep Weight Spaces
[ "Aviv Shamsian", "David Zhang", "Aviv Navon", "Yan Zhang", "Miltiadis Kofinas", "Idan Achituve", "Riccardo Valperga", "Gertjan Burghouts", "Efstratios Gavves", "Cees Snoek", "Ethan Fetaya", "Gal Chechik", "Haggai Maron" ]
Workshop/NeurReps
2311.08851
[ "" ]
https://huggingface.co/papers/2311.08851
1
0
0
13
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=gt9dDWc6GL
@inproceedings{ tipton2023haldane, title={Haldane Bundles: A Dataset for Learning to Predict the Chern Number of Line Bundles on the Torus}, author={Cody Tipton and Elizabeth Coda and Davis Brown and Alyson Bittner and Jung Lee and Grayson Jorgenson and Tegan Emerson and Henry Kvinge}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=gt9dDWc6GL} }
Characteristic classes, which are abstract topological invariants associated with vector bundles, have become an important notion in modern physics with surprising real-world consequences. As a representative example, the incredible properties of topological insulators, which are insulators in their bulk but conductors on their surface, can be completely characterized by a specific characteristic class associated with their electronic band structure, the first Chern class. Given their importance to next generation computing and the computational challenge of calculating them using first-principles approaches, there is a need to develop machine learning approaches to predict the characteristic classes associated with a material system. To aid in this program we introduce the *Haldane bundle dataset*, which consists of synthetically generated complex line bundles on the $2$-torus. We envision this dataset, which is not as challenging as noisy and sparsely measured real-world datasets but (as we show) still difficult for off-the-shelf architectures, to be a testing ground for architectures that incorporate the rich topological and geometric priors underlying characteristic classes.
Haldane Bundles: A Dataset for Learning to Predict the Chern Number of Line Bundles on the Torus
[ "Cody Tipton", "Elizabeth Coda", "Davis Brown", "Alyson Bittner", "Jung Lee", "Grayson Jorgenson", "Tegan Emerson", "Henry Kvinge" ]
Workshop/NeurReps
2312.04600
[ "https://github.com/shadtome/haldane-bundles" ]
https://huggingface.co/papers/2312.04600
0
0
0
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=fv0W1Yyg2v
@inproceedings{ ramesh2023how, title={How Capable Can a Transformer Become? A Study on Synthetic, Interpretable Tasks}, author={Rahul Ramesh and Mikail Khona and Robert P. Dick and Hidenori Tanaka and Ekdeep Singh Lubana}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=fv0W1Yyg2v} }
Transformers trained on huge text corpora exhibit a remarkable set of capabilities. Given the inherent compositional nature of language, one can expect the model to learn to compose these capabilities, potentially yielding a combinatorial explosion of what operations it can perform on an input. Motivated by the above, in this paper we aim to assess “how capable can a transformer become?”. In this work, we train Transformer models on a data-generating process that involves compositions of a set of well-defined monolithic capabilities and show that: (1) Transformers generalize to exponentially or even combinatorially many functions not seen in the training data; (2) composing functions by generating intermediate outputs is more effective at generalizing to unseen compositions; (3) the training data has a significant impact on the model’s ability to compose functions; and (4) attention layers in the latter half of the model seem critical to compositionality.
How Capable Can a Transformer Become? A Study on Synthetic, Interpretable Tasks
[ "Rahul Ramesh", "Mikail Khona", "Robert P. Dick", "Hidenori Tanaka", "Ekdeep Singh Lubana" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=fjLD4U1MP7
@inproceedings{ gupta2023structurewise, title={Structure-wise Uncertainty for Curvilinear Image Segmentation}, author={Saumya Gupta and Xiaoling Hu and Chao Chen}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=fjLD4U1MP7} }
Segmenting curvilinear structures like blood vessels and roads poses significant challenges due to their intricate geometry and weak signals. To expedite large-scale annotation, it is essential to adopt semi-automatic methods such as proofreading by human experts. In this abstract, we focus on estimating uncertainty for such tasks, so that highly uncertain, and thus error-prone structures can be identified for human annotators to verify. Unlike prior work that generates pixel-wise uncertainty maps, we believe it is essential to measure uncertainty in the units of topological structures, e.g., small pieces of connections and branches. To realize this, we employ tools from topological data analysis, specifically discrete Morse theory (DMT), to first extract the structures and then reason about their uncertainties. On multiple 2D and 3D datasets, our methodology generates superior structure-wise uncertainty maps compared to existing models.
Structure-wise Uncertainty for Curvilinear Image Segmentation
[ "Saumya Gupta", "Xiaoling Hu", "Chao Chen" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=eY6zf3mk4d
@inproceedings{ kvinge2023internal, title={Internal Representations of Vision Models Through the Lens of Frames on Data Manifolds}, author={Henry Kvinge and Grayson Jorgenson and Davis Brown and Charles Godfrey and Tegan Emerson}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=eY6zf3mk4d} }
While the last five years have seen considerable progress in understanding the internal representations of deep learning models, many questions remain. This is especially true when trying to understand the impact of model design choices, such as model architecture or training algorithm, on hidden representation geometry and dynamics. In this work we present a new approach to studying such representations inspired by the idea of a frame on the tangent bundle of a manifold. Our construction, which we call a *neural frame*, is formed by assembling a set of vectors representing specific types of perturbations of a data point, for example infinitesimal augmentations, noise perturbations, or perturbations produced by a generative model, and studying how these change as they pass through a network. Using neural frames, we make observations about the way that models process, layer-by-layer, specific modes of variation within a small neighborhood of a datapoint. Our results provide new perspectives on a number of phenomena, such as the manner in which training with augmentation produces model invariance or the proposed trade-off between adversarial training and model generalization.
Internal Representations of Vision Models Through the Lens of Frames on Data Manifolds
[ "Henry Kvinge", "Grayson Jorgenson", "Davis Brown", "Charles Godfrey", "Tegan Emerson" ]
Workshop/NeurReps
2211.10558
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=e9JBa515z2
@inproceedings{ pegoraro2023spectral, title={Spectral Maps for Learning on Subgraphs}, author={Marco Pegoraro and Riccardo Marin and Arianna Rampini and Simone Melzi and Luca Cosmo and Emanuele Rodol{\`a}}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=e9JBa515z2} }
In graph learning, maps between graphs and their subgraphs frequently arise. For instance, when coarsening or rewiring operations are present along the pipeline, one needs to keep track of the corresponding nodes between the original and modified graphs. Classically, these maps are represented as binary node-to-node correspondence matrices, and used as-is to transfer node-wise features between the graphs. In this paper, we argue that simply changing this map representation can bring notable benefits to graph learning tasks. Drawing inspiration from recent progress in geometry processing, we introduce a spectral representation for maps that is easy to integrate into existing graph learning models. This spectral representation is a compact and straightforward plug-in replacement, and is robust to topological changes of the graphs. Remarkably, the representation exhibits structural properties that make it interpretable, drawing an analogy with recent results on smooth manifolds. We demonstrate the benefits of incorporating spectral maps in graph learning pipelines, addressing scenarios where a node-to-node map is not well defined, or in the absence of exact isomorphism. Our approach bears practical benefits in knowledge distillation and hierarchical learning, where we show comparable or improved performance at a fraction of the computational cost.
Spectral Maps for Learning on Subgraphs
[ "Marco Pegoraro", "Riccardo Marin", "Arianna Rampini", "Simone Melzi", "Luca Cosmo", "Emanuele Rodolà" ]
Workshop/NeurReps
2205.14938
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=e9EFqkfu2X
@inproceedings{ haan2023euclidean, title={Euclidean, Projective, Conformal: Choosing a Geometric Algebra for Equivariant Transformers}, author={Pim De Haan and Taco Cohen and Johann Brehmer}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=e9EFqkfu2X} }
The Geometric Algebra Transformer (GATr) is a versatile architecture for geometric deep learning based on projective geometric algebra. We generalize this architecture into a blueprint that allows one to construct a scalable transformer architecture given any geometric (or Clifford) algebra. We study versions of this architecture for Euclidean, projective, and conformal algebras, all of which are suited to represent 3D data, and evaluate them in theory and practice. The simplest Euclidean architecture is computationally cheap, but has a smaller symmetry group and is not as sample-efficient, while the projective model is not sufficiently expressive. Both the conformal algebra and an improved version of the projective algebra define powerful, performant architectures.
Euclidean, Projective, Conformal: Choosing a Geometric Algebra for Equivariant Transformers
[ "Pim De Haan", "Taco Cohen", "Johann Brehmer" ]
Workshop/NeurReps
2311.04744
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=dq53F97iVv
@inproceedings{ khajehnejad2023on, title={On Complex Network Dynamics of an In-Vitro Neuronal System during Rest and Gameplay}, author={Moein Khajehnejad and Forough Habibollahi and Alon Loeffler and Brett Kagan and Adeel Razi}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=dq53F97iVv} }
In this study, we focus on characterising the complex network dynamics of an in vitro neuronal system of live biological cells during two distinct activity states: a spontaneous rest state and engagement in a real-time (closed-loop) game environment. We use DishBrain, a system that embodies in vitro neural networks with in silico computation using a high-density multi-electrode array. First, we embed the spiking activity of these channels in a lower-dimensional space using various representation learning methods. We then extract a subset of representative channels that are consistent across all of the neuronal preparations. Next, by analyzing these low-dimensional representations, we explore the patterns of macroscopic neuronal network dynamics during the learning process. Remarkably, our findings indicate that just using the low-dimensional embedding of representative channels is sufficient to differentiate the neuronal culture during the Rest and Gameplay conditions. Furthermore, we characterise the evolving neuronal connectivity patterns within the DishBrain system over time during Gameplay in comparison to the Rest condition. Notably, our investigation shows dynamic changes in the overall connectivity within the same region and across multiple regions on the multi-electrode array only during Gameplay. These findings underscore the plasticity of these neuronal networks in response to external stimuli and highlight the potential for modulating connectivity in a controlled environment. The ability to distinguish between neuronal states using reduced-dimensional representations points to the presence of underlying patterns that could be pivotal for real-time monitoring and manipulation of neuronal cultures. Additionally, this provides insight into how biologically based information-processing systems rapidly adapt and learn, and may lead to new or improved algorithms.
On Complex Network Dynamics of an In-Vitro Neuronal System during Rest and Gameplay
[ "Moein Khajehnejad", "Forough Habibollahi", "Alon Loeffler", "Brett Kagan", "Adeel Razi" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=dZbqejZB2V
@inproceedings{ gamba2023on, title={On the Varied Faces of Overparameterization in Supervised and Self-Supervised Learning}, author={Matteo Gamba and Arna Ghosh and Kumar Krishna Agrawal and Blake Aaron Richards and Hossein Azizpour and M{\r{a}}rten Bj{\"o}rkman}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=dZbqejZB2V} }
The quality of the representations learned by neural networks depends on several factors, including the loss function, learning algorithm, and model architecture. In this work, we use information geometric measures to assess the representation quality in a principled manner. We demonstrate that the sensitivity of learned representations to input perturbations, measured by the spectral norm of the feature Jacobian, provides valuable information about downstream generalization. On the other hand, measuring the coefficient of spectral decay observed in the eigenspectrum of feature covariance provides insights into the global representation geometry. First, we empirically establish an equivalence between these notions of representation quality and show that they are inversely correlated. Second, our analysis reveals the varying roles that overparameterization plays in improving generalization. Unlike in supervised learning, we observe that increasing model width leads to higher discriminability and less smoothness in the self-supervised regime. Furthermore, we report that there is no observable double descent phenomenon in SSL with non-contrastive objectives for commonly used parameterization regimes, which opens up new opportunities for tight asymptotic analysis. Taken together, our results provide a loss-aware characterization of the different roles of overparameterization in supervised and self-supervised learning.
On the Varied Faces of Overparameterization in Supervised and Self-Supervised Learning
[ "Matteo Gamba", "Arna Ghosh", "Kumar Krishna Agrawal", "Blake Aaron Richards", "Hossein Azizpour", "Mårten Björkman" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=dM8HXlBFJU
@inproceedings{ pegoraro2023geometric, title={Geometric Epitope and Paratope Prediction}, author={Marco Pegoraro and Cl{\'e}mentine Domin{\'e} and Emanuele Rodol{\`a} and Petar Veli{\v{c}}kovi{\'c} and Andreea Deac}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=dM8HXlBFJU} }
Antibody-antigen interactions play a crucial role in identifying and neutralizing harmful foreign molecules. In this paper, we investigate the optimal representation for predicting the binding sites in the two molecules and emphasize the importance of geometric information. Specifically, we compare different geometric deep learning methods applied to proteins’ inner (I-GEP) and outer (O-GEP) structures. We incorporate 3D coordinates and spectral geometric descriptors as input features to fully leverage the geometric information. Our research suggests that surface-based models are more efficient than other methods, and our O-GEP experiments have achieved state-of-the-art results with significant performance improvements.
Geometric Epitope and Paratope Prediction
[ "Marco Pegoraro", "Clémentine Dominé", "Emanuele Rodolà", "Petar Veličković", "Andreea Deac" ]
Workshop/NeurReps
2307.13608
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=d55JaRL9wh
@inproceedings{ kaba2023symmetry, title={Symmetry Breaking and Equivariant Neural Networks}, author={S{\'e}kou-Oumar Kaba and Siamak Ravanbakhsh}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=d55JaRL9wh} }
Using symmetry as an inductive bias in deep learning has been proven to be a principled approach for sample-efficient model design. However, the relationship between symmetry and the imperative for equivariance in neural networks is not always obvious. Here, we analyze a key limitation that arises in equivariant functions: their incapacity to break symmetry at the level of individual data samples. In response, we introduce a novel notion of 'relaxed equivariance' that circumvents this limitation. We further demonstrate how to incorporate this relaxation into equivariant multilayer perceptrons (E-MLPs), offering an alternative to the noise-injection method. The relevance of symmetry breaking is then discussed in various application domains: physics, graph representation learning, combinatorial optimization and equivariant decoding.
Symmetry Breaking and Equivariant Neural Networks
[ "Sékou-Oumar Kaba", "Siamak Ravanbakhsh" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=c9u8tH1WA0
@inproceedings{ sonthalia2023relwire, title={RelWire: Metric Based Graph Rewiring}, author={Rishi Sonthalia and Anna Gilbert and Matthew Durham}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=c9u8tH1WA0} }
Oversquashing is a major hurdle to the application of geometric deep learning and graph neural networks to real applications. Recent work has found connections between oversquashing and commute times, effective resistance, and the eigengap of the underlying graph. Graph rewiring is the most promising technique to alleviate this issue. Some prior work adds edges locally to highly negatively curved subgraphs. These local changes, however, have a small effect on global statistics such as commute times and the eigengap. Other prior work uses the spectrum of the graph Laplacian to target rewiring to increase the eigengap. These approaches, however, make large structural and topological changes to the underlying graph. We use ideas from geometric group theory to present \textsc{RelWire}, a rewiring technique based on the geometry of the graph. We derive topological connections for \textsc{RelWire}. We then rewire different real world molecule datasets and show that \textsc{RelWire} is Pareto optimal: it has the best balance between improvement in eigengap and commute times and minimizing changes in the topology of the underlying graph.
RelWire: Metric Based Graph Rewiring
[ "Rishi Sonthalia", "Anna Gilbert", "Matthew Durham" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ZtAabWUPu3
@inproceedings{ he2023sheafbased, title={Sheaf-based Positional Encodings for Graph Neural Networks}, author={Yu He and Cristian Bodnar and Pietro Lio}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=ZtAabWUPu3} }
Graph Neural Networks (GNNs) work directly with graph-structured data, capitalising on relational information among entities. One limitation of GNNs is their reliance on local interactions among connected nodes. GNNs may generate identical node embeddings for similar local neighbourhoods and fail to distinguish structurally distinct graphs. Positional encodings help to break the locality constraint by informing the nodes of their global positions in the graph. Furthermore, they are required by Graph Transformers to encode structural information. However, existing positional encodings based on the graph Laplacian only encode structural information and are typically fixed. To address these limitations, we propose a novel approach to design positional encodings using sheaf theory. The sheaf Laplacian can be learnt from node data, allowing it to encode both the structure and semantic information. We present two methodologies for creating sheaf-based positional encodings, showcasing their efficacy in node and graph tasks. Our work advances the integration of sheaves in graph learning, paving the way for innovative GNN techniques that draw inspiration from geometry and topology.
Sheaf-based Positional Encodings for Graph Neural Networks
[ "Yu He", "Cristian Bodnar", "Pietro Lio" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ZobkKCTaiY
@inproceedings{ li2023structural, title={Structural Similarities Between Language Models and Neural Response Measurements}, author={Jiaang Li and Antonia Karamolegkou and Yova Kementchedjhieva and Mostafa Abdou and Sune Lehmann and Anders S{\o}gaard}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=ZobkKCTaiY} }
Large language models have complicated internal dynamics, but induce representations of words and phrases whose geometry we can study. Human language processing is also opaque, but neural response measurements can provide (noisy) recordings of activations during listening or reading, from which we can extract similar representations of words and phrases. Here we study the extent to which the geometries induced by these representations share similarities in the context of brain decoding. We find that the larger neural language models get, the more their representations are structurally similar to neural response measurements from brain imaging.
Structural Similarities Between Language Models and Neural Response Measurements
[ "Jiaang Li", "Antonia Karamolegkou", "Yova Kementchedjhieva", "Mostafa Abdou", "Sune Lehmann", "Anders Søgaard" ]
Workshop/NeurReps
2306.01930
[ "https://github.com/coastalcph/brain2llm" ]
https://huggingface.co/papers/2306.01930
0
2
0
6
[]
[]
[]
[]
[]
[]
1
poster
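The entry above (Li et al., "Structural Similarities Between Language Models and Neural Response Measurements") describes a second-order comparison between the geometry of language-model representations and that of brain recordings. Below is a minimal illustrative sketch of one way such a comparison can be computed, in the spirit of representational similarity analysis; the array names, the distance metrics, and the Spearman-based second-order correlation are assumptions for illustration, not the authors' exact pipeline.

```python
# Illustrative sketch (assumed procedure, not the paper's exact pipeline): compare the
# pairwise-similarity structure of LM word embeddings with that of brain responses.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words = 200
lm_embeddings = rng.normal(size=(n_words, 768))     # placeholder LM representations
brain_responses = rng.normal(size=(n_words, 1000))  # placeholder neural measurements

# First-order geometry: condensed pairwise dissimilarity vectors for each space.
lm_rdm = pdist(lm_embeddings, metric="cosine")
brain_rdm = pdist(brain_responses, metric="correlation")

# Second-order comparison: rank correlation between the two dissimilarity structures.
rho, _ = spearmanr(lm_rdm, brain_rdm)
print(f"structural similarity (Spearman rho): {rho:.3f}")
```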
null
https://openreview.net/forum?id=ZLjUgDeGIC
@inproceedings{ zhou2023inrformer, title={{INRF}ormer: Neuron Permutation Equivariant Transformer on Implicit Neural Representations}, author={Lei Zhou and Varun Belagali and Joseph Bae and Prateek Prasanna and Dimitris Samaras}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=ZLjUgDeGIC} }
Implicit Neural Representations (INRs) have demonstrated both precision in continuous data representation and compactness in encapsulating high-dimensional data. Yet, much of contemporary research remains centered on data reconstruction using INRs, with limited exploration into processing INRs themselves. In this paper, we endeavor to develop a model tailored to process INRs explicitly for computer vision tasks. We conceptualize INRs as computational graphs with neurons as nodes and weights as edges. To process INR graphs, we propose INRFormer, which alternates node blocks and edge blocks. Within the node block, we further propose SlidingLayerAttention (SLA), which performs attention on nodes of three sequential INR layers. This sliding mechanism of the SLA across INR layers enables each layer's nodes to access a broader scope of the entire graph's information. In the edge block, every edge's feature vector is concatenated with the features of its two linked nodes, followed by a projection via an MLP. Ultimately, we formulate visual recognition as INR-to-INR (inr2inr) translation. That is, INRFormer transforms the input INR, which maps coordinates to image pixels, to a target INR, which maps the coordinates to the labels. We demonstrate INRFormer on CIFAR10.
INRFormer: Neuron Permutation Equivariant Transformer on Implicit Neural Representations
[ "Lei Zhou", "Varun Belagali", "Joseph Bae", "Prateek Prasanna", "Dimitris Samaras" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ZFu7CPtznY
@inproceedings{ crisostomi2023from, title={From Charts to Atlas: Merging Latent Spaces into One}, author={Donato Crisostomi and Irene Cannistraci and Luca Moschella and Pietro Barbiero and Marco Ciccone and Pietro Lio and Emanuele Rodol{\`a}}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=ZFu7CPtznY} }
Models trained on semantically related datasets and tasks exhibit comparable inter-sample relations within their latent spaces. We investigate in this study the aggregation of such latent spaces to create a unified space encompassing the combined information. To this end, we introduce Relative Latent Space Aggregation (RLSA), a two-step approach that first renders the spaces comparable using relative representations, and then aggregates them via a simple mean. We carefully divide a classification problem into a series of learning tasks under three different settings: sharing samples, classes, or neither. We then train a model on each task and aggregate the resulting latent spaces. We compare the aggregated space with that derived from an end-to-end model trained over all tasks and show that the two spaces are similar. We then observe that the aggregated space is better suited for classification, and empirically demonstrate that it is due to the unique imprints left by task-specific embedders within the representations. We finally test our framework in scenarios where no shared region exists and show that it can still be used to merge the spaces, albeit with diminished benefits over naive merging.
From Charts to Atlas: Merging Latent Spaces into One
[ "Donato Crisostomi", "Irene Cannistraci", "Luca Moschella", "Pietro Barbiero", "Marco Ciccone", "Pietro Lio", "Emanuele Rodolà" ]
Workshop/NeurReps
2311.06547
[ "https://github.com/crisostomi/latent-aggregation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
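The entry above (Crisostomi et al., "From Charts to Atlas") describes Relative Latent Space Aggregation as two steps: make the spaces comparable via relative representations, then aggregate them via a simple mean. The sketch below illustrates that two-step idea under the assumption that relative representations are cosine similarities to a shared set of anchor samples; the names, dimensions, and synthetic "spaces" are made up for illustration and are not the authors' implementation.

```python
# Sketch of the two-step aggregation idea (a simplified reading, not the paper's code):
# 1) make each latent space comparable via relative representations w.r.t. shared anchors,
# 2) aggregate the relative representations with a simple mean.
import numpy as np

def relative_representation(latents, anchor_idx):
    """Cosine similarity of every sample to a fixed set of anchor samples."""
    z = latents / np.linalg.norm(latents, axis=1, keepdims=True)
    anchors = z[anchor_idx]
    return z @ anchors.T  # shape: (n_samples, n_anchors)

rng = np.random.default_rng(0)
n, d, n_anchors = 500, 64, 32
anchor_idx = rng.choice(n, size=n_anchors, replace=False)  # anchors shared across spaces

# Two latent spaces for the same samples, e.g. from models trained on related tasks.
space_a = rng.normal(size=(n, d))
space_b = space_a @ rng.normal(size=(d, d)) + 0.1 * rng.normal(size=(n, d))

aggregated = np.mean(
    [relative_representation(space_a, anchor_idx),
     relative_representation(space_b, anchor_idx)],
    axis=0,
)  # unified space of shape (n, n_anchors)
print(aggregated.shape)
```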
null
https://openreview.net/forum?id=XGFy3oFu7h
@inproceedings{ liu2023growing, title={Growing Brains in Recurrent Neural Networks for Multiple Cognitive Tasks}, author={Ziming Liu and Mikail Khona and Ila Fiete and Max Tegmark}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=XGFy3oFu7h} }
Recurrent neural networks (RNNs) trained on a diverse ensemble of cognitive tasks, as described by Yang et al (2019); Khona et al. (2023), have been shown to exhibit functional modularity, where neurons organize into discrete functional clusters, each specialized for specific shared computational subtasks. However, these RNNs do not demonstrate anatomical modularity, where these functionally specialized clusters also have a distinct spatial organization. This contrasts with the human brain which has both functional and anatomical modularity. Is there a way to train RNNs to make them more like brains in this regard? We apply a recent machine learning method, brain-inspired modular training (BIMT), to encourage neural connectivity to be local in space. Consequently, hidden neuron organization of the RNN forms spatial structures reminiscent of those of the brain: spatial clusters which correspond to functional clusters. Compared to standard $L_1$ regularization and absence of regularization, BIMT exhibits superior performance by optimally balancing between task performance and sparsity. This balance is quantified both in terms of the number of active neurons and the cumulative wiring length. In addition to achieving brain-like organization in RNNs, our findings also suggest that BIMT holds promise for applications in neuromorphic computing and enhancing the interpretability of neural network architectures.
Growing Brains in Recurrent Neural Networks for Multiple Cognitive Tasks
[ "Ziming Liu", "Mikail Khona", "Ila Fiete", "Max Tegmark" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UGJkxLNVGh
@inproceedings{ shen2023are, title={Are {\textquotedblleft}Hierarchical{\textquotedblright} Visual Representations Hierarchical?}, author={Ethan Shen and Ali Farhadi and Aditya Kusupati}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=UGJkxLNVGh} }
Learned visual representations often capture large amounts of semantic information for accurate downstream applications. Human understanding of the world is fundamentally grounded in hierarchy. To mimic this and further improve representation capabilities, the community has explored "hierarchical'' visual representations that aim at modeling the underlying hierarchy of the visual world. In this work, we set out to investigate if hierarchical visual representations truly capture the human perceived hierarchy better than standard learned representations. To this end, we create HierNet, a suite of 12 datasets spanning 3 kinds of hierarchy from the BREEDs subset of ImageNet. After extensive evaluation of Hyperbolic and Matryoshka Representations across training setups, we conclude that they do not capture hierarchy any better than the standard representations but can assist in other aspects like search efficiency and interpretability. Our benchmark and the datasets are open-sourced at https://github.com/ethanlshen/HierNet.
Are “Hierarchical” Visual Representations Hierarchical?
[ "Ethan Shen", "Ali Farhadi", "Aditya Kusupati" ]
Workshop/NeurReps
[ "https://github.com/ethanlshen/hiernet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TOLUNEz5kI
@inproceedings{ briola2023homological, title={Homological Convolutional Neural Networks}, author={Antonio Briola and Yuanrong Wang and Silvia Bartolucci and Tomaso Aste}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=TOLUNEz5kI} }
Deep learning methods have demonstrated outstanding performance on classification and regression tasks on homogeneous data types (e.g., image, audio, and text data). However, tabular data still pose a challenge, with classic machine learning approaches often being computationally cheaper than, and as effective as, increasingly complex deep learning architectures. The challenge arises from the fact that, in tabular data, the correlation among features is weaker than that arising from spatial or semantic relationships in images or natural language, and the dependency structures need to be modeled without any prior information. In this work, we propose a novel deep learning architecture that exploits the data structural organization through topologically constrained network representations to gain relational information from sparse tabular inputs. The resulting model leverages the power of convolution and is centered on a limited number of concepts from network topology to guarantee: (i) a data-centric and deterministic building pipeline; (ii) a high level of interpretability over the inference process; and (iii) adequate room for scalability. We test our model on $18$ benchmark datasets against $5$ classic machine learning and $3$ deep learning models, demonstrating that our approach reaches state-of-the-art performance on these challenging datasets. The code to reproduce all our experiments is provided at https://github.com/FinancialComputingUCL/HomologicalCNN.
Homological Convolutional Neural Networks
[ "Antonio Briola", "Yuanrong Wang", "Silvia Bartolucci", "Tomaso Aste" ]
Workshop/NeurReps
2308.13816
[ "https://github.com/financialcomputingucl/homologicalcnn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TF2RcrcTP2
@inproceedings{ shewmake2023visual, title={Visual Scene Representation with Hierarchical Equivariant Sparse Coding}, author={Christian A Shewmake and Domas Buracas and Hansen Lillemark and Jinho Shin and Erik J Bekkers and Nina Miolane and Bruno Olshausen}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=TF2RcrcTP2} }
We propose a hierarchical neural network architecture for unsupervised learning of equivariant part-whole decompositions of visual scenes. In contrast to the global equivariance of group-equivariant networks, the proposed architecture exhibits equivariance to part-whole transformations throughout the hierarchy, which we term hierarchical equivariance. The model achieves such internal representations via hierarchical Bayesian inference, which gives rise to rich bottom-up, top-down, and lateral information flows, hypothesized to underlie the mechanisms of perceptual inference in visual cortex. We demonstrate these useful properties of the model on a simple dataset of scenes with multiple objects under independent rotations and translations.
Visual Scene Representation with Hierarchical Equivariant Sparse Coding
[ "Christian A Shewmake", "Domas Buracas", "Hansen Lillemark", "Jinho Shin", "Erik J Bekkers", "Nina Miolane", "Bruno Olshausen" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QcxL26Y23o
@inproceedings{ cannistraci2023from, title={From Bricks to Bridges: Product of Invariances to Enhance Latent Space Communication}, author={Irene Cannistraci and Luca Moschella and Marco Fumero and Valentino Maiorca and Emanuele Rodol{\`a}}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=QcxL26Y23o} }
It has been observed that representations learned by distinct neural networks conceal structural similarities when the models are trained under similar inductive biases. From a geometric perspective, identifying the classes of transformations and the related invariances that connect these representations is fundamental to unlocking applications, such as merging, stitching, and reusing different neural modules. However, estimating task-specific transformations a priori can be challenging and expensive due to several factors (e.g., weights initialization, training hyperparameters, or data modality). To this end, we introduce a versatile method to directly incorporate a set of invariances into the representations, constructing a product space of invariant components on top of the latent representations without requiring prior knowledge about the optimal invariance to infuse. We validate our solution on classification and reconstruction tasks, observing consistent latent similarity and downstream performance improvements in a zero-shot stitching setting. The experimental analysis comprises three modalities (vision, text, and graphs), twelve pretrained foundational models, eight benchmarks, and several architectures trained from scratch.
From Bricks to Bridges: Product of Invariances to Enhance Latent Space Communication
[ "Irene Cannistraci", "Luca Moschella", "Marco Fumero", "Valentino Maiorca", "Emanuele Rodolà" ]
Workshop/NeurReps
2310.01211
[ "" ]
https://huggingface.co/papers/2310.01211
0
0
0
5
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=NWyf3wb330
@inproceedings{ han2023symmetrybased, title={Symmetry-based Learning of Radiance Fields for Rigid Objects}, author={Zhiwei Han and Stefan Matthes and Hao Shen and Yuanting Liu}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=NWyf3wb330} }
In this work, we present SymObjectRF, a symmetry-based method that learns object-centric representations for rigid objects from one dynamic scene without hand-crafted annotations. SymObjectRF learns the appearance and surface geometry of all dynamic objects in their canonical poses and represents each individual object within its canonical pose using a canonical object field (COF). SymObjectRF imposes group equivariance on the rendering pipeline by transforming 3D point samples from world coordinates to object canonical poses. Subsequently, a permutation-invariant compositional renderer combines the color and density values queried from the learned COFs and reconstructs the input scene via volume rendering. SymObjectRF is then optimized by minimizing a scene reconstruction loss. We show the feasibility of SymObjectRF in learning object-centric representations both theoretically and empirically.
Symmetry-based Learning of Radiance Fields for Rigid Objects
[ "Zhiwei Han", "Stefan Matthes", "Hao Shen", "Yuanting Liu" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=NJS6568y79
@inproceedings{ ballester2023decorrelating, title={Decorrelating neurons using persistence}, author={Rub{\'e}n Ballester and Carles Casacuberta and Sergio Escalera}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=NJS6568y79} }
We propose a novel way to regularise deep learning models by reducing high correlations between neurons. For this, we present two regularisation terms computed from the weights of a minimum spanning tree of the clique whose vertices are the neurons of a given network (or a sample of those), where weights on edges are correlation dissimilarities. We explore their efficacy by performing a set of proof-of-concept experiments, for which our new regularisation terms outperform some popular ones. We demonstrate that, in these experiments, naive minimisation of all correlations between neurons obtains lower accuracies than our regularisation terms. This suggests that redundancies play a significant role in artificial neural networks, as evidenced by some studies in neuroscience for real networks. We include a proof of differentiability of our regularisers, thus developing the first effective topological persistence-based regularisation terms that consider the whole set of neurons and that can be applied to a feedforward architecture in any deep learning task such as classification, data generation, or regression.
Decorrelating neurons using persistence
[ "Rubén Ballester", "Carles Casacuberta", "Sergio Escalera" ]
Workshop/NeurReps
2308.04870
[ "https://github.com/rballeba/decorrelatingneuronsusingpersistence" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
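The entry above (Ballester et al., "Decorrelating neurons using persistence") describes regularisation terms built from the minimum spanning tree of the clique on neurons, with correlation dissimilarities as edge weights. The sketch below is one plausible instantiation of such a term, written for clarity rather than differentiability; the sign convention, the epsilon handling, and the use of SciPy's MST routine are assumptions, not the paper's implementation.

```python
# One plausible instantiation of the MST-based penalty described above (a sketch, not the paper's code).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_decorrelation_penalty(activations):
    """activations: (batch, n_neurons) array of post-activation values.

    Returns a scalar that decreases as neurons become less correlated, so adding it to a
    training loss encourages decorrelation (an assumed sign convention).
    """
    corr = np.corrcoef(activations, rowvar=False)   # (n_neurons, n_neurons)
    dissim = 1.0 - np.abs(corr)                     # correlation dissimilarity in [0, 1]
    np.fill_diagonal(dissim, 0.0)
    eps = 1e-8                                      # SciPy treats exact zeros as missing edges
    graph = dissim + eps * (1.0 - np.eye(dissim.shape[0]))
    mst_total = minimum_spanning_tree(graph).sum()  # total MST weight over dissimilarities
    return -mst_total                               # maximising MST weight spreads neurons apart

rng = np.random.default_rng(0)
acts = rng.normal(size=(256, 64))
print(mst_decorrelation_penalty(acts))
```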
null
https://openreview.net/forum?id=Mrssld4TyD
@inproceedings{ geng2023scalar, title={Scalar Invariant Networks with Zero Bias}, author={Chuqin Geng and Xiaojie Xu and Haolin Ye and Xujie Si}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=Mrssld4TyD} }
Just like weights, bias terms are learnable parameters in many popular machine learning models, including neural networks. Biases are believed to enhance the representational power of neural networks, enabling them to tackle various tasks in computer vision. Nevertheless, we argue that biases can be disregarded for some image-related tasks such as image classification, by considering the intrinsic distribution of images in the input space and desired model properties from first principles. Our empirical results suggest that zero-bias neural networks can perform comparably to normal networks for practical image classification tasks. Furthermore, we demonstrate that zero-bias neural networks possess a valuable property known as scalar (multiplicative) invariance. This implies that the network's predictions remain unchanged even when the contrast of the input image is altered. We further extend the scalar invariance property to more general cases, thereby attaining robustness within specific convex regions of the input space. We believe dropping bias terms can be considered as a geometric prior when designing neural network architecture for image classification, which shares the spirit of adapting convolutions as the translational invariance prior.
Scalar Invariant Networks with Zero Bias
[ "Chuqin Geng", "Xiaojie Xu", "Haolin Ye", "Xujie Si" ]
Workshop/NeurReps
2211.08486
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
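The entry above (Geng et al., "Scalar Invariant Networks with Zero Bias") claims that zero-bias networks are scalar (multiplicative) invariant in their predictions. The snippet below is a small numerical check of that property for a bias-free ReLU MLP: such a network is positively homogeneous, so scaling the input (a contrast change) scales all logits equally and leaves the arg-max prediction unchanged. The architecture and sizes here are arbitrary illustrations, not the paper's models.

```python
# Numerical check of the scalar-invariance property: a bias-free ReLU network satisfies
# f(c*x) = c*f(x) for c > 0, so rescaling the input never changes the arg-max prediction.
import numpy as np

rng = np.random.default_rng(0)
W1, W2, W3 = (rng.normal(size=s) for s in [(784, 256), (256, 128), (128, 10)])

def zero_bias_mlp(x):
    h = np.maximum(x @ W1, 0.0)   # ReLU layers with no bias terms
    h = np.maximum(h @ W2, 0.0)
    return h @ W3                 # logits

x = rng.normal(size=(5, 784))
for c in (0.1, 1.0, 7.3):
    logits = zero_bias_mlp(c * x)
    assert np.allclose(logits, c * zero_bias_mlp(x))                # positive homogeneity
    assert (logits.argmax(1) == zero_bias_mlp(x).argmax(1)).all()   # predictions unchanged
print("scalar (multiplicative) invariance of predictions verified")
```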
null
https://openreview.net/forum?id=Mo5qZaBl8v
@inproceedings{ nguyen2023fast, title={Fast Temporal Wavelet Graph Neural Networks}, author={Duc Thien Nguyen and Tuan Nguyen and Truong Son Hy and Risi Kondor}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=Mo5qZaBl8v} }
Spatio-temporal signal forecasting plays an important role in numerous domains, especially in neuroscience and transportation. The task is challenging due to the highly intricate spatial structure, as well as the non-linear temporal dynamics, of the network. To facilitate reliable and timely forecasts for the human brain and traffic networks, we propose Fast Temporal Wavelet Graph Neural Networks (FTWGNN), which are both time- and memory-efficient for learning tasks on time-series data with an underlying graph structure, thanks to multiresolution analysis and wavelet theory on discrete spaces. We employ Multiresolution Matrix Factorization (MMF) (Kondor et al., 2014) to factorize the highly dense graph structure and compute the corresponding sparse wavelet basis, which allows us to construct a fast wavelet convolution as the backbone of our novel architecture. Experimental results on the real-world PEMS-BAY and METR-LA traffic datasets and the AJILE12 ECoG dataset show that FTWGNN is competitive with the state of the art while maintaining a low computational footprint. Our PyTorch implementation is publicly available at https://github.com/HySonLab/TWGNN
Fast Temporal Wavelet Graph Neural Networks
[ "Duc Thien Nguyen", "Tuan Nguyen", "Truong Son Hy", "Risi Kondor" ]
Workshop/NeurReps
2302.08643
[ "https://github.com/hysonlab/twgnn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=LQoejMxeiv
@inproceedings{ kelshaw2023manifoldaugmented, title={Manifold-augmented Eikonal Equations: Geodesic Distances and Flows on Differentiable Manifolds.}, author={Daniel Kelshaw and Luca Magri}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=LQoejMxeiv} }
Manifolds discovered by machine learning models provide a compact representation of the underlying data. Geodesics on these manifolds define locally length-minimising curves and provide a notion of distance, which are key for reduced-order modelling, statistical inference, and interpolation. In this work, we propose a model-based parameterisation for distance fields and geodesic flows on manifolds, exploiting solutions of a manifold-augmented Eikonal equation. We demonstrate how the geometry of the manifold impacts the distance field, and exploit the geodesic flow to obtain globally length-minimising curves directly. This work opens opportunities for statistics and reduced-order modelling on differentiable manifolds.
Manifold-augmented Eikonal Equations: Geodesic Distances and Flows on Differentiable Manifolds.
[ "Daniel Kelshaw", "Luca Magri" ]
Workshop/NeurReps
[ "https://github.com/danielkelshaw/riemax" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
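For context on the entry above (Kelshaw & Magri, "Manifold-augmented Eikonal Equations"), the classical relationship it builds on is the Eikonal characterisation of geodesic distance on a Riemannian manifold. The following is a paraphrase of that standard result in generic notation, not the paper's exact formulation.

```latex
% Standard Eikonal characterisation of geodesic distance (generic notation, not the paper's).
\[
\lVert \nabla_g d(x) \rVert_g \;=\; \sqrt{\,g^{ij}(x)\,\partial_i d(x)\,\partial_j d(x)\,} \;=\; 1,
\qquad d(x_0) = 0,
\]
\[
\dot{\gamma}(t) \;=\; -\,\nabla_g d\big(\gamma(t)\big), \qquad \gamma(0) = x,
\]
% so integrating the (unit-norm) gradient flow of the distance field traces a
% length-minimising curve from x back to the source x_0.
```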
null
https://openreview.net/forum?id=KWUA0n6Dpv
@inproceedings{ suresh2023pitfalls, title={Pitfalls in Measuring Neural Transferability}, author={Suryaka Suresh and Vinayak Abrol and Anshul Thakur}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=KWUA0n6Dpv} }
Transferability scores quantify the aptness of the pre-trained models for a downstream task and help in selecting an optimal pre-trained model for transfer learning. This work aims to draw attention to the significant shortcomings of state-of-the-art transferability scores. To this aim, we propose neural collapse-based transferability scores that analyse intra-class variability collapse and inter-class discriminative ability of the penultimate embedding space of a pre-trained model. The experimentation across the image and audio domains demonstrates that such a simple variability analysis of the feature space is more than enough to satisfy the current definition of transferability scores, and there is a requirement for a new generic definition of transferability. Further, building on these results, we highlight new research directions and postulate characteristics of an ideal transferability measure that will be helpful in streamlining future studies targeting this problem.
Pitfalls in Measuring Neural Transferability
[ "Suryaka Suresh", "Vinayak Abrol", "Anshul Thakur" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=IqUVsae1iK
@inproceedings{ mansfield2023random, title={Random Field Augmentations for Self-Supervised Representation Learning}, author={Philip Mansfield and Arash Afkanpour and Warren Morningstar and Karan Singhal}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=IqUVsae1iK} }
Self-supervised representation learning is heavily dependent on data augmentations to specify the invariances encoded in representations. Previous work has shown that applying diverse data augmentations is crucial to downstream performance, but augmentation techniques remain under-explored. In this work, we propose a new family of local transformations based on Gaussian random fields to generate image augmentations for self-supervised representation learning. These transformations generalize the well-established affine and color transformations (translation, rotation, color jitter, etc.) and greatly increase the space of augmentations by allowing transformation parameter values to vary from pixel to pixel. The parameters are treated as continuous functions of spatial coordinates, and modeled as independent Gaussian random fields. Empirical results show the effectiveness of the new transformations for self-supervised representation learning. Specifically, we achieve a 1.7% top-1 accuracy improvement over baseline on ImageNet downstream classification, and a 3.6% improvement on out-of-distribution iNaturalist downstream classification. However, due to the flexibility of the new transformations, learned representations are sensitive to hyperparameters. While mild transformations improve representations, we observe that strong transformations can degrade the structure of an image, indicating that balancing the diversity and strength of augmentations is important for improving generalization of learned representations.
Random Field Augmentations for Self-Supervised Representation Learning
[ "Philip Mansfield", "Arash Afkanpour", "Warren Morningstar", "Karan Singhal" ]
Workshop/NeurReps
2311.03629
[ "" ]
https://huggingface.co/papers/2311.03629
1
6
0
4
[]
[]
[]
[]
[]
[]
1
poster
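The entry above (Mansfield et al., "Random Field Augmentations") describes augmentations whose parameters vary per pixel as Gaussian random fields. The sketch below illustrates the general idea with a single simple case: a spatially varying brightness gain drawn as a smooth random field, approximated by Gaussian-filtered white noise. The field construction, the brightness-only transformation, and the hyperparameters are assumptions for illustration; the paper's family of transformations is broader.

```python
# Illustrative sketch of a random-field augmentation (assumed construction, not the paper's recipe):
# a per-pixel brightness factor drawn as a smooth field, here Gaussian-filtered white noise.
import numpy as np
from scipy.ndimage import gaussian_filter

def random_field_brightness(image, strength=0.3, length_scale=16.0, rng=None):
    """image: (H, W, C) float array in [0, 1]."""
    rng = rng or np.random.default_rng()
    field = gaussian_filter(rng.normal(size=image.shape[:2]), sigma=length_scale)
    field = field / (np.abs(field).max() + 1e-8)   # normalise to roughly [-1, 1]
    factor = 1.0 + strength * field                # smooth, spatially varying gain
    return np.clip(image * factor[..., None], 0.0, 1.0)

augmented = random_field_brightness(np.random.rand(224, 224, 3), rng=np.random.default_rng(0))
print(augmented.shape, augmented.min(), augmented.max())
```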
null
https://openreview.net/forum?id=GX4axrya0A
@inproceedings{ yang2023changes, title={Changes in the geometry of hippocampal representations across brain states}, author={Wannan Yang and Chen Sun and Gyorgy Buzsaki}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=GX4axrya0A} }
The hippocampus (HPC) is a key structure underlying the brain's capacity to learn and generalize. One pervasive phenomenon in the brain, but missing in AI, is the presence of different gross brain states. It is known that these different brain states give rise to diverse modes of information processing that are imperative for the hippocampus to learn and function, but the mechanisms by which they do so remain unknown. To study this, we harnessed the power of recently developed dimensionality reduction techniques to shed light on how HPC representations change across brain states. We compared the geometry of HPC neuronal representations when rodents learn to generalize across different environments, and showed that HPC representations could support both pattern separation and generalization. Next, we compared HPC activity during different stages of sleep. Consistent with the literature, we found a robust recapitulation of the previous awake experience during non-rapid eye movement (NREM) sleep. But interestingly, such geometric correspondence to previous awake experience was not observed during rapid eye movement (REM) sleep, suggesting a very different mode of information processing. This is the first known report of UMAP analysis on hippocampal neuronal data during REM sleep. We propose that characterizing and contrasting the geometry of hippocampal representations during different brain states can help understand the brain's mechanisms for learning and, in the future, can even help design the next generation of AI that learns and generalizes better.
Changes in the geometry of hippocampal representations across brain states
[ "Wannan Yang", "Chen Sun", "Gyorgy Buzsaki" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=GUNTnnd4Hw
@inproceedings{ li2023entropymcmc, title={Entropy-{MCMC}: Sampling from Flat Basins with Ease}, author={Bolian Li and Ruqi Zhang}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=GUNTnnd4Hw} }
Bayesian deep learning counts on the quality of posterior distribution estimation. However, the posterior of deep neural networks is highly multi-modal in nature, with local modes exhibiting varying generalization performances. Given a practical budget, sampling from the original posterior can lead to suboptimal performances, as some samples may become trapped in "bad" modes and suffer from overfitting. Leveraging the observation that "good" modes with low generalization error often reside in flat basins of the energy landscape, we propose to bias the sampling on the posterior toward these flat regions. Specifically, we introduce an auxiliary guiding variable, the stationary distribution of which resembles a smoothed posterior free from sharp modes, to lead the MCMC sampler to flat basins. We prove the convergence of our method and further show that it converges faster than several existing flatness-aware methods in the strongly convex setting. Empirical results demonstrate that our method can successfully sample from flat basins of the posterior, and outperforms all compared baselines on multiple benchmarks including classification, calibration and out-of-distribution detection.
Entropy-MCMC: Sampling from Flat Basins with Ease
[ "Bolian Li", "Ruqi Zhang" ]
Workshop/NeurReps
2310.05401
[ "https://github.com/lblaoke/emcmc" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=EoyeHdfJ6l
@inproceedings{ maurel2023rototranslation, title={Roto-translation Equivariant {YOLO} for Aerial Images}, author={Benjamin Maurel and Samy Blusseau and Santiago Velasco-Forero and Teodora Petrisor}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=EoyeHdfJ6l} }
This work introduces Eq-YOLO, an Equivariant One-Stage Object Detector based on YOLO-v8 that incorporates group convolutions to handle rotational transformations. We show the benefit of using equivariant transforms to improve detection performance on rotated data over the regular YOLO-v8 model, while reducing the number of trainable parameters by a factor greater than three.
Roto-translation Equivariant YOLO for Aerial Images
[ "Benjamin Maurel", "Samy Blusseau", "Santiago Velasco-Forero", "Teodora Petrisor" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=DPkBeXZV7a
@inproceedings{ sbail{\`o}2023emergence, title={Emergence of Latent Binary Encoding in Deep Neural Network Classifiers}, author={Luigi Sbail{\`o} and Luca Ghiringhelli}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=DPkBeXZV7a} }
We observe the emergence of binary encoding within the latent space of deep-neural-network classifiers. Such binary encoding is induced by introducing a linear penultimate layer, which is equipped during training with a loss function that grows as $\exp(\vec{x}^2)$, where $\vec{x}$ are the coordinates in the latent space. The phenomenon we describe represents a specific instance of a well-documented occurrence known as \textit{neural collapse}, which arises in the terminal phase of training and entails the collapse of latent class means to the vertices of a simplex equiangular tight frame (ETF). We show that binary encoding accelerates convergence toward the simplex ETF and enhances classification accuracy.
Emergence of Latent Binary Encoding in Deep Neural Network Classifiers
[ "Luigi Sbailò", "Luca Ghiringhelli" ]
Workshop/NeurReps
2310.08224
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=CwJIpWzgDP
@inproceedings{ schaeffer2023testing, title={Testing Assumptions Underlying a Unified Theory for the Origin of Grid Cells}, author={Rylan Schaeffer and Mikail Khona and Adrian Bertagnoli and Sanmi Koyejo and Ila Fiete}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=CwJIpWzgDP} }
Representing and reasoning about physical space is fundamental to animal survival, and the mammalian lineage expresses a wealth of specialized neural representations that encode space. Grid cells, whose discovery earned a Nobel prize, are a striking example: a grid cell is a neuron that fires if and only if the animal is spatially located at the vertices of a regular triangular lattice that tiles all explored two-dimensional environments. Significant theoretical work has gone into understanding why mammals have learned these particular representations, and recent work has proposed a "unified theory for the computational and mechanistic origin of grid cells," claiming to answer why the mammalian lineage has learned grid cells. However, the Unified Theory makes a series of highly specific assumptions about the target readouts of grid cells - putatively place cells. In this work, we explicitly identify what these mathematical assumptions are, then test two of the critical assumptions using biological place cell data. At both the population and single-cell levels, we find evidence suggesting that neither of the assumptions is likely to be true in biological neural representations. These results call the Unified Theory into question, suggesting that biological grid cells likely have a different origin than those obtained in trained artificial neural networks.
Testing Assumptions Underlying a Unified Theory for the Origin of Grid Cells
[ "Rylan Schaeffer", "Mikail Khona", "Adrian Bertagnoli", "Sanmi Koyejo", "Ila Fiete" ]
Workshop/NeurReps
2311.16295
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=CeG8jzTL2k
@inproceedings{ granberry2023soequivariant, title={{SO}(3)-Equivariant Representation Learning in 2D Images}, author={Darnell Granberry and Alireza Nasiri and Jiayi Shou and Alex J Noble and Tristan Bepler}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=CeG8jzTL2k} }
Imaging physical objects that are free to rotate and translate in 3D is challenging. While an object’s pose and location do not change its nature, varying them presents problems for current vision models. Equivariant models account for these nuisance transformations, but current architectures only model either 2D transformations of 2D signals or 3D transformations of 3D signals. Here, we propose a novel convolutional layer consisting of 2D projections of 3D filters that models 3D equivariances of 2D signals—critical for capturing the full space of spatial transformations of objects in imaging domains such as cryo-EM. We additionally present methods for aggregating our rotation-specific outputs. We demonstrate improvement on several tasks, including particle picking and pose estimation.
SO(3)-Equivariant Representation Learning in 2D Images
[ "Darnell Granberry", "Alireza Nasiri", "Jiayi Shou", "Alex J Noble", "Tristan Bepler" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=BapLxMxUIm
@inproceedings{ tegn{\'e}r2023selfsupervised, title={Self-Supervised Latent Symmetry Discovery via Class-Pose Decomposition}, author={Gustaf Tegn{\'e}r and Hedvig Kjellstrom}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=BapLxMxUIm} }
In this paper, we explore the discovery of latent symmetries of data in a self-supervised manner. By considering sequences of observations undergoing uniform motion, we can extract a shared group transformation from the latent observations. In contrast to previous work, we utilize a latent space in which the group and orbit component are decomposed. We show that this construction facilitates more accurate identification of the properties of the underlying group, which consequently results in an improved performance on a set of sequential prediction tasks.
Self-Supervised Latent Symmetry Discovery via Class-Pose Decomposition
[ "Gustaf Tegnér", "Hedvig Kjellstrom" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=B3bxgPqxGT
@inproceedings{ d{\"o}nmez2023discovering, title={Discovering Latent Causes and Memory Modification: A Computational Approach Using Symmetry and Geometry}, author={Arif D{\"o}nmez}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=B3bxgPqxGT} }
We learn from our experiences, even though they are never exactly the same. This implies that we need to assess their similarity to apply what we have learned from one experience to another. It is proposed that we “cluster” our experiences based on hidden latent causes that we infer. It is also suggested that surprises, which occur when our predictions are incorrect, help us categorize our experiences into distinct groups. In this paper, we develop a computational theory that emulates these processes based on two basic concepts from intuitive physics and Gestalt psychology using symmetry and geometry. We apply our approach to simple tasks that involve inductive reasoning. Remarkably, the output of our computational approach aligns closely with human responses.
Discovering Latent Causes and Memory Modification: A Computational Approach Using Symmetry and Geometry
[ "Arif Dönmez" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ApeIFsnRvk
@inproceedings{ joseph2023on, title={On the Information Geometry of Vision Transformers}, author={Sonia Joseph and Kumar Krishna Agrawal and Arna Ghosh and Blake Aaron Richards}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=ApeIFsnRvk} }
Understanding the structure of high-dimensional representations learned by Vision Transformers (ViTs) provides a pathway toward developing a mechanistic understanding and further improving architecture design. In this work, we leverage tools from information geometry to characterize representation quality at a per-token (intra-token) level as well as across pairs of tokens (inter-token) in ViTs pretrained for object classification. In particular, we observe that these high-dimensional tokens exhibit a characteristic spectral decay in the feature covariance matrix. By measuring the rate of this decay (denoted by $\alpha$) for each token across transformer blocks, we discover an $\alpha$ signature, indicative of a transition from lower to higher effective dimensionality. We also demonstrate that tokens can be clustered based on their $\alpha$ signature, revealing that tokens corresponding to nearby spatial patches of the original image exhibit similar $\alpha$ trajectories. Furthermore, for measuring the complexity at the sequence level, we aggregate the correlation between pairs of tokens independently at each transformer block. A higher average correlation indicates a significant overlap between token representations and lower effective complexity. Notably, we observe a U-shaped trend across the model hierarchy, suggesting that token representations are more expressive in the intermediate blocks. Our findings provide a framework for understanding information processing in ViTs while providing tools to prune/merge tokens across blocks, thereby making the architectures more efficient.
On the Information Geometry of Vision Transformers
[ "Sonia Joseph", "Kumar Krishna Agrawal", "Arna Ghosh", "Blake Aaron Richards" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=9kxkYY7B3j
@inproceedings{ venkatesh2023the, title={The Variability of Representations in Mice and Humans Changes with Learning, Engagement, and Attention}, author={Praveen Venkatesh and Corbett C Bennett and Sam Gale and Juri Minxha and Hristos Courellis and Greggory Robert Heller and Tamina Keira Ramirez and Severine Durand and Ueli Rutishauser and Shawn R Olsen and Stefan Mihalas}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=9kxkYY7B3j} }
In responding to a visual stimulus, cortical neurons exhibit a high degree of variability, and this variability can be correlated across neurons. In this study, we use recordings from both mice and humans to systematically characterize how the variability in the representation of visual stimuli changes with learning, engagement and attention. We observe that in mice, familiarization with a set of images over many weeks reduces the variability of responses, but does not change its shape. Further, switching from passive to active task engagement changes the overall shape by shrinking the neural variability only along the task-relevant direction, leading to a higher signal-to-noise ratio. In a selective attention task in humans wherein multiple distributions are compared, a higher signal-to-noise ratio is obtained via a different mechanism, by mainly increasing the signal of the attended category. These findings show that representation variability can be adjusted with task needs. A potential speculative role for variability, consistent with these findings, is that it helps generalization.
The Variability of Representations in Mice and Humans Changes with Learning, Engagement, and Attention
[ "Praveen Venkatesh", "Corbett C Bennett", "Sam Gale", "Juri Minxha", "Hristos Courellis", "Greggory Robert Heller", "Tamina Keira Ramirez", "Severine Durand", "Ueli Rutishauser", "Shawn R Olsen", "Stefan Mihalas" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=9TQE2xGCbf
@inproceedings{ walker2023explicit, title={Explicit Neural Surfaces: Learning Continuous Geometry with Deformation Fields}, author={Thomas Walker and Octave Mariotti and Amir Vaxman and Hakan Bilen}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=9TQE2xGCbf} }
We introduce Explicit Neural Surfaces (ENS), an efficient smooth surface representation that directly encodes topology with a deformation field from a known base domain. We apply this representation to reconstruct explicit surfaces from multiple views, where we use a series of neural deformation fields to progressively transform the base domain into a target shape. By using meshes as discrete surface proxies, we train the deformation fields through efficient differentiable rasterization. Using a fixed base domain allows us to have Laplace-Beltrami eigenfunctions as an intrinsic positional encoding alongside standard extrinsic Fourier features, with which our approach can capture fine surface details. Compared to implicit surfaces, ENS trains faster and has several orders of magnitude faster inference times. The explicit nature of our approach also allows higher-quality mesh extraction whilst maintaining competitive surface reconstruction performance and real-time capabilities.
Explicit Neural Surfaces: Learning Continuous Geometry with Deformation Fields
[ "Thomas Walker", "Octave Mariotti", "Amir Vaxman", "Hakan Bilen" ]
Workshop/NeurReps
2306.02956
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=9G0e8QrpxP
@inproceedings{ kohler2023symmetric, title={Symmetric Models for Radar Response Modeling}, author={Colin Kohler and Nathan Vaska and Ramya Muthukrishnan and Whangbong Choi and Jung Yeon Park and Justin Goodwin and Rajmonda Caceres and Robin Walters}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=9G0e8QrpxP} }
Many radar applications require complex radar signature models that incorporate characteristics of an object's shape and dynamics as well as sensing effects. Even though high-fidelity, first-principles radar simulators are available, they tend to be resource-intensive and do not easily support the requirements of agile and large-scale AI development and evaluation frameworks. Deep learning represents an attractive alternative to these numerical methods, but can have large data requirements and limited generalization ability. In this work, we present the Radar Equivariant Model (REM), the first $SO(3)$-equivariant model for predicting radar responses from object meshes. By constraining our model to the symmetries inherent to radar sensing, REM is able to achieve a high level of reconstruction of signals generated by a first-principles radar model and shows improved performance and sample efficiency over other encoder-decoder models.
Symmetric Models for Radar Response Modeling
[ "Colin Kohler", "Nathan Vaska", "Ramya Muthukrishnan", "Whangbong Choi", "Jung Yeon Park", "Justin Goodwin", "Rajmonda Caceres", "Robin Walters" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=8zWcBUoeR6
@inproceedings{ wang2023the, title={The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry}, author={Dian Wang and Jung Yeon Park and Neel Sortur and Lawson Wong and Robin Walters and Robert Platt}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=8zWcBUoeR6} }
Extensive work has demonstrated that equivariant neural networks can significantly improve sample efficiency and generalization by enforcing an inductive bias in the network architecture. These applications typically assume that the domain symmetry is fully described by explicit transformations of the model inputs and outputs. However, many real-life applications contain only latent or partial symmetries which cannot be easily described by simple transformations of the input. In these cases, it is necessary to \emph{learn} symmetry in the environment instead of imposing it mathematically on the network architecture. We discover, surprisingly, that imposing equivariance constraints that do not exactly match the domain symmetry is very helpful in learning the true symmetry in the environment. We differentiate between \emph{extrinsic} and \emph{incorrect} symmetry constraints and show that while imposing incorrect symmetry can impede the model's performance, imposing extrinsic symmetry can actually improve performance. We demonstrate that an equivariant model can significantly outperform non-equivariant methods on domains with latent symmetries.
The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry
[ "Dian Wang", "Jung Yeon Park", "Neel Sortur", "Lawson Wong", "Robin Walters", "Robert Platt" ]
Workshop/NeurReps
2211.09231
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=8NA7a1mCdu
@inproceedings{ christiansen2023large, title={Large language models partially converge toward human-like concept organization}, author={Jonathan Gabel Christiansen and Mathias Gammelgaard and Anders S{\o}gaard}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=8NA7a1mCdu} }
Large language models show human-like performance in knowledge extraction, reasoning and dialogue, but it remains controversial whether this performance is best explained by memorization and pattern matching, or whether it reflects human-like inferential semantics and world knowledge. Knowledge bases such as WikiData provide large-scale, high-quality representations of inferential semantics and world knowledge. We show that large language models learn to organize concepts in ways that are strikingly similar to how concepts are organized in such knowledge bases. Knowledge bases model collective, institutional knowledge, and large language models seem to induce such knowledge from raw text. We show that bigger and better models exhibit more human-like concept organization, across four families of language models and three knowledge graph embeddings.
Large language models partially converge toward human-like concept organization
[ "Jonathan Gabel Christiansen", "Mathias Gammelgaard", "Anders Søgaard" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7jHGa1nS47
@inproceedings{ wilson2023cayley, title={Cayley Graph Propagation}, author={JJ Wilson and Petar Veli{\v{c}}kovi{\'c}}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=7jHGa1nS47} }
In spite of the plethora of success stories with graph neural networks (GNNs) on modelling graph-structured data, they are notoriously vulnerable to tasks which necessitate mixing of information between distant pairs of nodes, especially in the presence of bottlenecks in the graph. For this reason, a significant body of research has dedicated itself to discovering or pre-computing graph structures which ameliorate such bottlenecks. Bottleneck-free graphs are well-known in the mathematical community as *expander graphs*, with prior work—Expander Graph Propagation (EGP)—proposing the use of a well-known expander graph family—the Cayley graphs of the $\mathrm{SL}(2,\mathbb{Z}_n)$ special linear group—as a computational template for GNNs. However, despite its solid theoretical grounding, the actual computational graphs used by EGP are *truncated* Cayley graphs, which causes them to lose expansion properties. In this work, we propose to use the full Cayley graph within EGP, recovering significant improvements on datasets from the Open Graph Benchmark (OGB). Our empirical evidence suggests that the retention of the nodes in the expander graph can provide benefit for graph representation learning, which may provide valuable insight for future models.
Cayley Graph Propagation
[ "JJ Wilson", "Petar Veličković" ]
Workshop/NeurReps
2410.03424
[ "https://github.com/josephjwilson/cayley_graph_propagation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=5W9so5v0OU
@inproceedings{ mochizuki-freeman2023geometry, title={Geometry of abstract learned knowledge in deep {RL} agents}, author={James Mochizuki-Freeman and Md Rysul Kabir and Mitesh Gulecha and Zoran Tiganj}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=5W9so5v0OU} }
Data from neural recordings suggest that mammalian brains represent physical and abstract task-relevant variables through low-dimensional neural manifolds. In a recent electrophysiological study (Nieh et al., 2021), mice performed an evidence accumulation task while moving along a virtual track. Nonlinear dimensionality reduction of the population activity revealed that task-relevant variables were jointly mapped in an orderly manner in the low-dimensional space. Here we trained deep reinforcement learning (RL) agents on the same evidence accumulation task and found that their neural activity can be described with a low-dimensional manifold spanned by task-relevant variables. These results provide further insight into similarities and differences between neural dynamics in mammals and deep RL agents. Furthermore, we showed that manifold learning can be used to characterize the representational space of the RL agents with the potential to improve the interpretability of decision-making in RL.
Geometry of abstract learned knowledge in deep RL agents
[ "James Mochizuki-Freeman", "Md Rysul Kabir", "Mitesh Gulecha", "Zoran Tiganj" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=4OSJeCAMi6
@inproceedings{ han2023curvature, title={Curvature Fields from Shading Fields}, author={Xinran Han and Todd Zickler}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=4OSJeCAMi6} }
We re-examine the estimation of 3D shape from images that are caused by shading of diffuse Lambertian surfaces. We propose a neural model that is motivated by the well-documented perceptual effect in which shape is perceived from shading without a precise perception of lighting. Our model operates independently in each receptive field and produces a scalar statistic of surface curvature for that field. The model’s architecture builds on previous mathematical analyses of lighting-invariant shape constraints, and it leverages geometric structure to provide equivariance under image rotations and translations. Applying our model in parallel across a dense set of receptive fields produces a curvature field that we show is quite stable under changes to a surface’s albedo pattern (texture) and also to changes in lighting, even when lighting varies spatially across the surface.
Curvature Fields from Shading Fields
[ "Xinran Han", "Todd Zickler" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3ItzNHPov9
@inproceedings{ klee2023a, title={A Comparison of Equivariant Vision Models with ImageNet Pre-training}, author={David Klee and Jung Yeon Park and Robert Platt and Robin Walters}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=3ItzNHPov9} }
Neural networks pre-trained on large datasets provide useful embeddings for downstream tasks and allow researchers to iterate with less compute. For computer vision tasks, ImageNet pre-trained models can be easily downloaded for fine-tuning. However, no such pre-trained models are available that are equivariant to image transformations. In this work, we implement several equivariant versions of the residual network architecture and publicly release the weights after training on ImageNet. Additionally, we perform a comparison of enforced vs. learned equivariance in the largest data regime to date.
A Comparison of Equivariant Vision Models with ImageNet Pre-training
[ "David Klee", "Jung Yeon Park", "Robert Platt", "Robin Walters" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2sLBXyVsPE
@inproceedings{ mcneela2023almost, title={Almost Equivariance via Lie Algebra Convolutions}, author={Daniel McNeela}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=2sLBXyVsPE} }
Recently, the $\textit{equivariance}$ of models with respect to a group action has become an important topic of research in machine learning. Analysis of the built-in equivariance of existing neural network architectures, as well as the study of methods for building model architectures that explicitly ``bake in'' equivariance, have become significant research areas in their own right. However, imbuing an architecture with a specific group equivariance imposes a strong prior on the types of data transformations that the model expects to see. While strictly-equivariant models enforce symmetries, such as those due to rotations or translations, real-world data does not always follow such strict equivariances, be it due to noise in the data or underlying physical laws that encode only approximate or partial symmetries. In such cases, the prior of strict equivariance can actually prove too strong and cause models to underperform on real-world data. Therefore, in this work we study a closely related topic, that of $\textit{almost equivariance}$. We give a practical method for encoding almost equivariance in models by appealing to the Lie algebra of a Lie group and defining $\textit{Lie algebra convolutions}$. We demonstrate that Lie algebra convolutions offer several benefits over Lie group convolutions, including being computationally tractable and well-defined for non-compact groups. Finally, we demonstrate the validity of our approach by benchmarking against datasets in fully equivariant and almost equivariant settings.
Almost Equivariance via Lie Algebra Convolutions
[ "Daniel McNeela" ]
Workshop/NeurReps
2310.13164
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2EuaV9an6m
@inproceedings{ sonoda2023deep, title={Deep Ridgelet Transform: Voice with Koopman Operator Constructively Proves Universality of Formal Deep Networks}, author={Sho Sonoda and Yuka Hashimoto and Isao Ishikawa and Masahiro Ikeda}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=2EuaV9an6m} }
We identify hidden layers inside a deep neural network (DNN) with group actions on the data domain, and formulate a formal deep network as a dual voice transform with respect to the Koopman operator, a linear representation of the group action. Based on the group theoretic arguments, particularly by using Schur's lemma, we show a simple proof of the universality of DNNs.
Deep Ridgelet Transform: Voice with Koopman Operator Constructively Proves Universality of Formal Deep Networks
[ "Sho Sonoda", "Yuka Hashimoto", "Isao Ishikawa", "Masahiro Ikeda" ]
Workshop/NeurReps
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=0vj5llDXVO
@inproceedings{ nguyen2023learning, title={Learning Symmetrization for Equivariance with Orbit Distance Minimization}, author={Dat Tien Nguyen and Jinwoo Kim and Hongseok Yang and Seunghoon Hong}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=0vj5llDXVO} }
We present a general framework for symmetrizing an arbitrary neural-network architecture and making it equivariant with respect to a given group. We build upon the proposals of Kim et al. (2023); Kaba et al. (2023) for symmetrization, and improve them by replacing their conversion of neural features into group representations, with an optimization whose loss intuitively measures the distance between group orbits. This change makes our approach applicable to a broader range of matrix groups, such as the Lorentz group O(1, 3), than these two proposals. We experimentally show our method’s competitiveness on the SO(2) image classification task, and also its increased generality on the task with O(1, 3). Our implementation will be made accessible at https://github.com/tiendatnguyen-vision/Orbit-symmetrize.
Learning Symmetrization for Equivariance with Orbit Distance Minimization
[ "Dat Tien Nguyen", "Jinwoo Kim", "Hongseok Yang", "Seunghoon Hong" ]
Workshop/NeurReps
2311.07143
[ "https://github.com/tiendatnguyen-vision/orbit-symmetrize" ]
https://huggingface.co/papers/2311.07143
0
1
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=0Atc0bcU6x
@inproceedings{ cesa2023algebraic, title={Algebraic Topological Networks via the Persistent Local Homology Sheaf}, author={Gabriele Cesa and Arash Behboodi}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=0Atc0bcU6x} }
In this work, we introduce a novel approach based on algebraic topology to enhance graph convolution and attention modules by incorporating local topological properties of the data. To do so, we consider the framework of sheaf neural networks, which has been previously leveraged to incorporate additional structure into graph neural networks’ features and construct more expressive, non-isotropic messages. Specifically, given an input simplicial complex (e.g. generated by the cliques of a graph or the neighbors in a point cloud), we construct its local homology sheaf, which assigns to each node the vector space of its local homology. The intermediate features of our networks live in these vector spaces and we leverage the associated sheaf Laplacian to construct more complex linear messages between them. Moreover, we extend this approach by considering the persistent version of local homology associated with a weighted simplicial complex (e.g., built from pairwise distances of nodes embeddings). This i) solves the problem of the lack of a natural choice of basis for the local homology vector spaces and ii) makes the sheaf itself differentiable, which enables our models to directly optimize the topology of their intermediate features.
Algebraic Topological Networks via the Persistent Local Homology Sheaf
[ "Gabriele Cesa", "Arash Behboodi" ]
Workshop/NeurReps
2311.10156
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=030cPt4d8i
@inproceedings{ marchetti2023neural, title={Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach}, author={Giovanni Luca Marchetti and Gabriele Cesa and Kumar Pratik and Arash Behboodi}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=030cPt4d8i} }
Lattice reduction is a combinatorial optimization problem aimed at finding the most orthogonal basis in a given lattice. In this work, we address lattice reduction via deep learning methods. We design a deep neural model outputting factorized unimodular matrices and train it in a self-supervised manner by penalizing non-orthogonal lattice bases. We incorporate the symmetries of lattice reduction into the model by making it invariant and equivariant with respect to appropriate continuous and discrete groups.
Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach
[ "Giovanni Luca Marchetti", "Gabriele Cesa", "Kumar Pratik", "Arash Behboodi" ]
Workshop/NeurReps
2311.08170
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=x4I3Ze3tP6
@inproceedings{ agrawal2024do, title={Do Language Models Know When They're Hallucinating References?}, author={Ayush Agrawal and Mirac Suzgun and Lester Mackey and Adam Kalai}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=x4I3Ze3tP6} }
State-of-the-art language models (LMs) are famous for "hallucinating" references. These fabricated article and book titles lead to harms, obstacles to their use, and public backlash. While other types of LM hallucinations are also important, we propose hallucinated references as the "drosophila" of research on hallucination in large language models (LLMs), as they are particularly easy to study. We show that simple search engine queries reliably identify such hallucinations, which facilitates evaluation. To begin to dissect the nature of hallucinated LM references, we attempt to classify them using black-box queries to the same LM, without consulting any external resources. Consistency checks done with _direct_ queries about whether the generated reference title is real (inspired by Kadavath et al. (2022), Lin et al. (2022) and Manakul (2023)) are compared to consistency checks with _indirect_ queries which ask for ancillary details such as the authors of the work. These consistency checks are found to be partially reliable indicators of whether or not the reference is a hallucination. In particular, we find that LMs often hallucinate _differing_ authors of hallucinated references when queried in independent sessions, while _consistently_ identifying the authors of real references. This suggests that the hallucination may be more of a generation issue than one inherent to current training techniques or representations.
Do Language Models Know When They're Hallucinating References?
[ "Ayush Agrawal", "Mirac Suzgun", "Lester Mackey", "Adam Kalai" ]
Workshop/ICBINB
2305.18248
[ "https://github.com/microsoft/hallucinated-references" ]
https://huggingface.co/papers/2305.18248
2
0
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=w7o14LCw9P
@inproceedings{ zheng2024why, title={Why Does Chat{GPT} Fall Short in Providing Truthful Answers?}, author={Shen Zheng and Jie Huang and Kevin Chang}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=w7o14LCw9P} }
Recent advancements in large language models, such as ChatGPT, have demonstrated significant potential to impact various aspects of human life. However, ChatGPT still faces challenges in providing reliable and accurate answers to user questions. To better understand the model’s particular weaknesses in providing truthful answers, we embark on an in-depth exploration of open-domain question answering. Specifically, we undertake a detailed examination of ChatGPT’s failures, categorized into: comprehension, factuality, specificity, and inference. We further pinpoint factuality as the most contributing failure and identify two critical abilities associated with factuality: knowledge memorization and knowledge recall. Through experiments focusing on factuality, we propose several potential enhancement strategies. Our findings suggest that augmenting the model with granular external knowledge and cues for knowledge recall can enhance the model’s factuality in answering questions.
Why Does ChatGPT Fall Short in Providing Truthful Answers?
[ "Shen Zheng", "Jie Huang", "Kevin Chang" ]
Workshop/ICBINB
2304.10513
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=vxfkVY2SLj
@inproceedings{ garg2024on, title={On the performance of Multimodal Language Models}, author={Utsav Garg and Erhan Bas}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=vxfkVY2SLj} }
Instruction-tuned large language models (LLMs) have demonstrated promising zero-shot generalization capabilities across various downstream tasks. Recent research has introduced multimodal capabilities to LLMs by integrating independently pretrained vision encoders through model grafting. These multimodal variants undergo instruction tuning, similar to LLMs, enabling effective zero-shot generalization for multimodal tasks. This study conducts a comparative analysis of different multimodal instruction tuning approaches and evaluates their performance across a range of tasks, including complex reasoning, conversation, image captioning, multiple-choice questions (MCQs), and binary classification. Through rigorous benchmarking and ablation experiments, we reveal key insights for guiding architectural choices when incorporating multimodal capabilities into LLMs. However, current approaches have limitations; they do not sufficiently address the need for a diverse multimodal instruction dataset, which is crucial for enhancing task generalization. Additionally, they overlook issues related to truthfulness and factuality when generating responses. These findings illuminate current methodological constraints in adapting language models for image comprehension and provide valuable guidance for researchers and practitioners seeking to harness multimodal versions of LLMs.
On the performance of Multimodal Language Models
[ "Utsav Garg", "Erhan Bas" ]
Workshop/ICBINB
2310.03211
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=vAiEQBh2AW
@inproceedings{ schwinn2024adversarial, title={Adversarial Attacks and Defenses in Large Language Models: Old and New Threats}, author={Leo Schwinn and David Dobre and Stephan G{\"u}nnemann and Gauthier Gidel}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=vAiEQBh2AW} }
Over the past decade, there has been extensive research aimed at enhancing the robustness of neural networks, yet this problem remains vastly unsolved. Here, one major impediment has been the overestimation of the robustness of new defense approaches due to faulty defense evaluations. Flawed robustness evaluations necessitate rectifications in subsequent works, dangerously slowing down the research and providing a false sense of security. In this context, we will face substantial challenges associated with an impending adversarial arms race in natural language processing, specifically with closed-source Large Language Models (LLMs), such as ChatGPT, Google Bard, or Anthropic’s Claude. We provide a first set of prerequisites to improve the robustness assessment of new approaches and reduce the amount of faulty evaluations. Additionally, we identify embedding space attacks on LLMs as another viable threat model for the purposes of generating malicious content in open-sourced models. Finally, we demonstrate on a recently proposed defense that, without LLM-specific best practices in place, it is easy to overestimate the robustness of a new approach. Code is available at https://anonymous.4open.science/r/LLM_Embedding_Attack-6C3C
Adversarial Attacks and Defenses in Large Language Models: Old and New Threats
[ "Leo Schwinn", "David Dobre", "Stephan Günnemann", "Gauthier Gidel" ]
Workshop/ICBINB
2310.19737
[ "https://github.com/schwinnl/llm_embedding_attack" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tGM7rOmJzV
@inproceedings{ chen2024transformerbased, title={Transformer-Based Large Language Models Are Not General Learners: A Universal Circuit Perspective}, author={Yang Chen and Yitao Liang and Zhouchen Lin}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=tGM7rOmJzV} }
Large Language Models (LLMs) have demonstrated remarkable proficiency across diverse tasks, evoking perceptions of ``sparks of Artificial General Intelligence (AGI)". A key question naturally arises: *Can foundation models lead to AGI?* In this work, we try to answer this question partially by formally considering the capabilities of Transformer-based LLMs (T-LLMs) from the perspective of universal circuits. By investigating the expressive power of realistic T-LLMs as universal circuits, we show that a T-LLM of size $\operatorname{poly}(n)$ cannot perform all the basic operators of input length $O\left(\operatorname{poly}(\log n)\right)$. We also demonstrate that a constant-depth-$\operatorname{poly}(n)$-size log-precision T-LLM cannot faithfully execute prompts of complexity $n$. Our analysis provides a concrete theoretical foundation that T-LLMs can only be universal circuits for limited function classes. In other words, T-LLMs are not general learners. Furthermore, we exhibit that a constant-depth-$\operatorname{poly}(n)$-size log-precision T-LLM can memorize $O\left(\operatorname{poly}(n)\right)$ instances, which could partially explain the seeming inconsistency between LLMs' empirical successes and our negative results. To the best of our knowledge, our work takes the first step towards analyzing the limitations of T-LLMs as general learners within a rigorous theoretical framework. Our results promote the understanding of LLMs' capabilities and highlight the need for innovative architecture designs beyond Transformers to break current limitations.
Transformer-Based Large Language Models Are Not General Learners: A Universal Circuit Perspective
[ "Yang Chen", "Yitao Liang", "Zhouchen Lin" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tCZFmDyPFm
@inproceedings{ du2024a, title={A Study on Improving Reasoning in Language Models}, author={Yuqing Du and Alexander Havrilla and Sainbayar Sukhbaatar and Pieter Abbeel and Roberta Raileanu}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=tCZFmDyPFm} }
Accurately carrying out complex reasoning is a crucial component of deployable and reliable language models. While current language models can exhibit this capability with few-shot guidance, accurate reasoning is primarily restricted to larger model sizes. In this work, we explore methods for improving the reasoning capabilities of smaller language models which are more deployable than their larger counterparts. Specifically, we look at variations of supervised learning, online reinforcement learning with PPO, and distillation from larger models. Surprisingly, for reasoning tasks such as CommonsenseQA and GSM8K, we find that simple filtered supervised learning often outperforms reward-conditioned supervised learning, and that simpler iterative supervised learning performs on par with online reinforcement learning.
A Study on Improving Reasoning in Language Models
[ "Yuqing Du", "Alexander Havrilla", "Sainbayar Sukhbaatar", "Pieter Abbeel", "Roberta Raileanu" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qED8CGow7f
@inproceedings{ lee2024interactive, title={Interactive Model Correction with Natural Language}, author={Yoonho Lee and Michelle Lam and Helena Vasconcelos and Michael Bernstein and Chelsea Finn}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=qED8CGow7f} }
In supervised learning, models are trained to extract correlations from a static dataset. This often leads to models that rely on spurious correlations that fail to generalize to new data distributions, such as a bird classifier that relies on the background of an image. Preventing models from latching on to spurious correlations necessarily requires additional information beyond labeled data. Existing methods incorporate forms of additional instance-level supervision, such as labels for spurious features or additional labeled data from a balanced distribution. Such strategies can become prohibitively costly for large-scale datasets since they require additional annotation at a scale close to the original training data. We hypothesize that far less supervision suffices if we provide targeted feedback about the misconceptions of models trained on a given dataset. We introduce Clarify, a novel natural language interface and method for interactively correcting model misconceptions. Through Clarify, users need only provide a short text description to describe a model's consistent failure patterns, such as ``water background'' for a bird classifier. Then, in an entirely automated way, we use such descriptions to improve the training process by reweighting the training data or gathering additional targeted data. Our empirical results show that non-expert users can successfully describe model misconceptions via Clarify, improving worst-group accuracy by an average of 7.3% in two datasets with spurious correlations. Finally, we use Clarify to find and rectify 31 novel spurious correlations in ImageNet, improving minority-split accuracy from 21.1% to 28.7%.
Interactive Model Correction with Natural Language
[ "Yoonho Lee", "Michelle Lam", "Helena Vasconcelos", "Michael Bernstein", "Chelsea Finn" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=pTEm4Gz7xL
@inproceedings{ tan2024structureaware, title={Structure-Aware Path Inference for Neural Finite State Transducers}, author={Weiting Tan and Chu-Cheng Lin and Jason Eisner}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=pTEm4Gz7xL} }
Finite-state transducers (FSTs) are a traditional approach to string-to-string mapping. Each FST path specifies a possible alignment of input and output strings. Compared to an unstructured seq2seq model, the FST includes an explicit latent alignment variable and equips it with domain-specific hard constraints and featurization, which can improve generalization from small training sets. Previous work has shown how to score the FST paths with a trainable neural architecture; this improves the model's expressive power by dropping the usual Markov assumption but makes inference more difficult for the same reason. In this paper, we focus on the resulting challenge of imputing the latent alignment path that explains a given pair of input and output strings (e.g. during training). We train three autoregressive approximate models for amortized inference of the path, which can then be used as proposal distributions for importance sampling. All three models perform lookahead. Our most sophisticated (and novel) model leverages the FST structure to consider the graph of future paths; unfortunately, we find that it loses out to the simpler approaches---except on an \emph{artificial} task that we concocted to confuse the simpler approaches.
Structure-Aware Path Inference for Neural Finite State Transducers
[ "Weiting Tan", "Chu-Cheng Lin", "Jason Eisner" ]
Workshop/ICBINB
2312.13614
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nZM9Wxu3vw
@inproceedings{ nayak2024analyzing, title={Analyzing the factual knowledge of parameter efficient instruction tuned mid-size Large Language Models}, author={Anmol Nayak and Hariprasad Timmapathini}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=nZM9Wxu3vw} }
Large Language Models (LLM) have significantly improved Natural Language Processing (NLP) by enhancing the accuracy, efficiency, and versatility of various NLP applications, from text generation to language translation, due to their ability to capture and leverage vast amounts of linguistic and factual knowledge. While LLM have pushed the boundaries, they typically need to be further instruction tuned to get improved performance on niche applications. In this paper, we focus on analyzing the factual knowledge of LLM keeping in mind the practical aspects of using LLM by: 1) training only a small injection model (having ≈ 0.05 % of the parameters of the base LLM) using the Low-Rank Adaptation (LoRA) parameter-efficient technique, and 2) restricting our study to Llama-2-13b-chat and StableBeluga-13B, which are two mid-size LLM having 13 billion parameters and are based on the Llama 2 architecture. The injection model is instruction tuned for Knowledge Base (KB) construction on the LM-KBC 2023 challenge dataset, which contains subject-relation-object triplets of Wikipedia entities across 21 different factual relations. Our empirical analysis shows that even after instruction tuning, the LLM are: 1) deficient in foundational knowledge of many must-know areas like Geography, 2) unable to effectively use the context supplied in the prompt, and 3) fragile to subtle changes in prompt at inference. The source code for our experiments can be found at: https://github.com/Ffc1234/NIPS_ICBINB_submission
Analyzing the factual knowledge of parameter efficient instruction tuned mid-size Large Language Models
[ "Anmol Nayak", "Hariprasad Timmapathini" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=loTgtzhoI2
@inproceedings{ georgiev2024beyond, title={Beyond Erdos-Renyi: Generalization in Algorithmic Reasoning on Graphs}, author={Dobrik Georgiev and Pietro Lio and Jakub Bachurski and Junhua Chen and Tunan Shi}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=loTgtzhoI2} }
Neural algorithmic reasoning excels in many graph algorithms, but assessment mainly focuses on the Erdős-Rényi (ER) graph family. This study delves into graph algorithmic models' generalization across diverse distributions. Testing a leading model exposes overreliance on ER graphs for generalization assessment. We further investigate two scenarios: generalisation to every target distribution and to single target distributions. Our results suggest that achieving the former is not as trivial, and that achieving the latter can be aided by selecting the source distribution via a novel Tree Mover's Distance interpretation.
Beyond Erdos-Renyi: Generalization in Algorithmic Reasoning on Graphs
[ "Dobrik Georgiev", "Pietro Lio", "Jakub Bachurski", "Junhua Chen", "Tunan Shi" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=lJWTOSxWgd
@inproceedings{ sharma2024exploring, title={Exploring and Improving the Spatial Reasoning Abilities of Large Language Models}, author={Manasi Sharma}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=lJWTOSxWgd} }
Large Language Models (LLMs) represent formidable tools for sequence modeling, boasting an innate capacity for general pattern recognition. Nevertheless, their broader spatial reasoning capabilities remain insufficiently explored. In this paper, we investigate the zero-shot performance of LLMs when confronted with a limited dataset comprising 3D robotic trajectory data and associated tasks, such as directional and motion labeling. Additionally, we introduce a novel prefix-based prompting mechanism, which yields a 30\% improvement on the 3D trajectory data and an increase of up to 16\% on SpartQA tasks when contrasted with the conventional vanilla prompt baseline (with gains over Chain-of-Thought prompting as well). The experimentation with 3D trajectory data offers an intriguing glimpse into the manner in which LLMs engage with numerical and spatial information, thus laying a solid foundation for the identification of target areas for future enhancements.
Exploring and Improving the Spatial Reasoning Abilities of Large Language Models
[ "Manasi Sharma" ]
Workshop/ICBINB
2312.01054
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=l188N6IZNY
@inproceedings{ heim2024towards, title={Towards Better Understanding of Domain Shift on Linear-Probed Visual Foundation Models}, author={Eric Heim}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=l188N6IZNY} }
Visual foundation models have recently emerged to offer similar promise as their language counterparts: The ability to produce representations of visual data that can be successfully used in a variety of tasks and contexts. One common way this is shown in research literature is through “domain generalization” experiments of linear models trained from representations produced by foundation models (i.e. linear probes). These experiments largely limit themselves to a small number of benchmark data sets and report accuracy as the single figure of merit, but give little insight beyond these numbers as to how different foundation models represent shifts. In this work we perform an empirical evaluation that expands the scope of previously reported results in order to give better understanding into how domain shifts are modeled. Namely, we investigate not just how models generalize across domains, but how models may enable domain transfer. Our evaluation spans a number of recent visual foundation models and benchmarks. We find that not only do linear probes fail to generalize on some shift benchmarks, but linear probes trained on some shifted data achieve low train accuracy, indicating that accurate transfer of linear probes is not possible with some visual foundation models.
Towards Better Understanding of Domain Shift on Linear-Probed Visual Foundation Models
[ "Eric Heim" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ZzOinWt0sh
@inproceedings{ homan2024how, title={How Many Raters Do You Need? Power Analysis for Foundation Models}, author={Christopher M Homan and Shira Wein and Chris Welty and Lora Aroyo}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=ZzOinWt0sh} }
Due to their highly stochastic nature, as well as the complexity of the tasks they can perform, foundation models (large machine learning models) are poorly suited for conventional machine learning evaluation methods. This is because machine learning evaluation methods typically assume behavior to be deterministic and simple enough to be measured against gold standard data with unitary, authoritative, "correct" answers using straightforward metrics such as accuracy, precision, and recall. In this work, we propose an evaluation framework suitable for foundation models, which takes into account variance in the responses of both machine model and human rater. Utilizing recent advances in p-value estimation, we investigate the trade-offs between the number of items in a test set, the number of responses per item, the sampling method, and the metric, when measuring the comparative differences between two hypothetical foundation models at various degrees of similarity. When two models are very far apart in their predictive performance, fewer raters are needed to confidently compare them, as expected. However, as the models draw closer, we find that a larger number of annotators than are currently typical in annotation collection are needed to ensure the power analysis correctly reflects the difference in performance.
How Many Raters Do You Need? Power Analysis for Foundation Models
[ "Christopher M Homan", "Shira Wein", "Chris Welty", "Lora Aroyo" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=YlhKbQ0zF3
@inproceedings{ hsu2024can, title={Can Visual Scratchpads With Diagrammatic Abstractions Augment {LLM} Reasoning?}, author={Joy Hsu and Gabriel Poesia and Jiajun Wu and Noah Goodman}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=YlhKbQ0zF3} }
When humans reason about complex text-based questions, we leverage diagrammatic abstractions drawn on a visual scratchpad. In this paper, we introduce and explore the capabilities of Visual-Scratchpad, a method that augments a *large language foundation model* (LLM) with diagrammatic execution and readout. We enable the LLM to generate drawing commands and to readout abstractions from the resulting picture. The visual readout operation uses a *visual foundation model*, optionally finetuned with expert iteration. Here, we show that although Visual-Scratchpad outperforms an inference-only LLM, it surprisingly yields worse performance compared to a single finetuned LLM. Through experiments, we propose that this gap is due to the failure mode of vision foundation models in understanding abstractions in diagrams.
Can Visual Scratchpads With Diagrammatic Abstractions Augment LLM Reasoning?
[ "Joy Hsu", "Gabriel Poesia", "Jiajun Wu", "Noah Goodman" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WIzlQGKgVP
@inproceedings{ mejia2024exploring, title={Exploring {DINO}: Emergent Properties and Limitations for Synthetic Aperture Radar Imagery}, author={Joseph Alejandro Gallego Mejia and Anna Jungbluth and Laura Mart{\'\i}nez-Ferrer and Francisco Dorr and Matthew Allen and Freddie Kalaitzis and Ra{\'u}l Ramos-Poll{\'a}n}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=WIzlQGKgVP} }
Self-supervised learning (SSL) models have recently demonstrated remarkable performance across various tasks, including image segmentation. This study delves into the emergent characteristics of the Self-Distillation with No Labels (DINO) algorithm and its application to Synthetic Aperture Radar (SAR) imagery. We pre-train a vision transformer (ViT)-based DINO model using unlabeled SAR data, and later fine-tune the model to predict high-resolution land cover maps. We rigorously evaluate the utility of attention maps generated by the ViT backbone, and compare them with the model's token embedding space. We observe a small improvement in model performance with pre-training compared to training from scratch, and discuss the limitations and opportunities of SSL for remote sensing and land cover segmentation. Beyond small performance increases, we show that ViT attention maps hold great intrinsic value for remote sensing, and could provide useful inputs to other algorithms. With this, our work lays the groundwork for bigger and better SSL models for Earth Observation.
Exploring DINO: Emergent Properties and Limitations for Synthetic Aperture Radar Imagery
[ "Joseph Alejandro Gallego Mejia", "Anna Jungbluth", "Laura Martínez-Ferrer", "Francisco Dorr", "Matthew Allen", "Freddie Kalaitzis", "Raúl Ramos-Pollán" ]
Workshop/ICBINB
2310.03513
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TGTZwVabpU
@inproceedings{ berglund2024the, title={The Reversal Curse: {LLM}s trained on ''A is B'' fail to learn ''B is A''}, author={Lukas Berglund and Meg Tong and Maximilian Kaufmann and Mikita Balesni and Asa Stickland and Tomasz Korbak and Owain Evans}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=TGTZwVabpU} }
We expose a surprising failure of generalization in auto-regressive large language models (LLMs). If a model is trained on a sentence of the form "*A is B*", it will not automatically generalize to the reverse direction "*B is A*". This is the **Reversal Curse**. For instance, if a model is trained on "Olaf Scholz was the ninth Chancellor of Germany", it will not automatically be able to answer the question, "Who was the ninth Chancellor of Germany?". Moreover, the likelihood of the correct answer ("Olaf Scholz") will not be higher than for a random name. Thus, models exhibit a basic failure of logical deduction and do not generalize a prevalent pattern in their training set (i.e. if "*A is B*" occurs, "*B is A*" is more likely to occur). We provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as "Uriah Hawthorne is the composer of *Abyssal Melodies*" and showing that they fail to correctly answer "Who composed *Abyssal Melodies?*". The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation. We also evaluate ChatGPT (GPT-3.5 and GPT-4) on questions about real-world celebrities, such as "Who is Tom Cruise's mother? [A: Mary Lee Pfeiffer]" and the reverse "Who is Mary Lee Pfeiffer's son?". GPT-4 correctly answers questions like the former 79% of the time, compared to 33% for the latter. This shows a failure of logical deduction that we hypothesize is caused by the Reversal Curse.
The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A"
[ "Lukas Berglund", "Meg Tong", "Maximilian Kaufmann", "Mikita Balesni", "Asa Stickland", "Tomasz Korbak", "Owain Evans" ]
Workshop/ICBINB
2309.12288
[ "https://github.com/lukasberglund/reversal_curse" ]
https://huggingface.co/papers/2309.12288
2
3
0
7
[]
[ "lberglund/reversal_curse" ]
[]
[]
[ "lberglund/reversal_curse" ]
[]
1
poster
null
https://openreview.net/forum?id=SGiQxu8zFL
@inproceedings{ kang2024deficiency, title={Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination}, author={Haoqiang Kang and Xiao-Yang Liu}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=SGiQxu8zFL} }
The hallucination issue is recognized as a fundamental deficiency of large language models (LLMs), especially when applied to fields such as finance, education, and law. Despite the growing concerns, there has been a lack of empirical investigation. In this paper, we provide an empirical examination of LLMs’ hallucination behaviors in financial tasks. First, we empirically investigate LLMs’ ability to explain financial concepts and terminologies. Second, we assess LLMs’ capacity to query historical stock prices. Third, to alleviate the hallucination issue, we evaluate the efficacy of four practical methods, including few-shot learning, Decoding by Contrasting Layers (DoLa), the Retrieval-Augmented Generation (RAG) method, and the prompt-based tool learning method for a function to generate a query command. Finally, our major finding is that off-the-shelf LLMs experience serious hallucination behaviors in financial tasks. Therefore, there is an urgent need to call for research efforts in mitigating LLMs’ hallucination.
Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination
[ "Haoqiang Kang", "Xiao-Yang Liu" ]
Workshop/ICBINB
2311.15548
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=R7OizRVhEu
@inproceedings{ rezk2024is, title={Is Scaling Learned Optimizers Worth It? Evaluating The Value of Ve{LO}'s 4000 {TPU} Months}, author={Fady Rezk and Antreas Antoniou and Henry Gouk and Timothy Hospedales}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=R7OizRVhEu} }
We analyze VeLO (versatile learned optimizer), the largest-scale attempt to train a general-purpose ``foundational'' optimizer to date. VeLO was trained on thousands of machine learning tasks over 4000 TPU months with the goal of producing an optimizer capable of generalizing to new problems while being hyper-parameter free, and outperforming industry standards such as Adam. We independently evaluate VeLO on the MLcommons optimizer benchmark suite. We find that contrary to initial claims: (1) VeLO has a critical hyper-parameter that needs problem-specific tuning, (2) VeLO does not necessarily outperform competitors in quality of solution found, and (3) VeLO is not faster than competing optimizers at reducing the training loss. These observations call into question VeLO's generality and the value of the investment in training it.
Is Scaling Learned Optimizers Worth It? Evaluating The Value of VeLO's 4000 TPU Months
[ "Fady Rezk", "Antreas Antoniou", "Henry Gouk", "Timothy Hospedales" ]
Workshop/ICBINB
2310.18191
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PxiuaUKf8y
@inproceedings{ zhang2024pretrained, title={Pre-trained Language Models Do Not Help Auto-regressive Text-to-Image Generation}, author={Yuhui Zhang and Brandon McKinzie and Zhe Gan and Vaishaal Shankar and Alexander Toshev}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=PxiuaUKf8y} }
Recent advances in image tokenizers, such as VQ-VAE, have enabled text-to-image generation using auto-regressive methods, similar to language modeling. However, these methods have yet to leverage pre-trained language models, despite their adaptability to various downstream tasks. In this work, we explore this gap by adapting a pre-trained language model for auto-regressive text-to-image generation, and find that pre-trained language models offer limited help. We provide a two-fold explanation by analyzing tokens from each modality. First, we demonstrate that image tokens possess significantly different semantics compared to text tokens, rendering pre-trained language models no more effective in modeling them than randomly initialized ones. Second, the text tokens in the image-text datasets are too simple compared to normal language model pre-training data, which causes the catastrophic degradation of language models' capability.
Pre-trained Language Models Do Not Help Auto-regressive Text-to-Image Generation
[ "Yuhui Zhang", "Brandon McKinzie", "Zhe Gan", "Vaishaal Shankar", "Alexander Toshev" ]
Workshop/ICBINB
2311.16201
[ "" ]
https://huggingface.co/papers/2311.16201
1
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=OptKBWmreP
@inproceedings{ ren2024selfevaluation, title={Self-Evaluation Improves Selective Generation in Large Language Models}, author={Jie Ren and Yao Zhao and Tu Vu and Peter J Liu and Balaji Lakshminarayanan}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=OptKBWmreP} }
Safe deployment of large language models (LLMs) may benefit from a reliable method for assessing their generated content to determine when to abstain or to selectively generate. While likelihood-based metrics such as perplexity are widely employed, recent research has demonstrated the limitations of using sequence-level probability estimates given by LLMs as reliable indicators of generation quality. Conversely, LLMs have demonstrated strong calibration at the token level, particularly when it comes to choosing correct answers in multiple-choice questions or evaluating true/false statements. In this work, we reformulate open-ended generation tasks into token-level prediction tasks, and leverage LLMs' superior calibration at the token level. We instruct an LLM to self-evaluate its answers, employing either a multi-way comparison or a point-wise evaluation approach, with the option to include a ``None of the above'' option to express the model's uncertainty explicitly. We benchmark a range of scoring methods based on self-evaluation and evaluate their performance in selective generation using TruthfulQA and TL;DR. Through extensive experiments with PaLM-2 and GPT-3, we demonstrate that self-evaluation based scores not only improve accuracy, but also correlate better with the overall quality of generated content.
Self-Evaluation Improves Selective Generation in Large Language Models
[ "Jie Ren", "Yao Zhao", "Tu Vu", "Peter J Liu", "Balaji Lakshminarayanan" ]
Workshop/ICBINB
2312.09300
[ "" ]
https://huggingface.co/papers/2312.09300
3
14
1
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=MXey5JIvz2
@inproceedings{ li2024sentimentpulse, title={SentimentPulse: Temporal-Aware Custom Language Models vs. {GPT}-3.5 for Consumer Sentiment}, author={Lixiang Li and Nagender Aneja and Alina Nesen and Bharat Bhargava}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=MXey5JIvz2} }
Large Language Models are trained on an extremely large corpus of text data to allow better generalization, but this blessing can also become a curse and significantly limit their performance in a subset of tasks. In this work, we argue that LLMs are notably behind well-tailored and specifically designed models where the temporal aspect is important in making decisions and the answer depends on the timespan of available training data. We prove our point by comparing two major architectures: first, SentimentPulse, our proposed real-time consumer sentiment analysis approach that leverages custom language models and continual learning techniques, and second, GPT-3, which is tested on the same data. Unlike foundation models, which lack temporal context, our custom language model is pre-trained on time-stamped data, making it uniquely suited for real-time application. Additionally, we employ continual learning techniques to pre-train the model, and then classification and contextual multi-arm bandits to fine-tune the model, enhancing its adaptability and performance over time. We present a comparative analysis of the prediction accuracy of both architectures. To the best of our knowledge, this is the first application of custom language models for real-time consumer sentiment analysis beyond the scope of conventional surveys.
SentimentPulse: Temporal-Aware Custom Language Models vs. GPT-3.5 for Consumer Sentiment
[ "Lixiang Li", "Nagender Aneja", "Alina Nesen", "Bharat Bhargava" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=JZaTnRVuuN
@inproceedings{ wu2024compositional, title={Compositional Generalization in Vision-Language Models uses the Language Modality only}, author={Chenwei Wu and Patrick haffner and Li Li and Stefano Ermon and Rong Ge}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=JZaTnRVuuN} }
Compositionality is a common property in many modalities including text and images, but the compositional generalization of multi-modal models is not well-understood. In this paper, we identify two sources of visual-linguistic compositionality: linguistic priors and the interplay between images and texts. We show that current attempts to improve compositional generalization rely on linguistic priors rather than on information in the image, as the strength of the language model in detecting sentences that are syntactically and semantically likely overwhelms the vision part of the model. We find in particular that a benchmark for compositionality mostly favors pure language models. Finally, we propose a new benchmark for compositionality without such linguistic priors.
The Role of Linguistic Priors in Measuring Compositional Generalization of Vision-Language Models
[ "Chenwei Wu", "Patrick haffner", "Li Erran Li", "Stefano Ermon", "Rong Ge" ]
Workshop/ICBINB
2310.02777
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=HnABvwYxc7
@inproceedings{ balles2024a, title={A Negative Result on Gradient Matching for Selective Backprop}, author={Lukas Balles and Cedric Archambeau and Giovanni Zappella}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=HnABvwYxc7} }
With increasing scale in model and dataset size, the training of deep neural networks becomes a massive computational burden. One approach to speed up the training process is Selective Backprop. For this approach, we perform a forward pass to obtain a loss value for each data point in a minibatch. The backward pass is then restricted to a subset of that minibatch, prioritizing high-loss examples. We build on this approach, but seek to improve the subset selection mechanism by choosing the (weighted) subset which best matches the mean gradient over the entire minibatch. We use the gradients w.r.t. the model's last layer as a cheap proxy, resulting in virtually no overhead in addition to the forward pass. At the same time, for our experiments we add a simple random selection baseline which has been absent from prior work. Surprisingly, we find that both the loss-based as well as the gradient-matching strategy fail to consistently outperform the random baseline.
A Negative Result on Gradient Matching for Selective Backprop
[ "Lukas Balles", "Cedric Archambeau", "Giovanni Zappella" ]
Workshop/ICBINB
2312.05021
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=HBEegN2HcR
@inproceedings{ qamar2024can, title={Can Segment Anything Model Improve Semantic Segmentation?}, author={Maryam Qamar and Donghoon Kim and Muhammad Salman Ali and Chaoning Zhang and Sung-Ho Bae}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=HBEegN2HcR} }
Recently, Segment Anything Model (SAM) has gained considerable attention in the field of computer vision, establishing itself as a pioneering foundation model for segmentation. Notably, SAM excels in generating high-quality segmentation masks, yet it lacks semantic labels. In contrast, conventional semantic segmentation models generate rather accurate semantic labels but often produce suboptimal segmentation masks. The notion of leveraging SAM's superior mask quality to enhance the performance of conventional semantic segmentation models appears intuitive. However, our preliminary experiments reveal that the integration of SAM with these models does not result in any discernible improvement. Specifically, when assessing the performance of SAM's integration into two baseline semantic segmentation models, DeepLab and OneFormer, we find no significant enhancements in the mean Intersection over Union (mIoU) on the Pascal VOC and ADE20K datasets. Consequently, we conclude that, as it stands, the highly acclaimed foundational model is not the preferred solution for the semantic segmentation task. Instead, a more cautious and thoughtful approach is imperative to unlock any potential benefits in this context.
Can Segment Anything Model Improve Semantic Segmentation?
[ "Maryam Qamar", "Donghoon Kim", "Muhammad Salman Ali", "Chaoning Zhang", "Sung-Ho Bae" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=GYOXIRXI7W
@inproceedings{ petrov2024when, title={When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations}, author={Aleksandar Petrov and Philip Torr and Adel Bibi}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=GYOXIRXI7W} }
Context-based fine-tuning methods like prompting, in-context learning, soft prompting (prompt tuning) and prefix-tuning have gained popularity as they often match the performance of full fine-tuning with a fraction of the parameters. Still, there is little theoretical understanding of how these techniques influence the internal computation of the model and their expressiveness limitations. We show that despite the continuous embedding space being more expressive than the discrete token space, soft-prompting and prefix-tuning are strictly less expressive than full fine-tuning. Concretely, context-based fine-tuning cannot change the relative attention pattern over the content and can only bias the outputs of an attention layer in a fixed direction. While this means that context-based fine-tuning techniques can successfully elicit or combine skills already present in the pretrained model, they cannot learn tasks requiring new attention patterns.
When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations
[ "Aleksandar Petrov", "Philip Torr", "Adel Bibi" ]
Workshop/ICBINB
2310.19698
[ "https://github.com/aleksandarpetrov/prefix-tuning-theory" ]
https://huggingface.co/papers/2310.19698
1
0
0
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=FWTqwlHBC5
@inproceedings{ zhang2024a, title={A Study on the Calibration of In-context Learning}, author={Hanlin Zhang and YiFan Zhang and Yaodong Yu and Dhruv Madeka and Dean Foster and Eric P. Xing and Himabindu Lakkaraju and Sham M. Kakade}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=FWTqwlHBC5} }
Modern auto-regressive models are trained to minimize log loss by predicting the next token. As a result, they are expected to get calibrated answers when framing problems as next-token prediction tasks. We study this for in-context learning (ICL), a widely used way to adapt frozen large language models (LLMs) via crafting prompts and investigate the trade-offs between performance and calibration on a wide range of natural language understanding and reasoning tasks. We conduct extensive experiments to show that such trade-offs may get worse as we increase model size, incorporate more ICL examples, and fine-tune models using instruction or dialog tuning on carefully curated datasets. Furthermore, we find that common recalibration techniques that are widely effective such as temperature scaling may provide limited gains for calibration errors, suggesting that new methods may be required for settings where models are expected to be reliable.
A Study on the Calibration of In-context Learning
[ "Hanlin Zhang", "YiFan Zhang", "Yaodong Yu", "Dhruv Madeka", "Dean Foster", "Eric P. Xing", "Himabindu Lakkaraju", "Sham M. Kakade" ]
Workshop/ICBINB
2312.04021
[ "https://github.com/hlzhang109/icl-calibration" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=EooD8NMyQM
@inproceedings{ chen2024segment, title={Segment Anything Model ({SAM}) Enhances Pseudo-Labels for Weakly Supervised Semantic Segmentation}, author={Tianle Chen and Zheda Mai and Ruiwen Li and Wei-Lun Chao}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=EooD8NMyQM} }
Weakly supervised semantic segmentation (WSSS) aims to bypass the need for laborious pixel-level annotation by using only image-level annotation. Most existing methods rely on Class Activation Maps (CAM) to derive pixel-level pseudo-labels and use them to train a fully supervised semantic segmentation model. Although these pseudo-labels are class-aware, indicating the coarse regions for particular classes, they are not object-aware and fail to delineate accurate object boundaries. To address this, we introduce a simple yet effective method harnessing the Segment Anything Model (SAM), a class-agnostic foundation model capable of producing fine-grained instance masks of objects, parts, and subparts. We use CAM pseudo-labels as cues to select and combine SAM masks, resulting in high-quality pseudo-labels that are both class-aware and object-aware. Our approach is highly versatile and can be easily integrated into existing WSSS methods without any modification. Despite its simplicity, our approach shows consistent gain over the state-of-the-art WSSS methods on both PASCAL VOC and MS-COCO datasets.
Segment Anything Model (SAM) Enhances Pseudo-Labels for Weakly Supervised Semantic Segmentation
[ "Tianle Chen", "Zheda Mai", "Ruiwen Li", "Wei-Lun Chao" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=C0jJAbMMub
@inproceedings{ ocampo2024zeroshot, title={Zero-shot capabilities of visual language models with prompt engineering for images of animals}, author={Andrea Tejeda Ocampo and Eric Orenstein and Kakani Young}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=C0jJAbMMub} }
Visual Language Models have exhibited impressive performance on new tasks in a zero-shot setting. Language queries enable these large models to classify or detect objects even when presented with a novel concept in a shifted domain. We explore the limits of this capability by presenting Grounding DINO with images and concepts from field images of marine and terrestrial animals. By manipulating the language prompts, we found that the embedding space does not necessarily encode scientific taxonomic organism names, but still yields potentially useful localizations due to a strong sense of general objectness. Grounding DINO struggled with objects in a challenging underwater setting, but improved when fed expressive prompts that explicitly described morphology. These experiments suggest that large models still have room to grow in domain use-cases and illuminate avenues for strengthening their understanding of shape to further improve zero-shot performance. The code to reproduce these experiments is available at: https://github.com/bioinspirlab/deepsea-foundation-2023.
Zero-shot capabilities of visual language models with prompt engineering for images of animals
[ "Andrea Tejeda Ocampo", "Eric Orenstein", "Kakani Young" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=AUj2IKYdgi
@inproceedings{ panwar2024surprising, title={Surprising Deviations from Bayesian View in In-Context Learning}, author={Madhur Panwar and Kabir Ahuja and Navin Goyal}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=AUj2IKYdgi} }
In-context learning (ICL) is one of the surprising and useful features of large language models and a subject of intense research. Recently, stylized meta-learning-like ICL setups have been devised that train transformers on sequences of input-output pairs $(x, f(x))$ using the language modeling loss. The function $f$ comes from a function class and generalization is checked by evaluation on sequences for unseen functions from the same class. One of the main discoveries in this line of research has been that for several function classes, such as linear regression, transformers successfully generalize to new functions in the class. However, the inductive biases of these models resulting in this behavior are not clearly understood. A model with unlimited training data and compute is a Bayesian predictor: it learns the pretraining distribution. In this paper we empirically examine how far this Bayesian perspective can help us understand ICL. To this end, we generalize the previous meta-ICL setup to a hierarchical meta-ICL setup, which involves unions of multiple task families. We instantiate this setup on multiple function families and find that transformers can do ICL in this setting as well. We make some surprising observations: Transformers can learn to generalize to new function classes that were not seen during pretraining. This requires pretraining on a very small number of function classes and involves deviating from the Bayesian predictor on the pretraining distribution. Further, we discover the phenomenon of 'forgetting', where over the course of pretraining under the hierarchical meta-ICL setup, the transformer first generalizes to the full distribution of tasks and later forgets it while fitting the pretraining distribution.
Surprising Deviations from Bayesian View in In-Context Learning
[ "Madhur Panwar", "Kabir Ahuja", "Navin Goyal" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=8iuTHgTJEY
@inproceedings{ saravanan2024exploring, title={Exploring Social Bias in Downstream Applications of Text-to-Image Foundation Models}, author={Adhithya Prakash Saravanan and Rafal Kocielnik and Roy Jiang and Pengrui Han and Anima Anandkumar}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=8iuTHgTJEY} }
Text-to-image diffusion models have been adopted into key commercial workflows, such as art generation and image editing. Characterizing the implicit social biases they exhibit, such as gender and racial stereotypes, is a necessary first step in avoiding discriminatory outcomes. While existing studies on social bias focus on image generation, the biases exhibited in alternate applications of diffusion-based foundation models remain under-explored. We propose a framework that uses synthetic images to probe two applications of diffusion models, image editing and classification, for social bias. Using our framework, we uncover meaningful and significant intersectional social biases in Stable Diffusion, a state-of-the-art open-source text-to-image model. Our findings caution against the uninformed adoption of text-to-image foundation models for downstream tasks and services.
Exploring Social Bias in Downstream Applications of Text-to-Image Foundation Models
[ "Adhithya Prakash Saravanan", "Rafal Kocielnik", "Roy Jiang", "Pengrui Han", "Anima Anandkumar" ]
Workshop/ICBINB
2312.10065
[ "" ]
https://huggingface.co/papers/2312.10065
1
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=8Q84ensxZ1
@inproceedings{ alazraki2024how, title={How (not) to ensemble {LVLM}s for {VQA}}, author={Lisa Alazraki and Lluis Castrejon and Mostafa Dehghani and Fantine Huot and Jasper Uijlings and Thomas Mensink}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=8Q84ensxZ1} }
This paper studies ensembling in the era of Large Vision-Language Models (LVLMs). Ensembling is a classical method to combine different models to get increased performance. In the recent work on Encyclopedic-VQA the authors examine a wide variety of models to solve their task: from vanilla LVLMs, to models including the caption as extra context, to models augmented with Lens-based retrieval of Wikipedia pages. Intuitively these models are highly complementary, which should make them ideal for ensembling. Indeed, an oracle experiment shows potential gains from 48.8% accuracy (the best single model) all the way up to 67% (best possible ensemble). So it is a trivial exercise to create an ensemble with substantial real gains. Or is it?
How (not) to ensemble LVLMs for VQA
[ "Lisa Alazraki", "Lluis Castrejon", "Mostafa Dehghani", "Fantine Huot", "Jasper Uijlings", "Thomas Mensink" ]
Workshop/ICBINB
2310.06641
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=86SnqmSVv2
@inproceedings{ roberts2024a, title={A Natural Experiment on {LLM} Data Contamination in Code Generation}, author={Manley Roberts and Himanshu Thakur and Christine Herlihy and Colin White and Samuel Dooley}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=86SnqmSVv2} }
Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks. Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data. Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities. In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time. Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination. By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmarks in the age of LLMs that train on webscale data.
A Natural Experiment on LLM Data Contamination in Code Generation
[ "Manley Roberts", "Himanshu Thakur", "Christine Herlihy", "Colin White", "Samuel Dooley" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6Hv4aeezrS
@inproceedings{ chen2024can, title={Can {LLM}-Generated Misinformation Be Detected?}, author={Canyu Chen and Kai Shu}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=6Hv4aeezrS} }
The advent of Large Language Models (LLMs) has made a transformative impact. However, the potential that LLMs such as ChatGPT can be exploited to generate misinformation has posed a serious concern to online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation. Then we categorize and validate the potential real-world methods for generating misinformation with LLMs. Then, through extensive empirical investigation, we discover that LLM-generated misinformation can be harder to detect for humans and detectors compared to human-written misinformation with the same semantics, which suggests it can have more deceptive styles and potentially cause more harm. We also discuss the implications of our discovery on combating misinformation in the age of LLMs and the countermeasures.
Can LLM-Generated Misinformation Be Detected?
[ "Canyu Chen", "Kai Shu" ]
Workshop/ICBINB
2309.13788
[ "https://github.com/llm-misinformation/llm-misinformation" ]
https://huggingface.co/papers/2309.13788
1
0
1
2
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=3darGLCe5t
@inproceedings{ lazovich2024filter, title={Filter bubbles and affective polarization in user-personalized large language model outputs}, author={Tomo Lazovich}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=3darGLCe5t} }
Echoing the history of search engines and social media content rankings, the advent of large language models (LLMs) has led to a push for increased personalization of model outputs to individual users. In the past, personalized recommendations and ranking systems have been linked to the development of filter bubbles (serving content that may confirm a user's existing biases) and affective polarization (strong negative sentiment towards those with differing views). In this work, we explore how prompting a leading large language model, ChatGPT-3.5, with a user's political affiliation prior to asking factual questions about public figures and organizations leads to differing results. We observe that left-leaning users tend to receive more positive statements about left-leaning political figures and media outlets, while right-leaning users see more positive statements about right-leaning entities. This pattern holds across presidential candidates, members of the U.S. Senate, and media organizations with ratings from AllSides. When qualitatively evaluating some of these outputs, there is evidence that particular facts are included or excluded based on the user's political affiliation. These results illustrate that personalizing LLMs based on user demographics carries the same risks of affective polarization and filter bubbles that have been seen in other personalized internet technologies. This ``failure mode'' should be monitored closely as there are more attempts to monetize and personalize these models.
Filter bubbles and affective polarization in user-personalized large language model outputs
[ "Tomo Lazovich" ]
Workshop/ICBINB
2311.14677
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=0RwbmLUU2o
@inproceedings{ mohta2024are, title={Are large language models good annotators?}, author={Jay Mohta and Kenan Ak and Yan Xu and Mingwei Shen}, booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models}, year={2024}, url={https://openreview.net/forum?id=0RwbmLUU2o} }
Numerous Natural Language Processing (NLP) tasks require precisely labeled data to ensure effective model training and achieve optimal performance. However, data annotation is marked by substantial costs and time requirements, especially when requiring specialized domain expertise or annotating a large number of samples. In this study, we investigate the feasibility of employing large language models (LLMs) as replacements for human annotators. We assess the zero-shot performance of various LLMs of different sizes to determine their viability as substitutes. Furthermore, recognizing that human annotators have access to diverse modalities, we introduce an image-based modality using the BLIP-2 architecture to evaluate LLM annotation performance. Among the tested LLMs, Vicuna-13b demonstrates competitive performance across diverse tasks. To assess the potential for LLMs to replace human annotators, we train a supervised model using labels generated by LLMs and compare its performance with models trained using human-generated labels. However, our findings reveal that models trained with human labels consistently outperform those trained with LLM-generated labels. We also highlight the challenges faced by LLMs in multilingual settings, where their performance significantly diminishes for tasks in languages other than English.
Are large language models good annotators?
[ "Jay Mohta", "Kenan Ak", "Yan Xu", "Mingwei Shen" ]
Workshop/ICBINB
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zwqlV7HoaT
@inproceedings{ martins2023sparse, title={Sparse Modern Hopfield Networks}, author={Andre Martins and Vlad Niculae and Daniel C McNamee}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=zwqlV7HoaT} }
Ramsauer et al. (2021) recently pointed out a connection between modern Hopfield networks and attention heads in transformers. In this paper, we extend their framework to a broader family of energy functions which can be written as a difference of a quadratic regularizer and a Fenchel-Young loss (Blondel et al., 2020), parametrized by a generalized negentropy function $\Omega$. By working with Tsallis negentropies, the resulting update rules become end-to-end differentiable sparse transformations, establishing a new link to adaptively sparse transformers (Correia et al., 2019) and allowing for exact convergence to single memory patterns. Experiments on simulated data show a higher tendency to avoid metastable states.
Sparse Modern Hopfield Networks
[ "Andre Martins", "Vlad Niculae", "Daniel C McNamee" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yAI92fMOkD
@inproceedings{ yampolskaya2023controlling, title={Controlling the bifurcations of attractors in modern Hopfield networks}, author={Maria Yampolskaya and Pankaj Mehta}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=yAI92fMOkD} }
Hopfield networks model complex systems with attractor states. However, there are many systems where attractors are not static. Attractors may undergo bifurcations under certain conditions; for example, cell fates have been described as attractor states that can be stabilized or destabilized by signalling. In the case of neural networks, retrieving a sequence of memories involves changing attractor states. We provide an extension to the modern Hopfield network that connects network dynamics to the landscape of any potential. With our model, it is possible to control the bifurcations of attractors and simulate the resulting neuron dynamics. By introducing controlled bifurcations, our formulation expands the application of Hopfield models to real-world contexts where attractors do not remain static.
Controlling the bifurcations of attractors in modern Hopfield networks
[ "Maria Yampolskaya", "Pankaj Mehta" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uMQiDWxCKd
@inproceedings{ sun2023associative, title={Associative Transformer Is A Sparse Representation Learner}, author={Yuwei Sun and Hideya Ochiai and Zhirong Wu and Stephen Lin and Ryota Kanai}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=uMQiDWxCKd} }
Emerging from the monolithic pairwise attention mechanism in conventional Transformer models, there is a growing interest in leveraging sparse interactions that align more closely with biological principles. Approaches including the Set Transformer and the Perceiver employ cross-attention consolidated with a latent space that forms an attention bottleneck with limited capacity. Building upon recent neuroscience studies of the Global Workspace Theory and associative memory, we propose the Associative Transformers (AiT). AiT induces low-rank explicit memory that serves as both priors to guide bottleneck attention in shared workspace and attractors within associative memory of a Hopfield network. We show that AiT is a sparse representation learner, learning distinct priors through the bottlenecks that are complexity-invariant to input quantities and dimensions. AiT demonstrates its superiority over methods such as the Set Transformer, Vision Transformer, and Coordination in various vision tasks.
Associative Transformer Is A Sparse Representation Learner
[ "Yuwei Sun", "Hideya Ochiai", "Zhirong Wu", "Stephen Lin", "Ryota Kanai" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=sYAm62gWbo
@inproceedings{ chaudhry2023long, title={Long Sequence Hopfield Memory}, author={Hamza Tahir Chaudhry and Jacob A Zavatone-Veth and Dmitry Krotov and Cengiz Pehlevan}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=sYAm62gWbo} }
Sequence memory is an essential attribute of natural and artificial intelligence that enables agents to encode, store, and retrieve complex sequences of stimuli and actions. Computational models of sequence memory have been proposed where recurrent Hopfield-like neural networks are trained with temporally asymmetric Hebbian rules. However, these networks suffer from limited sequence capacity (maximal length of the stored sequence) due to interference between the memories. Inspired by recent work on Dense Associative Memories, we expand the sequence capacity of these models by introducing a nonlinear interaction term, enhancing separation between the patterns. We derive novel scaling laws for sequence capacity with respect to network size, significantly outperforming existing scaling laws for models based on traditional Hopfield networks, verify these theoretical results with numerical simulation, and demonstrate their usefulness in overlapping patterns. Finally, we describe a biologically-plausible implementation, with connections to motor neuroscience.
Long Sequence Hopfield Memory
[ "Hamza Tahir Chaudhry", "Jacob A Zavatone-Veth", "Dmitry Krotov", "Cengiz Pehlevan" ]
Workshop/AMHN
2306.04532
[ "https://github.com/pehlevan-group/longsequencehopfieldmemory" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=qvD4lx2iV0
@inproceedings{ meersch2023training, title={Training a Hopfield Variational Autoencoder with Equilibrium Propagation}, author={Tom Van Der Meersch and Johannes Deleu and Thomas Demeester}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=qvD4lx2iV0} }
On dedicated analog hardware, equilibrium propagation is an energy-efficient alternative to backpropagation. In spite of its theoretical guarantees, its application in the AI domain remains limited to the discriminative setting. Meanwhile, despite its high computational demands, generative AI is on the rise. In this paper, we demonstrate the application of Equilibrium Propagation in training a variational autoencoder (VAE) for generative modeling. Leveraging the symmetric nature of Hopfield networks, we propose using a single model to serve as both the encoder and decoder which could effectively halve the required chip size for VAE implementations, paving the way for more efficient analog hardware configurations.
Training a Hopfield Variational Autoencoder with Equilibrium Propagation
[ "Tom Van Der Meersch", "Johannes Deleu", "Thomas Demeester" ]
Workshop/AMHN
2311.15047
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=pgPAsSv5ga
@inproceedings{ zhao2023incontext, title={In-Context Exemplars as Clues to Retrieving from Large Associative Memory}, author={Jiachen ZHAO}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=pgPAsSv5ga} }
Recently, large language models (LLMs) have made remarkable progress in natural language processing (NLP). The most representative ability of LLMs is in-context learning (ICL), which enables LLMs to learn patterns from in-context exemplars without training. However, there remains limited intuition for how in-context learning works. In this paper, we present a novel perspective on prompting LLMs by conceptualizing it as contextual retrieval from a model of associative memory, which can be biologically plausible. We establish a theoretical interpretation of ICL based on an extension of the framework of Hopfield Networks. Based on our theory, we further analyze how in-context exemplars influence the performance of ICL. Our study sheds new light on the mechanism of ICL by connecting it to memory retrieval, with potential implications for advancing the understanding of LLMs.
In-Context Exemplars as Clues to Retrieving from Large Associative Memory
[ "Jiachen ZHAO" ]
Workshop/AMHN
2311.03498
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=mvSmkxqdxp
@inproceedings{ haputhanthri2023enhanced, title={Enhanced cue associated memory in temporally consistent recurrent neural networks}, author={Udith Haputhanthri and Liam Storan and Adam Shai and Surya Ganguli and Mark Schnitzer and Hidenori Tanaka and Fatih Dinc}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=mvSmkxqdxp} }
Recurrent connections are instrumental in creating memories and performing time-delayed computations. During their training, networks often explore distinct topological regions across the parameter space, each with unique attractor structures that serve specific computational purposes. However, the mechanisms that facilitate these topological transitions, so called bifurcations, toward an optimal parameter space configuration remain poorly understood. In this workshop paper, we investigated the learning process of recurrent neural networks in memory-assisted computation and developed a regularization strategy to encourage bifurcations that enhance memory formation capacity. To begin, we examined a delayed addition task that required the network to retain cue-associated memories for an extended duration. We observed two distinct phases during the learning of recurrent neural networks, separated by a bifurcation. In the initial \textit{search phase}, both train and test loss values remained stable as the network searched for beneficial bifurcations leading to optimal parameter configurations. In the subsequent \textit{rapid comprehension phase}, the loss values rapidly decreased, and the network quickly learned the task while preserving its topology but updating its geometry. During our analysis, we observed that the gradient direction, \textit{i.e.}, learning signal, was aligned with the optimal descent direction in the second but not the first phase. To aid learning in the search phase, we developed a temporal consistency regularization that incentivized a subset of neurons to have slow time dynamics, which subsequently decreased the duration of the search. Next, we tested the stability of the learned attractors with and without the temporal consistency regularization, via noise injection experiments, where we uncovered a more robust attractor subspace formation in the former. Finally, we enforced temporal consistency in a randomly initialized chaotic recurrent neural network to obtain several cue-associated fixed points in an unsupervised, online, and biologically plausible manner. Our results provide a deeper understanding of the role of bifurcations in enhancing associative memory by driving networks toward the desired attractor formation.
Enhanced cue associated memory in temporally consistent recurrent neural networks
[ "Udith Haputhanthri", "Liam Storan", "Adam Shai", "Surya Ganguli", "Mark Schnitzer", "Hidenori Tanaka", "Fatih Dinc" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=lrfoJwxRWq
@inproceedings{ lu2023learning, title={Learning Sequence Attractors in Recurrent Networks with Hidden Neurons}, author={Yao Lu and Si Wu}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=lrfoJwxRWq} }
The brain is specialized for processing temporal sequence information. It remains largely unclear how the brain learns to store and retrieve sequence memories. Here, we study how networks of Hopfield type learn sequence attractors to store predefined pattern sequences and retrieve them robustly. We show that to store arbitrary pattern sequences, it is necessary for the network to include hidden neurons even though their role in displaying sequence memories is indirect. We develop a local learning algorithm to learn sequence attractors in the networks with hidden neurons. The algorithm is proven to converge and lead to sequence attractors. We demonstrate that our model can store and retrieve sequences robustly on synthetic and real-world datasets. We hope that this study provides new insights into sequence memory and temporal information processing in the brain.
Learning Sequence Attractors in Hopfield Networks with Hidden Neurons
[ "Yao Lu", "Si Wu" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=lO61aZlteS
@inproceedings{ schaeffer2023associative, title={Associative Memory Under the Probabilistic Lens: Improved Transformers \& Dynamic Memory Creation}, author={Rylan Schaeffer and Mikail Khona and Nika Zahedi and Ila R Fiete and Andrey Gromov and Sanmi Koyejo}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=lO61aZlteS} }
Clustering is a fundamental unsupervised learning problem, and recent work showed modern continuous associative memory (AM) networks can learn to cluster data via a novel unconstrained continuous relaxation of the discrete clustering optimization problem. In this work, we demonstrate that the energy function of that AM network can be viewed as the scaled negative log likelihood of a Gaussian mixture model, and that the dynamics of the AM network can be viewed as performing expectation maximization via gradient ascent rather than via closed-form coordinate ascent. Based on this insight, we show that a widespread practical implementation choice - self-attention with pre-layer normalization - approximates clustering on the hypersphere with inhomogeneous von Mises-Fisher likelihoods, suggesting a future experiment to improve transformers. We additionally leverage this connection to propose a novel AM network with the ability to create new memories during learning, as necessitated by the data, by drawing on tools from combinatorial stochastic processes and Bayesian nonparametrics.
Associative Memory Under the Probabilistic Lens: Improved Transformers & Dynamic Memory Creation
[ "Rylan Schaeffer", "Mikail Khona", "Nika Zahedi", "Ila R Fiete", "Andrey Gromov", "Sanmi Koyejo" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=hkV9CvCOjH
@inproceedings{ ambrogioni2023in, title={In search of dispersed memories: Generative diffusion models are associative memory networks}, author={Luca Ambrogioni}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=hkV9CvCOjH} }
Hopfield networks are widely used in neuroscience as simplified theoretical models of biological associative memory. The original Hopfield networks store memories by encoding patterns of binary associations, which result in a synaptic learning mechanism known as Hebbian learning rule. Modern Hopfield networks can achieve exponential capacity scaling by using highly non-linear energy functions. However, the energy function of these newer models cannot be straightforwardly compressed into binary synaptic couplings and it does not directly provide new synaptic learning rules. In this work we show that generative diffusion models can be interpreted as energy-based models and that, when trained on discrete patterns, their energy function is equivalent to that of modern Hopfield networks. This equivalence allows us to interpret the supervised training of diffusion models as a synaptic learning process that encodes the associative dynamics of a modern Hopfield network in the weight structure of a deep neural network. Accordingly, in our experiments we show that the storage capacity of a continuous modern Hopfield network is identical to the capacity of a diffusion model. Our results establish a strong link between generative modeling and the theoretical neuroscience of memory, which provide a powerful computational foundation for the reconstructive theory of memory, where creative generation and memory recall can be seen as parts of a unified continuum.
In search of dispersed memories: Generative diffusion models are associative memory networks
[ "Luca Ambrogioni" ]
Workshop/AMHN
2309.17290
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=hXef89mdlH
@inproceedings{ tyulmankov2023memorization, title={Memorization and consolidation in associative memory networks}, author={Danil Tyulmankov and Kim Stachenfeld and Dmitry Krotov and Larry Abbott}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=hXef89mdlH} }
Humans, animals, and machines can store and retrieve long-term memories of individual items, while at the same time consolidating and learning general representations of categories that discard the individual examples from which the representations were constructed. Classical neural networks model only one or the other of these two regimes. In this work, we propose a biologically motivated model that can not only consolidate representations of common items but also memorize exceptional ones. Critically, we consider the unsupervised learning regime where exceptional items are not labeled as such a priori, so the signal to either memorize or consolidate items must be generated by the network itself. We propose a number of metrics for this control signal and compare them for two different algorithms inspired by traditional imbalanced data learning approaches -- loss reweighting and importance sampling. Overall, our model serves not only as a framework for concurrent memorization and consolidation processes in biological systems, but also as a simple illustration of related phenomena in large-scale machine learning models, as well as a potential method for debiasing artificial intelligence algorithms.
Memorization and consolidation in associative memory networks
[ "Danil Tyulmankov", "Kim Stachenfeld", "Dmitry Krotov", "Larry Abbott" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=gzFuhvumGn
@inproceedings{ li2023modeling, title={Modeling Recognition Memory with Predictive Coding and Hopfield Networks}, author={Tianjin Li and Mufeng Tang and Rafal Bogacz}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=gzFuhvumGn} }
Associative memory (AM) and recognition memory (RM) are fundamental in human and machine cognition. RM refers to an ability to recognize if the stimulus has been seen before, or is novel. Neuroscience studies reveal that regions such as the hippocampus, known for AM, are also involved in RM. Inspired by repetition suppression in the brain, this work presents an energy-based approach to RM, where a model learns by adjusting an energy function. We employed this energy-based approach to Hopfield Networks (HNs) and Predictive Coding Networks (PCNs). Our simulations indicate that PCN outperforms HNs in RM tasks, especially with correlated patterns. In this work, we also unify the theoretical understanding of HN and PCN in RM, revealing that both perform metric learning. This theory is crucial in explaining PCN's superior performance in handling correlated data as it reveals that PCNs employ a statistical whitening step in its metric learning, which refines the distinction between familiar and novel stimuli. Overall, the superior performance of PCN, as well as the unique error neurons in its circuit implementation matching repetition suppression, provide a plausible account of how the brain performs RM, within the network architecture known to also support AM.
Modeling Recognition Memory with Predictive Coding and Hopfield Networks
[ "Tianjin Li", "Mufeng Tang", "Rafal Bogacz" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=guPW3ACk2L
@inproceedings{ cabannes2023associative, title={Associative Memories with Heavy-Tailed Data}, author={Vivien Cabannes and Elvis Dohmatob and Alberto Bietti}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=guPW3ACk2L} }
Learning arguably involves the discovery and memorization of abstract rules. But how do associative memories appear in transformer architectures optimized with gradient descent algorithms? We derive precise scaling laws for a simple input-output associative memory model with respect to parameter size, and discuss the statistical efficiency of different estimators, including optimization-based algorithms. We provide extensive numerical experiments to validate and interpret theoretical results, including fine-grained visualizations of the stored memory associations.
Associative Memories with Heavy-Tailed Data
[ "Vivien Cabannes", "Elvis Dohmatob", "Alberto Bietti" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster