bibtex_url | proceedings | bibtext | abstract | title | authors | id | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=HYa3eu8scG | @inproceedings{
chen2024training,
title={Training for Stable Explanation for Free},
author={Chao Chen and Chenghua Guo and Rufeng Chen and Guixiang Ma and Ming Zeng and Xiangwen Liao and Xi Zhang and Sihong Xie},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HYa3eu8scG}
} | To foster trust in machine learning models, explanations must be faithful and stable to provide consistent insights. Existing works rely on the $\ell_p$ distance for stability assessment, which diverges from human perception. In addition, existing adversarial training (AT) approaches involve intensive computation and may lead to an arms race. To address these challenges, we introduce a novel metric to assess the stability of the top-$k$ salient features. We introduce R2ET, which trains for stable explanations via an efficient and effective regularizer, and analyze R2ET through multi-objective optimization to prove the numerical and statistical stability of its explanations. Moreover, theoretical connections between R2ET and certified robustness justify R2ET's stability under all attacks. Extensive experiments across various data modalities and model architectures show that R2ET achieves superior stability against stealthy attacks and generalizes effectively across different explanation methods. The code can be found at https://github.com/ccha005/R2ET. | Training for Stable Explanation for Free | [
"Chao Chen",
"Chenghua Guo",
"Rufeng Chen",
"Guixiang Ma",
"Ming Zeng",
"Xiangwen Liao",
"Xi Zhang",
"Sihong Xie"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=HXdAfK488A | @inproceedings{
piriyakulkij2024doing,
title={Doing Experiments and Revising Rules with Natural Language and Probabilistic Reasoning},
author={Wasu Top Piriyakulkij and Cassidy Langenfeld and Tuan Anh Le and Kevin Ellis},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HXdAfK488A}
} | We give a model of how to infer natural language rules by doing experiments. The
model integrates Large Language Models (LLMs) with Monte Carlo algorithms for
probabilistic inference, interleaving online belief updates with experiment design
under information-theoretic criteria. We conduct a human-model comparison on a
Zendo-style task, finding that a critical ingredient for modeling the human data is to
assume that humans also consider fuzzy, probabilistic rules, in addition to assuming
that humans perform approximately-Bayesian belief updates. We also compare
with recent algorithms for using LLMs to generate and revise hypotheses, finding
that our online inference method yields higher accuracy at recovering the true
underlying rule, and provides better support for designing optimal experiments. | Doing Experiments and Revising Rules with Natural Language and Probabilistic Reasoning | [
"Wasu Top Piriyakulkij",
"Cassidy Langenfeld",
"Tuan Anh Le",
"Kevin Ellis"
] | NeurIPS.cc/2024/Conference | 2402.06025 | [
"https://github.com/topwasu/doing-experiments-and-revising-rules"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HW9S9vY5gZ | @inproceedings{
legacci2024noregret,
title={No-regret Learning in Harmonic Games: Extrapolation in the Face of Conflicting Interests},
author={Davide Legacci and Panayotis Mertikopoulos and Christos Papadimitriou and Georgios Piliouras and Bary Pradelski},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HW9S9vY5gZ}
} | The long-run behavior of multi-agent online learning -- and, in particular, no-regret learning -- is relatively well-understood in potential games, where players have common interests. By contrast, in general harmonic games -- the strategic complement of potential games, where players have competing interests -- very little is known outside the narrow subclass of $2$-player zero-sum games with a fully-mixed equilibrium. Our paper seeks to partially fill this gap by focusing on the full class of (generalized) harmonic games and examining the convergence properties of "follow-the-regularized-leader" (FTRL), the most widely studied class of no-regret learning schemes. As a first result, we show that the continuous-time dynamics of FTRL are Poincaré recurrent, i.e., they return arbitrarily close to their starting point infinitely often, and hence fail to converge. In discrete time, the standard, "vanilla" implementation of FTRL may lead to even worse outcomes, eventually trapping the players in a perpetual cycle of best-responses. However, if FTRL is augmented with a suitable extrapolation step -- which includes as special cases the optimistic and mirror-prox variants of FTRL -- we show that learning converges to a Nash equilibrium from any initial condition, and all players are guaranteed at most $\mathcal{O}(1)$ regret. These results provide an in-depth understanding of no-regret learning in harmonic games, nesting prior work on $2$-player zero-sum games, and showing at a high level that potential and harmonic games are complementary not only from the strategic but also from the dynamic viewpoint. | No-regret Learning in Harmonic Games: Extrapolation in the Face of Conflicting Interests | [
"Davide Legacci",
"Panayotis Mertikopoulos",
"Christos Papadimitriou",
"Georgios Piliouras",
"Bary Pradelski"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=HUxtJcQpDS | @inproceedings{
hemker2024healnet,
title={{HEALN}et: Multimodal Fusion for Heterogeneous Biomedical Data},
author={Konstantin Hemker and Nikola Simidjievski and Mateja Jamnik},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HUxtJcQpDS}
} | Technological advances in medical data collection, such as high-throughput genomic sequencing and digital high-resolution histopathology, have contributed to the rising requirement for multimodal biomedical modelling, specifically for image, tabular and graph data. Most multimodal deep learning approaches use modality-specific architectures that are often trained separately and cannot capture the crucial cross-modal information that motivates the integration of different data sources. This paper presents the **H**ybrid **E**arly-fusion **A**ttention **L**earning **Net**work (HEALNet) – a flexible multimodal fusion architecture, which: a) preserves modality-specific structural information, b) captures the cross-modal interactions and structural information in a shared latent space, c) can effectively handle missing modalities during training and inference, and d) enables intuitive model inspection by learning on the raw data input instead of opaque embeddings. We conduct multimodal survival analysis on Whole Slide Images and Multi-omic data on four cancer datasets from The Cancer Genome Atlas (TCGA). HEALNet achieves state-of-the-art performance compared to other end-to-end trained fusion models, substantially improving over unimodal and multimodal baselines whilst being robust in scenarios with missing modalities. The code is available at https://github.com/konst-int-i/healnet. | HEALNet: Multimodal Fusion for Heterogeneous Biomedical Data | [
"Konstantin Hemker",
"Nikola Simidjievski",
"Mateja Jamnik"
] | NeurIPS.cc/2024/Conference | 2311.09115 | [
"https://github.com/konst-int-i/healnet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HU2uyDjAcy | @inproceedings{
fiegel2024local,
title={Local and Adaptive Mirror Descents in Extensive-Form Games},
author={C{\^o}me Fiegel and Pierre Menard and Tadashi Kozuno and Remi Munos and Vianney Perchet and Michal Valko},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HU2uyDjAcy}
} | We study how to learn $\epsilon$-optimal strategies in zero-sum imperfect information games (IIG) with *trajectory feedback*. In this setting, players update their policies sequentially, based on their observations over a fixed number of episodes denoted by $T$. Most existing procedures suffer from high variance due to the use of importance sampling over sequences of actions. To reduce this variance, we consider a *fixed sampling* approach, where players still update their policies over time, but with observations obtained through a given fixed sampling policy. Our approach is based on an adaptive Online Mirror Descent (OMD) algorithm that applies OMD locally to each information set, using individually decreasing learning rates and a *regularized loss*. We show that this approach guarantees a convergence rate of $\tilde{\mathcal{O}}(T^{-1/2})$ with high probability and has a near-optimal dependence on the game parameters when applied with the best theoretical choices of learning rates and sampling policies. To achieve these results, we generalize the notion of OMD stabilization, allowing for time-varying regularization with convex increments. | Local and Adaptive Mirror Descents in Extensive-Form Games | [
"Côme Fiegel",
"Pierre Menard",
"Tadashi Kozuno",
"Remi Munos",
"Vianney Perchet",
"Michal Valko"
] | NeurIPS.cc/2024/Conference | 2309.00656 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HTLJptF7qM | @inproceedings{
nguyen2024noisy,
title={Noisy Label Learning with Instance-Dependent Outliers: Identifiability via Crowd Wisdom},
author={Tri Nguyen and Shahana Ibrahim and Xiao Fu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HTLJptF7qM}
} | The generation of label noise is often modeled as a process involving a probability transition matrix (also interpreted as the _annotator confusion matrix_) imposed onto the label distribution. Under this model, learning the ``ground-truth classifier''---i.e., the classifier that can be learned if no noise were present---and the confusion matrix boils down to a model identification problem. Prior works along this line demonstrated appealing empirical performance, yet identifiability of the model was mostly established by assuming an instance-invariant confusion matrix. Having an (occasionally) instance-dependent confusion matrix across data samples is apparently more realistic, but inevitably introduces outliers to the model. Our interest lies in confusion matrix-based noisy label learning with such outliers taken into consideration. We begin by pointing out that under the model of interest, using labels produced by only one annotator is fundamentally insufficient to detect the outliers or identify the ground-truth classifier. Then, we prove that by employing a crowdsourcing strategy involving multiple annotators, a carefully designed loss function can establish the desired model identifiability under reasonable conditions. Our development builds upon a link between the noisy label model and a column-corrupted matrix factorization model---based on which we show that crowdsourced annotations distinguish nominal data and instance-dependent outliers using a low-dimensional subspace. Experiments show that our learning scheme substantially improves outlier detection and the classifier's testing accuracy. | Noisy Label Learning with Instance-Dependent Outliers: Identifiability via Crowd Wisdom | [
"Tri Nguyen",
"Shahana Ibrahim",
"Xiao Fu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=HShs7q1Njh | @inproceedings{
requeima2024llm,
title={{LLM} Processes: Numerical Predictive Distributions Conditioned on Natural Language},
author={James Requeima and John F Bronskill and Dami Choi and Richard E. Turner and David Duvenaud},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HShs7q1Njh}
} | Machine learning practitioners often face significant challenges in formally integrating their prior knowledge and beliefs into predictive models, limiting the potential for nuanced and context-aware analyses. Moreover, the expertise needed to integrate this prior knowledge into probabilistic modeling typically limits the application of these models to specialists. Our goal is to build a regression model that can process numerical data and make probabilistic predictions at arbitrary locations, guided by natural language text which describes a user's prior knowledge. Large Language Models (LLMs) provide a useful starting point for designing such a tool since they 1) provide an interface where users can incorporate expert insights in natural language and 2) provide an opportunity for leveraging latent problem-relevant knowledge encoded in LLMs that users may not have themselves. We start by exploring strategies for eliciting explicit, coherent numerical predictive distributions from LLMs. We examine these joint predictive distributions, which we call LLM Processes, over arbitrarily-many quantities in settings such as forecasting, multi-dimensional regression, black-box optimization, and image modeling. We investigate the practical details of prompting to elicit coherent predictive distributions, and demonstrate their effectiveness at regression. Finally, we demonstrate the ability to usefully incorporate text into numerical predictions, improving predictive performance and giving quantitative structure that reflects qualitative descriptions. This lets us begin to explore the rich, grounded hypothesis space that LLMs implicitly encode. | LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language | [
"James Requeima",
"John F Bronskill",
"Dami Choi",
"Richard E. Turner",
"David Duvenaud"
] | NeurIPS.cc/2024/Conference | 2405.12856 | [
"https://github.com/requeima/llm_processes"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HSRs6yyuUK | @inproceedings{
he2024preventing,
title={Preventing Model Collapse in Deep Canonical Correlation Analysis by Noise Regularization},
author={Junlin He and Jinxiao Du and Susu Xu and Wei Ma},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HSRs6yyuUK}
} | Multi-View Representation Learning (MVRL) aims to learn a unified representation of an object from multi-view data.
Deep Canonical Correlation Analysis (DCCA) and its variants share simple formulations and demonstrate state-of-the-art performance. However, through extensive experiments, we observe the issue of model collapse, i.e., the performance of DCCA-based methods drops drastically as training proceeds. The model collapse issue could significantly hinder the wide adoption of DCCA-based methods because it is challenging to decide when to stop early. To this end, we develop NR-DCCA, which is equipped with a novel noise regularization approach to prevent model collapse. Theoretical analysis shows that the Correlation Invariant Property is the key to preventing model collapse, and our noise regularization forces the neural network to possess such a property. A framework to construct synthetic data with different common and complementary information is also developed to compare MVRL methods comprehensively. The developed NR-DCCA outperforms baselines stably and consistently on both synthetic and real-world datasets, and the proposed noise regularization approach can also be generalized to other DCCA-based methods such as DGCCA. | Preventing Model Collapse in Deep Canonical Correlation Analysis by Noise Regularization | [
"Junlin He",
"Jinxiao Du",
"Susu Xu",
"Wei Ma"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=HSJOt2hyDf | @inproceedings{
bose2024initializing,
title={Initializing Services in Interactive {ML} Systems for Diverse Users},
author={Avinandan Bose and Mihaela Curmei and Daniel L. Jiang and Jamie Heather Morgenstern and Sarah Dean and Lillian J. Ratliff and Maryam Fazel},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HSJOt2hyDf}
} | This paper investigates ML systems serving a group of users, with multiple models/services, each aimed at specializing to a sub-group of users. We consider settings where upon deploying a set of services, users choose the one minimizing their personal losses and the learner iteratively learns by interacting with diverse users. Prior research shows that the outcomes of learning dynamics, which comprise both the services' adjustments and users' service selections, hinge significantly on the initial conditions. However, finding good initial conditions faces two main challenges: (i) \emph{Bandit feedback:} Typically, data on user preferences are not available before deploying services
and observing user behavior; (ii) \emph{Suboptimal local solutions:} The total loss landscape (i.e., the sum of loss functions across all users and services) is not convex and gradient-based algorithms can get stuck in poor local minima.
We address these challenges with a randomized algorithm to adaptively select a minimal set of users for data collection in order to initialize a set of services. Under mild assumptions on the loss functions, we prove that our initialization leads to a total loss within a factor of the \textit{globally optimal total loss, with complete user preference data}, and this factor scales logarithmically in the number of services. This result is a generalization of the well-known $k$-means++ guarantee to a broad problem class, which is also of independent interest.
The theory is complemented by experiments on real as well as semi-synthetic datasets. | Initializing Services in Interactive ML Systems for Diverse Users | [
"Avinandan Bose",
"Mihaela Curmei",
"Daniel L. Jiang",
"Jamie Heather Morgenstern",
"Sarah Dean",
"Lillian J. Ratliff",
"Maryam Fazel"
] | NeurIPS.cc/2024/Conference | 2312.11846 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HS0faHRhWD | @inproceedings{
liu2024timeffm,
title={Time-{FFM}: Towards {LM}-Empowered Federated Foundation Model for Time Series Forecasting},
author={Qingxiang Liu and Xu Liu and Chenghao Liu and Qingsong Wen and Yuxuan Liang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HS0faHRhWD}
} | Unlike natural language processing and computer vision, the development of Foundation Models (FMs) for time series forecasting is hindered by data scarcity.
While recent efforts have focused on building such FMs by unlocking the potential of language models (LMs) for time series analysis, dedicated parameters must be trained for each downstream forecasting task, which hinders knowledge sharing across domains.
Moreover, data owners may hesitate to share access to their local data due to privacy concerns and copyright protection, which makes it impossible to simply construct an FM on cross-domain training instances.
To address these issues, we propose Time-FFM, a Federated Foundation Model for Time series forecasting by leveraging pretrained LMs.
Specifically, we begin by transforming time series into the modality of text tokens.
To bootstrap LMs for time series reasoning, we propose a prompt adaptation module that determines domain-customized prompts dynamically rather than through manual design.
Given the data heterogeneity across domains, we design a personalized federated training strategy by learning global encoders and local prediction heads.
Our comprehensive experiments indicate that Time-FFM outperforms state-of-the-art methods and shows promise as an effective few-shot and zero-shot forecaster.
The code is available at https://github.com/CityMind-Lab/NeurIPS24-Time-FFM/tree/main. | Time-FFM: Towards LM-Empowered Federated Foundation Model for Time Series Forecasting | [
"Qingxiang Liu",
"Xu Liu",
"Chenghao Liu",
"Qingsong Wen",
"Yuxuan Liang"
] | NeurIPS.cc/2024/Conference | 2405.14252 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HRnSVflpgt | @inproceedings{
zhang2024schur,
title={Schur Nets: exploiting local structure for equivariance in higher order graph neural networks},
author={QINGQI ZHANG and Ruize Xu and Risi Kondor},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HRnSVflpgt}
} | Several recent works have shown that extending the message passing paradigm to subgraphs communicating with other subgraphs, especially via higher order messages, can boost the expressivity of graph neural networks. In such architectures, to faithfully account for local structure such as cycles, the local operations must be equivariant to the automorphism group of the local environment.
However, enumerating the automorphism groups of all subgraphs of interest and finding appropriate equivariant operations for each one of them separately is generally not feasible. In this paper we propose a solution to this problem based on spectral graph theory that bypasses
having to determine the automorphism group entirely and constructs a basis for equivariant operations directly from the graph Laplacian.
We show empirically that this approach can boost the performance of GNNs. | Schur Nets: exploiting local structure for equivariance in higher order graph neural networks | [
"QINGQI ZHANG",
"Ruize Xu",
"Risi Kondor"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=HRkniCWM3E | @inproceedings{
gao2024neural,
title={Neural Pfaffians: Solving Many Many-Electron Schr\"odinger Equations},
author={Nicholas Gao and Stephan G{\"u}nnemann},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HRkniCWM3E}
} | Neural wave functions have achieved unprecedented accuracy in approximating the ground state of many-electron systems, though at a high computational cost. Recent works proposed amortizing the cost by learning generalized wave functions across different structures and compounds instead of solving each problem independently. Enforcing the permutation antisymmetry of electrons in such generalized neural wave functions has remained challenging, as existing methods require discrete orbital selection via non-learnable hand-crafted algorithms. This work tackles the problem by defining overparametrized, fully learnable neural wave functions suitable for generalization across molecules. We achieve this by relying on Pfaffians rather than Slater determinants. The Pfaffian allows us to enforce the antisymmetry on arbitrary electronic systems without any constraint on electronic spin configurations or molecular structure. Our empirical evaluation finds that a single neural Pfaffian calculates the ground state and ionization energies with chemical accuracy across various systems. On the TinyMol dataset, we outperform the `gold-standard' CCSD(T) CBS reference energies by 1.9m$E_h$ and reduce energy errors compared to previous generalized neural wave functions by up to an order of magnitude. | Neural Pfaffians: Solving Many Many-Electron Schrödinger Equations | [
"Nicholas Gao",
"Stephan Günnemann"
] | NeurIPS.cc/2024/Conference | 2405.14762 | [
"https://github.com/n-gao/neural-pfaffian"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=HQgHCVZiHw | @inproceedings{
cao2024is,
title={Is Score Matching Suitable for Estimating Point Processes?},
author={Haoqun Cao and Zizhuo Meng and Tianjun Ke and Feng Zhou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HQgHCVZiHw}
} | Score matching estimators for point processes have gained widespread attention in recent years because they do not require the calculation of intensity integrals, thereby effectively addressing the computational challenges in maximum likelihood estimation (MLE). Some existing works have proposed score matching estimators for point processes. However, this work demonstrates that the incompleteness of the estimators proposed in those works renders them applicable only to specific problems, and they fail for more general point processes. To address this issue, this work introduces the weighted score matching estimator to point processes. Theoretically, we prove the consistency of the estimator we propose. Experimental results indicate that our estimator accurately estimates model parameters on synthetic data and yields results consistent with MLE on real data. In contrast, existing score matching estimators fail to perform effectively. Codes are publicly available at \url{https://github.com/KenCao2007/WSM_TPP}. | Is Score Matching Suitable for Estimating Point Processes? | [
"Haoqun Cao",
"Zizhuo Meng",
"Tianjun Ke",
"Feng Zhou"
] | NeurIPS.cc/2024/Conference | 2410.04037 | [
"https://github.com/kencao2007/wsm_tpp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HPvIf4w5Dd | @inproceedings{
tuynman2024finding,
title={Finding good policies in average-reward Markov Decision Processes without prior knowledge},
author={Adrienne Tuynman and R{\'e}my Degenne and Emilie Kaufmann},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HPvIf4w5Dd}
} | We revisit the identification of an $\varepsilon$-optimal policy in average-reward Markov Decision Processes (MDP). In such MDPs, two measures of complexity have appeared in the literature: the diameter, $D$, and the optimal bias span, $H$, which satisfy $H\leq D$.
Prior work has studied the complexity of $\varepsilon$-optimal policy identification only when a generative model is available. In this case, it is known that there exists an MDP with $D \simeq H$ for which the sample complexity to output an $\varepsilon$-optimal policy is $\Omega(SAD/\varepsilon^2)$ where $S$ and $A$ are the sizes of the state and action spaces. Recently, an algorithm with a sample complexity of order $SAH/\varepsilon^2$ has been proposed, but it requires the knowledge of $H$. We first show that the sample complexity required to estimate $H$ is not bounded by any function of $S,A$ and $H$, ruling out the possibility of easily making the previous algorithm agnostic to $H$. By relying instead on a diameter estimation procedure, we propose the first algorithm for $(\varepsilon,\delta)$-PAC policy identification that does not need any form of prior knowledge on the MDP. Its sample complexity scales in $SAD/\varepsilon^2$ in the regime of small $\varepsilon$, which is near-optimal. In the online setting, our first contribution is a lower bound which implies that a sample complexity polynomial in $H$ cannot be achieved in this setting. Then, we propose an online algorithm with a sample complexity in $SAD^2/\varepsilon^2$, as well as a novel approach based on a data-dependent stopping rule that we believe is promising to further reduce this bound. | Finding good policies in average-reward Markov Decision Processes without prior knowledge | [
"Adrienne Tuynman",
"Rémy Degenne",
"Emilie Kaufmann"
] | NeurIPS.cc/2024/Conference | 2405.17108 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HOSh0SKklE | @inproceedings{
lang2024theoretical,
title={Theoretical Analysis of Weak-to-Strong Generalization},
author={Hunter Lang and David Sontag and Aravindan Vijayaraghavan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HOSh0SKklE}
} | Strong student models can learn from weaker teachers: when trained on the predictions of a weaker model, a strong pretrained student can learn to correct the weak model’s errors and generalize to examples where the teacher is not confident, even when these examples are excluded from training. This enables learning from cheap, incomplete, and possibly incorrect label information, such as coarse logical rules or the generations of a language model. We show that existing weak supervision theory results fail to account for both of these effects, which we call pseudolabel correction and coverage expansion, respectively. We give a new bound based on expansion properties of the data distribution and student hypothesis class that directly accounts for pseudolabel correction and coverage expansion. Our bound generalizes results from the co-training and self-training literature and captures the intuition that weak-to-strong generalization occurs when the mistakes of the weak model are hard for the strong model to fit without incurring additional error. We show that these expansion properties can be checked from finite data and give empirical evidence that they hold in practice. | Theoretical Analysis of Weak-to-Strong Generalization | [
"Hunter Lang",
"David Sontag",
"Aravindan Vijayaraghavan"
] | NeurIPS.cc/2024/Conference | 2405.16043 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HNH1ykRjXf | @inproceedings{
wu2024online,
title={Online Feature Updates Improve Online (Generalized) Label Shift Adaptation},
author={Ruihan Wu and Siddhartha Datta and Yi Su and Dheeraj Baby and Yu-Xiang Wang and Kilian Q Weinberger},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HNH1ykRjXf}
} | This paper addresses the prevalent issue of label shift in an online setting with missing labels, where data distributions change over time and obtaining timely labels is challenging. While existing methods primarily focus on adjusting or updating the final layer of a pre-trained classifier, we explore the untapped potential of enhancing feature representations using unlabeled data at test-time. Our novel method, Online Label Shift adaptation with Online Feature Updates (OLS-OFU), leverages self-supervised learning to refine the feature extraction process, thereby improving the prediction model. By carefully designing the algorithm, we show theoretically that OLS-OFU maintains online regret convergence similar to results in the literature while taking the improved features into account. Empirically, it achieves substantial improvements over existing methods, gains as significant as those existing methods achieve over the baseline (i.e., without distribution shift adaptation). | Online Feature Updates Improve Online (Generalized) Label Shift Adaptation | [
"Ruihan Wu",
"Siddhartha Datta",
"Yi Su",
"Dheeraj Baby",
"Yu-Xiang Wang",
"Kilian Q Weinberger"
] | NeurIPS.cc/2024/Conference | 2402.03545 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HN05DQxyLl | @inproceedings{
gowri2024approximating,
title={Approximating mutual information of high-dimensional variables using learned representations},
author={Gokul Gowri and Xiaokang Lun and Allon M Klein and Peng Yin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HN05DQxyLl}
} | Mutual information (MI) is a general measure of statistical dependence with widespread application across the sciences. However, estimating MI between multi-dimensional variables is challenging because the number of samples necessary to converge to an accurate estimate scales unfavorably with dimensionality. In practice, existing techniques can reliably estimate MI in up to tens of dimensions, but fail in higher dimensions, where sufficient sample sizes are infeasible. Here, we explore the idea that underlying low-dimensional structure in high-dimensional data can be exploited to faithfully approximate MI in high-dimensional settings with realistic sample sizes. We develop a method that we call latent MI (LMI) approximation, which applies a nonparametric MI estimator to low-dimensional representations learned by a simple, theoretically-motivated model architecture. Using several benchmarks, we show that unlike existing techniques, LMI can approximate MI well for variables with $> 10^3$ dimensions if their dependence structure is captured by low-dimensional representations. Finally, we showcase LMI on two open problems in biology. First, we approximate MI between protein language model (pLM) representations of interacting proteins, and find that pLMs encode non-trivial information about protein-protein interactions. Second, we quantify cell fate information contained in single-cell RNA-seq (scRNA-seq) measurements of hematopoietic stem cells, and find a sharp transition during neutrophil differentiation when fate information captured by scRNA-seq increases dramatically. An implementation of LMI is available at *latentmi.readthedocs.io.* | Approximating mutual information of high-dimensional variables using learned representations | [
"Gokul Gowri",
"Xiaokang Lun",
"Allon M Klein",
"Peng Yin"
] | NeurIPS.cc/2024/Conference | 2409.02732 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=HGnxhHz6ss | @inproceedings{
hu2024learning,
title={Learning Image Priors Through Patch-Based Diffusion Models for Solving Inverse Problems},
author={Jason Hu and Bowen Song and Xiaojian Xu and Liyue Shen and Jeffrey A Fessler},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HGnxhHz6ss}
} | Diffusion models can learn strong image priors from the underlying data distribution and use them to solve inverse problems, but the training process is computationally expensive and requires lots of data. Such bottlenecks prevent most existing works from being feasible for high-dimensional and high-resolution data such as 3D images. This paper proposes a method to learn an efficient data prior for the entire image by training diffusion models only on patches of images. Specifically, we propose a patch-based position-aware diffusion inverse solver, called PaDIS, where we obtain the score function of the whole image through scores of patches and their positional encoding and utilize this as the prior for solving inverse problems. First of all, we show that this diffusion model achieves improved memory efficiency and data efficiency while still maintaining the capability to generate entire images via positional encoding. Additionally, the proposed PaDIS model is highly flexible and can be plugged in with different diffusion inverse solvers (DIS). We demonstrate that the proposed PaDIS approach enables solving various inverse problems in both natural and medical image domains, including CT reconstruction, deblurring, and superresolution, given only patch-based priors. Notably, PaDIS outperforms previous DIS methods trained on entire-image priors in the case of limited training data, demonstrating the data efficiency of our proposed approach by learning patch-based priors. | Learning Image Priors Through Patch-Based Diffusion Models for Solving Inverse Problems | [
"Jason Hu",
"Bowen Song",
"Xiaojian Xu",
"Liyue Shen",
"Jeffrey A Fessler"
] | NeurIPS.cc/2024/Conference | 2406.02462 | [
"https://github.com/sundeco/padis"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HGNTcy4eEp | @inproceedings{
jin2024learning,
title={Learning Group Actions on Latent Representations},
author={Yinzhu Jin and Aman Shrivastava and Tom Fletcher},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HGNTcy4eEp}
} | In this work, we introduce a new approach to model group actions in autoencoders. Diverging from prior research in this domain, we propose to learn the group actions on the latent space rather than strictly on the data space. This adaptation enhances the versatility of our model, enabling it to learn a broader range of scenarios prevalent in the real world, where groups can act on latent factors. Our method allows a wide flexibility in the encoder and decoder architectures and does not require group-specific layers. In addition, we show that our model theoretically serves as a superset of methods that learn group actions on the data space. We test our approach on five image datasets with diverse groups acting on them and demonstrate superior performance to recently proposed methods for modeling group actions. | Learning Group Actions on Latent Representations | [
"Yinzhu Jin",
"Aman Shrivastava",
"Tom Fletcher"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=HFS800reZK | @inproceedings{
rozonoyer2024learning,
title={Learning Representations for Hierarchies with Minimal Support},
author={Benjamin Rozonoyer and Michael Boratko and Dhruvesh Patel and Wenlong Zhao and Shib Sankar Dasgupta and Hung Le and Andrew McCallum},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HFS800reZK}
} | When training node embedding models to represent large directed graphs (digraphs), it is impossible to observe all entries of the adjacency matrix during training. As a consequence, most methods employ sampling. For very large digraphs, however, this means many (most) entries may be unobserved during training. In general, observing every entry would be necessary to uniquely identify a graph; however, if we know the graph has a certain property, some entries can be omitted - for example, only half the entries would be required for a symmetric graph. In this work, we develop a novel framework to identify a subset of entries required to uniquely distinguish a graph among all transitively-closed DAGs. We give an explicit algorithm to compute the provably minimal set of entries, and demonstrate empirically that one can train node embedding models with greater efficiency and performance, provided the energy function has an appropriate inductive bias. We achieve robust performance on synthetic hierarchies and a larger real-world taxonomy, observing improved convergence rates in a resource-constrained setting while reducing the set of training examples by as much as 99%. | Learning Representations for Hierarchies with Minimal Support | [
"Benjamin Rozonoyer",
"Michael Boratko",
"Dhruvesh Patel",
"Wenlong Zhao",
"Shib Sankar Dasgupta",
"Hung Le",
"Andrew McCallum"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=HDVsiUHQ1w | @inproceedings{
ragano2024scoreq,
title={{SCOREQ}: Speech Quality Assessment with Contrastive Regression},
author={Alessandro Ragano and Jan Skoglund and Andrew Hines},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HDVsiUHQ1w}
} | In this paper, we present SCOREQ, a novel approach for speech quality prediction. SCOREQ is a triplet loss function for contrastive regression that addresses the domain generalisation shortcoming exhibited by state-of-the-art no-reference speech quality metrics. In the paper we: (i) illustrate the problem of L2 loss training failing at capturing the continuous nature of the mean opinion score (MOS) labels; (ii) demonstrate the lack of generalisation through a benchmarking evaluation across several speech domains; (iii) outline our approach and explore the impact of the architectural design decisions through incremental evaluation; (iv) evaluate the final model against state-of-the-art models for a wide variety of data and domains. The results show that the lack of generalisation observed in state-of-the-art speech quality metrics is addressed by SCOREQ. We conclude that using a triplet loss function for contrastive regression improves generalisation for speech quality prediction models but also has potential utility across a wide range of applications using regression-based predictive models. | SCOREQ: Speech Quality Assessment with Contrastive Regression | [
"Alessandro Ragano",
"Jan Skoglund",
"Andrew Hines"
] | NeurIPS.cc/2024/Conference | 2410.06675 | [
"https://github.com/alessandroragano/scoreq"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HCTikT7LS4 | @inproceedings{
young2024enhancing,
title={Enhancing Robustness in Deep Reinforcement Learning: A Lyapunov Exponent Approach},
author={Rory Young and Nicolas Pugeault},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HCTikT7LS4}
} | Deep reinforcement learning agents achieve state-of-the-art performance in a wide range of simulated control tasks. However, successful applications to real-world problems remain limited. One reason for this dichotomy is that the learnt policies are not robust to observation noise or adversarial attacks. In this paper, we investigate the robustness of deep RL policies to a single small state perturbation in deterministic continuous control tasks. We demonstrate that RL policies can be deterministically chaotic, as small perturbations to the system state have a large impact on subsequent state and reward trajectories. This unstable non-linear behaviour has two consequences: first, inaccuracies in sensor readings, or adversarial attacks, can cause significant performance degradation; second, even policies that show robust performance in terms of rewards may have unpredictable behaviour in practice. These two facets of chaos in RL policies drastically restrict the application of deep RL to real-world problems. To address this issue, we propose an improvement on the successful Dreamer V3 architecture, implementing Maximal Lyapunov Exponent regularisation. This new approach reduces the chaotic state dynamics, rendering the learnt policies more resilient to sensor noise or adversarial attacks and thereby improving the suitability of deep reinforcement learning for real-world applications. | Enhancing Robustness in Deep Reinforcement Learning: A Lyapunov Exponent Approach | [
"Rory Young",
"Nicolas Pugeault"
] | NeurIPS.cc/2024/Conference | 2410.10674 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HC6iqpPt3L | @inproceedings{
jain2024adaptive,
title={Adaptive Exploration for Data-Efficient General Value Function Evaluations},
author={Arushi Jain and Josiah P. Hanna and Doina Precup},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HC6iqpPt3L}
} | General Value Functions (GVFs) (Sutton et al., 2011) represent predictive knowledge in reinforcement learning. Each GVF computes the expected return for a given policy, based on a unique reward. Existing methods relying on fixed behavior policies or pre-collected data often face data efficiency issues when learning multiple GVFs in parallel using off-policy methods. To address this, we introduce *GVFExplorer*, which adaptively learns a single behavior policy that efficiently collects data for evaluating multiple GVFs in parallel. Our method optimizes the behavior policy by minimizing the total variance in return across GVFs, thereby reducing the required environmental interactions. We use an existing temporal-difference-style variance estimator to approximate the return variance. We prove that each behavior policy update decreases the overall mean squared error in GVF predictions. We empirically show our method's performance in tabular and nonlinear function approximation settings, including Mujoco environments, with stationary and non-stationary reward signals, optimizing data usage and reducing prediction errors across multiple GVFs. | Adaptive Exploration for Data-Efficient General Value Function Evaluations | [
"Arushi Jain",
"Josiah P. Hanna",
"Doina Precup"
] | NeurIPS.cc/2024/Conference | 2405.07838 | [
"https://github.com/arushijain94/explorationofgvfs"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HBj86RMdZ8 | @inproceedings{
song2024the,
title={The Importance of Online Data: Understanding Preference Fine-tuning via Coverage},
author={Yuda Song and Gokul Swamy and Aarti Singh and Drew Bagnell and Wen Sun},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HBj86RMdZ8}
} | Learning from human preference data has emerged as the dominant paradigm for fine-tuning large language models (LLMs). The two most common families of techniques -- online reinforcement learning (RL) such as Proximal Policy Optimization (PPO) and offline contrastive methods such as Direct Preference Optimization (DPO) -- were positioned as equivalent in prior work due to the fact that both have to start from the same offline preference dataset. To further expand our theoretical understanding of the similarities and differences between online and offline techniques for preference fine-tuning, we conduct a rigorous analysis through the lens of *dataset coverage*, a concept that captures how the training data covers the test distribution and is widely used in RL. We prove that a global coverage condition is both necessary and sufficient for offline contrastive methods to converge to the optimal policy, but a weaker partial coverage condition suffices for online RL methods. This separation provides one explanation of why online RL methods can perform better than offline methods, especially when the offline preference data is not diverse enough. Finally, motivated by our preceding theoretical observations, we derive a hybrid preference optimization (HyPO) algorithm that uses offline data for contrastive-based preference optimization and online unlabeled data for KL regularization. Theoretically and empirically, we demonstrate that HyPO is more performant than its pure offline counterpart DPO, while still preserving its computation and memory efficiency. | The Importance of Online Data: Understanding Preference Fine-tuning via Coverage | [
"Yuda Song",
"Gokul Swamy",
"Aarti Singh",
"Drew Bagnell",
"Wen Sun"
] | NeurIPS.cc/2024/Conference | 2406.01462 | [
""
] | https://huggingface.co/papers/2406.01462 | 0 | 6 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=HB6KaCFiMN | @inproceedings{
jiang2024animated,
title={Animate3D: Animating Any 3D Model with Multi-view Video Diffusion},
author={Yanqin Jiang and Chaohui Yu and Chenjie Cao and Fan Wang and Weiming Hu and Jin Gao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HB6KaCFiMN}
} | Recent advances in 4D generation mainly focus on generating 4D content by distilling pre-trained text or single-view image conditioned models. It is inconvenient for them to take advantage of various off-the-shelf 3D assets with multi-view attributes, and their results suffer from spatiotemporal inconsistency owing to the inherent ambiguity in the supervision signals. In this work, we present Animate3D, a novel framework for animating any static 3D model. The core idea is two-fold: 1) We propose a novel multi-view video diffusion model (MV-VDM) conditioned on multi-view renderings of the static 3D object, which is trained on our presented large-scale multi-view video dataset (MV-Video). 2) Based on MV-VDM, we introduce a framework combining reconstruction and 4D Score Distillation Sampling (4D-SDS) to leverage the multi-view video diffusion priors for animating 3D objects. Specifically, for MV-VDM, we design a new spatiotemporal attention module to enhance spatial and temporal consistency by integrating 3D and video diffusion models. Additionally, we leverage the static 3D model’s multi-view renderings as conditions to preserve its identity. For animating 3D models, an effective two-stage pipeline is proposed: we first reconstruct coarse motions directly from generated multi-view videos, followed by the introduced 4D-SDS to model fine-level motions. Benefiting from accurate motion learning, we could achieve straightforward mesh animation. Qualitative and quantitative experiments demonstrate that Animate3D significantly outperforms previous approaches. Data, code, and models are open-released. | Animate3D: Animating Any 3D Model with Multi-view Video Diffusion | [
"Yanqin Jiang",
"Chaohui Yu",
"Chenjie Cao",
"Fan Wang",
"Weiming Hu",
"Jin Gao"
] | NeurIPS.cc/2024/Conference | 2407.11398 | [
""
] | https://huggingface.co/papers/2407.11398 | 1 | 8 | 2 | 6 | [
"yanqinJiang/animate3d"
] | [
"yanqinJiang/MV-Video"
] | [] | [
"yanqinJiang/animate3d"
] | [
"yanqinJiang/MV-Video"
] | [] | 1 | poster |
null | https://openreview.net/forum?id=HAocQ9dSAX | @inproceedings{
chen2024dogs,
title={{DOGS}: Distributed-Oriented Gaussian Splatting for Large-Scale 3D Reconstruction Via Gaussian Consensus},
author={Yu Chen and Gim Hee Lee},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HAocQ9dSAX}
} | The recent advances in 3D Gaussian Splatting (3DGS) show promising results on the novel view synthesis (NVS) task. With its superior rendering performance and high-fidelity rendering quality, 3DGS surpasses its NeRF predecessors. Recent 3DGS methods focus either on improving rendering stability and efficiency or on reducing model size. On the other hand, the training efficiency of 3DGS on large-scale scenes has not gained much attention. In this work, we propose DoGaussian, a method that trains 3DGS distributedly. Our method first decomposes a scene into $K$ blocks and then introduces the Alternating Direction Method of Multipliers (ADMM) into the training procedure of 3DGS. During training, our DoGaussian maintains one global 3DGS model on the master node and $K$ local 3DGS models on the slave nodes. The $K$ local 3DGS models are dropped after training and we only query the global 3DGS model during inference. The training time is reduced by scene decomposition, and the training convergence and stability are guaranteed through the consensus on the shared 3D Gaussians. Our method accelerates the training of 3DGS by $6+$ times when evaluated on large-scale scenes while concurrently achieving state-of-the-art rendering quality. Our code is publicly available at [https://github.com/AIBluefisher/DOGS](https://github.com/AIBluefisher/DOGS). | DOGS: Distributed-Oriented Gaussian Splatting for Large-Scale 3D Reconstruction Via Gaussian Consensus | [
"Yu Chen",
"Gim Hee Lee"
] | NeurIPS.cc/2024/Conference | [
"https://github.com/aibluefisher/dogs"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=HAcaANQNMK | @inproceedings{
sakr2024espace,
title={{ESPACE}: Dimensionality Reduction of Activations for Model Compression},
author={Charbel Sakr and Brucek Khailany},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=HAcaANQNMK}
} | We propose ESPACE, an LLM compression technique based on dimensionality reduction of activations. Unlike prior works on weight-centric tensor decomposition, ESPACE projects activations onto a pre-calibrated set of principal components. The activation-centrality of the approach enables retraining LLMs with no loss of expressivity; while at inference, weight decomposition is obtained as a byproduct of matrix multiplication associativity. Theoretical results on the construction of projection matrices with optimal computational accuracy are provided. Experimentally, we find ESPACE enables 50% compression of GPT3, Llama2, and Nemotron4 models with small accuracy degradation, as low as a 0.18 perplexity increase on GPT3-22B. At lower compression rates of 20% to 40%, ESPACE drives GPT3 models to outperform their baselines by up to a 0.38 decrease in perplexity for GPT3-8B. ESPACE also reduces GEMM execution time and prefill inference latency on existing hardware. Comparison with related works on compressing Llama2-7B via matrix factorization shows that ESPACE is a first step in advancing the state-of-the-art in tensor decomposition compression of LLMs. | ESPACE: Dimensionality Reduction of Activations for Model Compression | [
"Charbel Sakr",
"Brucek Khailany"
] | NeurIPS.cc/2024/Conference | 2410.05437 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=H7qVZ0Zu8E | @inproceedings{
kuruzov2024achieving,
title={Achieving Linear Convergence with Parameter-Free Algorithms in Decentralized Optimization},
author={Ilya Kuruzov and Gesualdo Scutari and Alexander Gasnikov},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=H7qVZ0Zu8E}
} | This paper addresses the minimization of the sum of strongly convex, smooth functions over a network of agents without a centralized server. Existing decentralized algorithms require knowledge of functions and network parameters, such as the Lipschitz constant of the global gradient and/or network connectivity, for hyperparameter tuning. Agents usually cannot access this information, leading to conservative selections and slow convergence or divergence. This paper introduces a decentralized algorithm that eliminates the need for specific parameter tuning. Our approach employs an operator splitting technique with a novel variable metric, enabling a local backtracking line-search to adaptively select the stepsize without global information or extensive communications. This results in favorable convergence guarantees and dependence on optimization and network parameters compared to existing nonadaptive methods. Notably, our method is the first adaptive decentralized algorithm that achieves linear convergence for strongly convex, smooth objectives. Preliminary numerical experiments support our theoretical findings, demonstrating superior performance in convergence speed and scalability. | Achieving Linear Convergence with Parameter-Free Algorithms in Decentralized Optimization | [
"Ilya Kuruzov",
"Gesualdo Scutari",
"Alexander Gasnikov"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=H7mENkYB2J | @inproceedings{
prillo2024ultrafast,
title={Ultrafast classical phylogenetic method beats large protein language models on variant effect prediction},
author={Sebastian Prillo and Wilson Y. Wu and Yun S. Song},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=H7mENkYB2J}
} | Amino acid substitution rate matrices are fundamental to statistical phylogenetics and evolutionary biology. Estimating them typically requires reconstructed trees for massive amounts of aligned proteins, which poses a major computational bottleneck. In this paper, we develop a near linear-time method to estimate these rate matrices from multiple sequence alignments (MSAs) alone, thereby speeding up computation by orders of magnitude. Our method can be easily applied to MSAs with millions of sequences. On both simulated and real data, we demonstrate the speed and accuracy of our method as applied to the classical model of protein evolution. By leveraging the unprecedented scalability of our method, we develop a new, rich phylogenetic model called \textit{SiteRM}, which can estimate a general \textit{site-specific} rate matrix for each column of an MSA. Remarkably, in variant effect prediction for both clinical and deep mutational scanning data in ProteinGym, we show that despite being an independent-sites model, our SiteRM model outperforms large protein language models that learn complex residue-residue interactions between different sites. We attribute our increased performance to conceptual advances in our probabilistic treatment of evolutionary data and our ability to handle extremely large MSAs. We anticipate that our work will have a lasting impact across both statistical phylogenetics and computational variant effect prediction. | Ultrafast classical phylogenetic method beats large protein language models on variant effect prediction | [
"Sebastian Prillo",
"Wilson Y. Wu",
"Yun S. Song"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=H7SaaqfCUi | @inproceedings{
kostic2024learning,
title={Learning the Infinitesimal Generator of Stochastic Diffusion Processes},
author={Vladimir R Kostic and H{\'e}l{\`e}ne Halconruy and Timoth{\'e}e Devergne and Karim Lounici and Massimiliano Pontil},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=H7SaaqfCUi}
} | We address data-driven learning of the infinitesimal generator of stochastic diffusion processes, essential for understanding numerical simulations of natural and physical systems. The unbounded nature of the generator poses significant challenges, rendering conventional analysis techniques for Hilbert-Schmidt operators ineffective. To overcome this, we introduce a novel framework based on the energy functional for these stochastic processes. Our approach integrates physical priors through an energy-based risk metric in both full and partial knowledge settings. We evaluate the statistical performance of a reduced-rank estimator in reproducing kernel Hilbert spaces (RKHS) in the partial knowledge setting. Notably, our approach provides learning bounds independent of the state space dimension and ensures non-spurious spectral estimation. Additionally, we elucidate how the distortion between the intrinsic energy-induced metric of the stochastic diffusion and the RKHS metric used for generator estimation impacts the spectral learning bounds. | Learning the Infinitesimal Generator of Stochastic Diffusion Processes | [
"Vladimir R Kostic",
"Hélène Halconruy",
"Timothée Devergne",
"Karim Lounici",
"Massimiliano Pontil"
] | NeurIPS.cc/2024/Conference | 2405.12940 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=H6C4p8Dir7 | @inproceedings{
wang2024omnitokenizer,
title={OmniTokenizer: A Joint Image-Video Tokenizer for Visual Generation},
author={Junke Wang and Yi Jiang and Zehuan Yuan and BINGYUE PENG and Zuxuan Wu and Yu-Gang Jiang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=H6C4p8Dir7}
} | Tokenizer, serving as a translator to map the intricate visual data into a compact latent space, lies at the core of visual generative models. Based on the finding that existing tokenizers are tailored to either image or video inputs, this paper presents OmniTokenizer, a transformer-based tokenizer for joint image and video tokenization. OmniTokenizer is designed with a spatial-temporal decoupled architecture, which integrates window attention and causal attention for spatial and temporal modeling, respectively. To exploit the complementary nature of image and video data, we further propose a progressive training strategy, where OmniTokenizer is first trained on image data on a fixed resolution to develop the spatial encoding capacity and then jointly trained on image and video data on multiple resolutions to learn the temporal dynamics. OmniTokenizer, for the first time, handles both image and video inputs within a unified framework and proves the possibility of realizing their synergy. Extensive experiments demonstrate that OmniTokenizer achieves state-of-the-art (SOTA) reconstruction performance on various image and video datasets, e.g., 1.11 reconstruction FID on ImageNet and 42 reconstruction FVD on UCF-101, beating the previous SOTA methods by 13% and 26%, respectively. Additionally, we also show that when integrated with OmniTokenizer, both language model-based approaches and diffusion models can realize advanced visual synthesis performance, underscoring the superiority and versatility of our method. | OmniTokenizer: A Joint Image-Video Tokenizer for Visual Generation | [
"Junke Wang",
"Yi Jiang",
"Zehuan Yuan",
"BINGYUE PENG",
"Zuxuan Wu",
"Yu-Gang Jiang"
] | NeurIPS.cc/2024/Conference | 2406.09399 | [
"https://github.com/foundationvision/omnitokenizer"
] | https://huggingface.co/papers/2406.09399 | 0 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=H5z0XqEX57 | @inproceedings{
miyagawa2024physicsinformed,
title={Physics-informed Neural Networks for Functional Differential Equations: Cylindrical Approximation and Its Convergence Guarantees},
author={Taiki Miyagawa and Takeru Yokota},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=H5z0XqEX57}
} | We propose the first learning scheme for functional differential equations (FDEs). FDEs play a fundamental role in physics, mathematics, and optimal control. However, the numerical analysis of FDEs has faced challenges due to prohibitive computational costs and has been a long-standing problem for decades. Thus, numerical approximations of FDEs have been developed, but they often oversimplify the solutions. To tackle these two issues, we propose a hybrid approach combining physics-informed neural networks (PINNs) with the *cylindrical approximation*. The cylindrical approximation expands functions and functional derivatives with an orthonormal basis and transforms FDEs into high-dimensional PDEs. To validate the reliability of the cylindrical approximation for FDE applications, we prove the convergence theorems of approximated functional derivatives and solutions. Then, the derived high-dimensional PDEs are numerically solved with PINNs. Through the capabilities of PINNs, our approach can handle a broader class of functional derivatives more efficiently than conventional discretization-based methods, improving the scalability of the cylindrical approximation. As a proof of concept, we conduct experiments on two FDEs and demonstrate that our model can successfully achieve typical $L^1$ relative error orders of PINNs $\sim 10^{-3}$. Overall, our work provides a strong backbone for physicists, mathematicians, and machine learning experts to analyze previously challenging FDEs, thereby democratizing their numerical analysis, which has received limited attention. | Physics-informed Neural Networks for Functional Differential Equations: Cylindrical Approximation and Its Convergence Guarantees | [
"Taiki Miyagawa",
"Takeru Yokota"
] | NeurIPS.cc/2024/Conference | 2410.18153 | [
"https://github.com/taikimiyagawa/functionalpinn"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=H3at5y8VFW | @inproceedings{
tang2024selfretrieval,
title={Self-Retrieval: End-to-End Information Retrieval with One Large Language Model},
author={Qiaoyu Tang and Jiawei Chen and Zhuoqun Li and Bowen Yu and Yaojie Lu and ChengFu and Haiyang Yu and Hongyu Lin and Fei Huang and Ben He and Xianpei Han and Le Sun and Yongbin Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=H3at5y8VFW}
} | The rise of large language models (LLMs) has significantly transformed both the construction and application of information retrieval (IR) systems.
However, current interactions between IR systems and LLMs remain limited, with LLMs merely serving as components within IR systems, and IR systems being constructed independently of LLMs. This separate architecture restricts knowledge sharing and deep collaboration between them.
In this paper, we introduce Self-Retrieval, a novel end-to-end LLM-driven information retrieval architecture.
Self-Retrieval unifies all essential IR functions within a single LLM, leveraging the inherent capabilities of LLMs throughout the IR process.
Specifically, Self-Retrieval internalizes the retrieval corpus through self-supervised learning, transforms the retrieval process into sequential passage generation, and performs relevance assessment for reranking.
Experimental results demonstrate that Self-Retrieval not only outperforms existing retrieval approaches by a significant margin, but also substantially enhances the performance of LLM-driven downstream applications like retrieval-augmented generation. | Self-Retrieval: End-to-End Information Retrieval with One Large Language Model | [
"Qiaoyu Tang",
"Jiawei Chen",
"Zhuoqun Li",
"Bowen Yu",
"Yaojie Lu",
"ChengFu",
"Haiyang Yu",
"Hongyu Lin",
"Fei Huang",
"Ben He",
"Xianpei Han",
"Le Sun",
"Yongbin Li"
] | NeurIPS.cc/2024/Conference | 2403.00801 | [
"https://github.com/icip-cas/selfretrieval"
] | https://huggingface.co/papers/2403.00801 | 4 | 2 | 0 | 12 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=H2ATO32ilj | @inproceedings{
li2024art,
title={{ART}: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users},
author={Guanlin Li and Kangjie Chen and Shudong Zhang and Jie Zhang and Tianwei Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=H2ATO32ilj}
} | Large-scale pre-trained generative models are taking the world by storm, due to their abilities in generating creative content. Meanwhile, safeguards for these generative models are developed, to protect users' rights and safety, most of which are designed for large language models. Existing methods primarily focus on jailbreak and adversarial attacks, which mainly evaluate the model's safety under malicious prompts. Recent work found that manually crafted safe prompts can unintentionally trigger unsafe generations. To further systematically evaluate the safety risks of text-to-image models, we propose a novel Automatic Red-Teaming framework, ART. Our method leverages both vision language model and large language model to establish a connection between unsafe generations and their prompts, thereby more efficiently identifying the model's vulnerabilities. With our comprehensive experiments, we reveal the toxicity of the popular open-source text-to-image models. The experiments also validate the effectiveness, adaptability, and great diversity of ART. Additionally, we introduce three large-scale red-teaming datasets for studying the safety risks associated with text-to-image models. Datasets and models can be found in https://github.com/GuanlinLee/ART. | ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users | [
"Guanlin Li",
"Kangjie Chen",
"Shudong Zhang",
"Jie Zhang",
"Tianwei Zhang"
] | NeurIPS.cc/2024/Conference | 2405.19360 | [
"https://github.com/guanlinlee/art"
] | https://huggingface.co/papers/2405.19360 | 0 | 0 | 0 | 5 | [
"AdamCodd/distilroberta-nsfw-prompt-stable-diffusion"
] | [] | [] | [
"AdamCodd/distilroberta-nsfw-prompt-stable-diffusion"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=H1NklRKPYi | @inproceedings{
zha2024lcm,
title={{LCM}: Locally Constrained Compact Point Cloud Model for Masked Point Modeling},
author={Yaohua Zha and Naiqi Li and Yanzi Wang and Tao Dai and Hang Guo and Bin Chen and Zhi Wang and Zhihao Ouyang and Shu-Tao Xia},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=H1NklRKPYi}
} | The pre-trained point cloud model based on Masked Point Modeling (MPM) has exhibited substantial improvements across various tasks. However, these models heavily rely on the Transformer, leading to quadratic complexity and limited decoder, hindering their practice application. To address this limitation, we first conduct a comprehensive analysis of existing Transformer-based MPM, emphasizing the idea that redundancy reduction is crucial for point cloud analysis. To this end, we propose a Locally constrained Compact point cloud Model (LCM) consisting of a locally constrained compact encoder and a locally constrained Mamba-based decoder. Our encoder replaces self-attention with our local aggregation layers to achieve an elegant balance between performance and efficiency. Considering the varying information density between masked and unmasked patches in the decoder inputs of MPM, we introduce a locally constrained Mamba-based decoder. This decoder ensures linear complexity while maximizing the perception of point cloud geometry information from unmasked patches with higher information density. Extensive experimental results show that our compact model significantly surpasses existing Transformer-based models in both performance and efficiency, especially our LCM-based Point-MAE model, compared to the Transformer-based model, achieved an improvement of 1.84%, 0.67%, and 0.60% in performance on the three variants of ScanObjectNN while reducing parameters by 88% and computation by 73%. The code is available at https://github.com/zyh16143998882/LCM. | LCM: Locally Constrained Compact Point Cloud Model for Masked Point Modeling | [
"Yaohua Zha",
"Naiqi Li",
"Yanzi Wang",
"Tao Dai",
"Hang Guo",
"Bin Chen",
"Zhi Wang",
"Zhihao Ouyang",
"Shu-Tao Xia"
] | NeurIPS.cc/2024/Conference | 2405.17149 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=H0qu4moFly | @inproceedings{
avdiukhin2024embedding,
title={Embedding Dimension of Contrastive Learning and \$k\$-Nearest Neighbors},
author={Dmitrii Avdiukhin and Vaggos Chatziafratis and Orr Fischer and Grigory Yaroslavtsev},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=H0qu4moFly}
} | We study the embedding dimension of distance comparison data in two settings: contrastive learning and $k$-nearest neighbors ($k$-NN). In both cases, the goal is to find the smallest dimension $d$ of an $\ell_p$-space in which a given dataset can be represented. We show that the arboricity of the associated graphs plays a key role in designing embeddings. Using this approach, for the most frequently used $\ell_2$-distance, we get matching upper and lower bounds in both settings.
In contrastive learning, we are given $m$ labeled samples of the form $(x_i, y_i^+, z_i^-)$ representing the fact that the positive example $y_i^+$ is closer to the anchor $x_i$ than the negative example $z_i^-$. We show that for representing such a dataset in:
- $\ell_2$: $d = \Theta(\sqrt{m})$ is necessary and sufficient.
- $\ell_p$ for $p \ge 1$: $d = O(m)$ is sufficient and $d = \tilde \Omega(\sqrt{m})$ is necessary.
- $\ell_\infty$: $d = O(m^{2/3})$ is sufficient and $d = \tilde \Omega(\sqrt{m})$ is necessary.
We also give results for the more general scenario when $t$ negatives are allowed.
In $k$-NN, for each of the $n$ data points we are given an ordered set of the closest $k$ points. We show that for preserving the ordering of the $k$-NN for every point in:
- $\ell_2$: $d = \Theta(k)$ is necessary and sufficient.
- $\ell_p$ for $p \ge 1$: $d = \tilde O(k^2)$ is sufficient and $d=\tilde \Omega(k)$ is necessary.
- $\ell_\infty$ : $d = \tilde \Omega(k)$ is necessary.
Furthermore, if the goal is to not just preserve the ordering of the $k$-NN but also keep them as the nearest neighbors then $d = \tilde O (\mathrm{poly}(k))$ suffices in $\ell_p$ for $p \ge 1$. | Embedding Dimension of Contrastive Learning and k-Nearest Neighbors | [
"Dmitrii Avdiukhin",
"Vaggos Chatziafratis",
"Orr Fischer",
"Grigory Yaroslavtsev"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GxwnQ8sxkL | @inproceedings{
devulapalli2024learning,
title={Learning from Snapshots of Discrete and Continuous Data Streams},
author={Pramith Devulapalli and Steve Hanneke},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GxwnQ8sxkL}
} | Imagine a smart camera trap selectively clicking pictures to understand animal movement patterns within a particular habitat. These "snapshots", or pieces of data captured from a data stream at adaptively chosen times, provide a glimpse of different animal movements unfolding through time. Learning a continuous-time process through snapshots, such as smart camera traps, is a central theme governing a wide array of online learning situations. In this paper, we adopt a learning-theoretic perspective in understanding the fundamental nature of learning different classes of functions from both discrete data streams and continuous data streams. In our first framework, the *update-and-deploy* setting, a learning algorithm discretely queries from a process to update a predictor designed to make predictions given as input the data stream. We construct a uniform sampling algorithm that can learn with bounded error any concept class with finite Littlestone dimension. Our second framework, known as the *blind-prediction* setting, consists of a learning algorithm generating predictions independently of observing the process, only engaging with the process when it chooses to make queries. Interestingly, we show a stark contrast in learnability where non-trivial concept classes are unlearnable. However, we show that adaptive learning algorithms are necessary to learn sets of time-dependent and data-dependent functions, called pattern classes, in either framework. Finally, we develop a theory of pattern classes under discrete data streams for the blind-prediction setting. | Learning from Snapshots of Discrete and Continuous Data Streams | [
"Pramith Devulapalli",
"Steve Hanneke"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GxvDsFArxY | @inproceedings{
duan2024phylogen,
title={PhyloGen: Language Model-Enhanced Phylogenetic Inference via Graph Structure Generation},
author={ChenRui Duan and Zelin Zang and Siyuan Li and Yongjie Xu and Stan Z. Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GxvDsFArxY}
} | Phylogenetic trees elucidate evolutionary relationships among species, but phylogenetic inference remains challenging due to the complexity of combining continuous (branch lengths) and discrete parameters (tree topology).
Traditional Markov Chain Monte Carlo methods face slow convergence and computational burdens. Existing Variational Inference methods, which require pre-generated topologies and typically treat tree structures and branch lengths independently, may overlook critical sequence features, limiting their accuracy and flexibility.
We propose PhyloGen, a novel method leveraging a pre-trained genomic language model to generate and optimize phylogenetic trees without dependence on evolutionary models or aligned sequence constraints. PhyloGen views phylogenetic inference as a conditionally constrained tree structure generation problem, jointly optimizing tree topology and branch lengths through three core modules: (i) Feature Extraction, (ii) PhyloTree Construction, and (iii) PhyloTree Structure Modeling.
Meanwhile, we introduce a Scoring Function to guide the model towards a more stable gradient descent.
We demonstrate the effectiveness and robustness of PhyloGen on eight real-world benchmark datasets. Visualization results confirm PhyloGen provides deeper insights into phylogenetic relationships. | PhyloGen: Language Model-Enhanced Phylogenetic Inference via Graph Structure Generation | [
"ChenRui Duan",
"Zelin Zang",
"Siyuan Li",
"Yongjie Xu",
"Stan Z. Li"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GvQU54uA7u | @inproceedings{
shukla2024preferencebased,
title={Preference-based Pure Exploration},
author={Apurv Shukla and Debabrota Basu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GvQU54uA7u}
} | We study the preference-based pure exploration problem for bandits with vector-valued rewards and a set of preferences imposed over them. Specifically, we aim to identify the most preferred policy over a set of arms according to the preferences induced on the reward vectors by an ordering cone $C$. First, to quantify the impact of preferences, we derive a novel lower bound on the sample complexity for identifying the most preferred arm with confidence level $1-\delta$. Our lower bound shows that how the geometry of the preferences and reward vectors changes the hardness of this problem. We further explicate this geometry for Gaussian distributions of rewards, and provide a convex reformulation of the lower bound solvable with linear programming. Then, we leverage this convex reformulation of the lower bound to design the Track and Stop with Preferences (TSwP) algorithm that identifies the most preferred policy. Finally, we derive a new concentration result for vector-valued rewards, and show that TSwP achieves a matching sample complexity upper bound. | Preference-based Pure Exploration | [
"Apurv Shukla",
"Debabrota Basu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Gug7wc0BSs | @inproceedings{
hu2024valuebased,
title={Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training},
author={Pihe Hu and Shaolong Li and Zhuoran Li and Ling Pan and Longbo Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Gug7wc0BSs}
} | Deep Multi-agent Reinforcement Learning (MARL) relies on neural networks with numerous parameters in multi-agent scenarios, often incurring substantial computational overhead. Consequently, there is an urgent need to expedite training and enable model compression in MARL. This paper proposes the utilization of dynamic sparse training (DST), a technique proven effective in deep supervised learning tasks, to alleviate the computational burdens in MARL training. However, a direct adoption of DST fails to yield satisfactory MARL agents, leading to breakdowns in value learning within deep sparse value-based MARL models. Motivated by this challenge, we introduce an innovative Multi-Agent Sparse Training (MAST) framework aimed at simultaneously enhancing the reliability of learning targets and the rationality of sample distribution to improve value learning in sparse models. Specifically, MAST incorporates the Soft Mellowmax Operator with a hybrid TD-($\lambda$) schema to establish dependable learning targets. Additionally, it employs a dual replay buffer mechanism to enhance the distribution of training samples. Building upon these aspects, MAST utilizes gradient-based topology evolution to exclusively train multiple MARL agents using sparse networks. Our comprehensive experimental investigation across various value-based MARL algorithms on multiple benchmarks demonstrates, for the first time, significant reductions in redundancy of up to $20\times$ in Floating Point Operations (FLOPs) for both training and inference, with less than 3% performance degradation. | Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training | [
"Pihe Hu",
"Shaolong Li",
"Zhuoran Li",
"Ling Pan",
"Longbo Huang"
] | NeurIPS.cc/2024/Conference | 2409.19391 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GuY0zB2xVU | @inproceedings{
koupa{\"\i}2024boosting,
title={Boosting Generalization in Parametric {PDE} Neural Solvers through Adaptive Conditioning},
author={Armand Kassa{\"\i} Koupa{\"\i} and Jorge Mifsut Benet and Yuan Yin and Jean-No{\"e}l Vittaut and Patrick Gallinari},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GuY0zB2xVU}
} | Solving parametric partial differential equations (PDEs) presents significant challenges for data-driven methods due to the sensitivity of spatio-temporal dynamics to variations in PDE parameters. Machine learning approaches often struggle to capture this variability. To address this, data-driven approaches learn parametric PDEs by sampling a very large variety of trajectories with varying PDE parameters. We first show that incorporating conditioning mechanisms for learning parametric PDEs is essential and that among them, \textit{adaptive conditioning}, allows stronger generalization. As existing adaptive conditioning methods do not scale well with respect to the number of parameters to adapt in the neural solver, we propose GEPS, a simple adaptation mechanism to boost GEneralization in Pde Solvers via a first-order optimization and low-rank rapid adaptation of a small set of context parameters. We demonstrate the versatility of our approach for both fully data-driven and for physics-aware neural solvers. Validation performed on a whole range of spatio-temporal forecasting problems demonstrates excellent performance for generalizing to unseen conditions including initial conditions, PDE coefficients, forcing terms and solution domain. *Project page*: https://geps-project.github.io | Boosting Generalization in Parametric PDE Neural Solvers through Adaptive Conditioning | [
"Armand Kassaï Koupaï",
"Jorge Mifsut Benet",
"Yuan Yin",
"Jean-Noël Vittaut",
"Patrick Gallinari"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GtbwJ6mruI | @inproceedings{
yu2024skillaware,
title={Skill-aware Mutual Information Optimisation for Zero-shot Generalisation in Reinforcement Learning},
author={Xuehui Yu and Mhairi Dunion and Xin Li and Stefano V Albrecht},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GtbwJ6mruI}
} | Meta-Reinforcement Learning (Meta-RL) agents can struggle to operate across tasks with varying environmental features that require different optimal skills (i.e., different modes of behaviour). Using context encoders based on contrastive learning to enhance the generalisability of Meta-RL agents is now widely studied but faces challenges such as the requirement for a large sample size, also referred to as the $\log$-$K$ curse. To improve RL generalisation to different tasks, we first introduce Skill-aware Mutual Information (SaMI), an optimisation objective that aids in distinguishing context embeddings according to skills, thereby equipping RL agents with the ability to identify and execute different skills across tasks. We then propose Skill-aware Noise Contrastive Estimation (SaNCE), a $K$-sample estimator used to optimise the SaMI objective. We provide a framework for equipping an RL agent with SaNCE in practice and conduct experimental validation on modified MuJoCo and Panda-gym benchmarks. We empirically find that RL agents that learn by maximising SaMI achieve substantially improved zero-shot generalisation to unseen tasks. Additionally, the context encoder trained with SaNCE demonstrates greater robustness to a reduction in the number of available samples, thus possessing the potential to overcome the $\log$-$K$ curse. | Skill-aware Mutual Information Optimisation for Zero-shot Generalisation in Reinforcement Learning | [
"Xuehui Yu",
"Mhairi Dunion",
"Xin Li",
"Stefano V Albrecht"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GtEmIzLZmR | @inproceedings{
taufiq2024achievable,
title={Achievable Fairness on Your Data With Utility Guarantees},
author={Muhammad Faaiz Taufiq and Jean-Francois Ton and Yang Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GtEmIzLZmR}
} | In machine learning fairness, training models that minimize disparity across different sensitive groups often leads to diminished accuracy, a phenomenon known as the fairness-accuracy trade-off. The severity of this trade-off inherently depends on dataset characteristics such as dataset imbalances or biases and therefore, using a uniform fairness requirement across diverse datasets remains questionable. To address this, we present a computationally efficient approach to approximate the fairness-accuracy trade-off curve tailored to individual datasets, backed by rigorous statistical guarantees. By utilizing the You-Only-Train-Once (YOTO) framework, our approach mitigates the computational burden of having to train multiple models when approximating the trade-off curve. Crucially, we introduce a novel methodology for quantifying uncertainty in our estimates, thereby providing practitioners with a robust framework for auditing model fairness while avoiding false conclusions due to estimation errors. Our experiments spanning tabular (e.g., Adult), image (CelebA), and language (Jigsaw) datasets underscore that our approach not only reliably quantifies the optimum achievable trade-offs across various data modalities but also helps detect suboptimality in SOTA fairness methods. | Achievable Fairness on Your Data With Utility Guarantees | [
"Muhammad Faaiz Taufiq",
"Jean-Francois Ton",
"Yang Liu"
] | NeurIPS.cc/2024/Conference | 2402.17106 | [
"https://github.com/faaizt/datasetfairness"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GruuYVTGXV | @inproceedings{
li2024dual,
title={Dual Critic Reinforcement Learning under Partial Observability},
author={Jinqiu Li and Enmin Zhao and Tong Wei and Junliang Xing and Shiming Xiang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GruuYVTGXV}
} | Partial observability in environments poses significant challenges that impede the formation of effective policies in reinforcement learning. Prior research has shown that borrowing the complete state information can enhance sample efficiency. This strategy, however, frequently encounters unstable learning with high variance in practical applications due to the over-reliance on complete information. This paper introduces DCRL, a Dual Critic Reinforcement Learning framework designed to adaptively harness full-state information during training to reduce variance for optimized online performance. In particular, DCRL incorporates two distinct critics: an oracle critic with access to complete state information and a standard critic functioning within the partially observable context. It innovates a synergistic strategy to meld the strengths of the oracle critic for efficiency improvement and the standard critic for variance reduction, featuring a novel mechanism for seamless transition and weighting between them. We theoretically prove that DCRL mitigates the learning variance while maintaining unbiasedness. Extensive experimental analyses across the Box2D and Box3D environments have verified DCRL's superior performance. The source code is available in the supplementary. | Dual Critic Reinforcement Learning under Partial Observability | [
"Jinqiu Li",
"Enmin Zhao",
"Tong Wei",
"Junliang Xing",
"Shiming Xiang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Grd7yzFm5V | @inproceedings{
ling2024bayesian,
title={Bayesian Domain Adaptation with Gaussian Mixture Domain-Indexing},
author={Yanfang Ling and Jiyong Li and Lingbo Li and Shangsong Liang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Grd7yzFm5V}
} | Recent methods have been proposed to improve the performance of domain adaptation by inferring domain indices under an adversarial variational Bayesian framework, where domain indices are unavailable.
However, existing methods typically assume that the global domain indices are sampled from a vanilla Gaussian prior, overlooking the inherent structures among different domains.
To address this challenge, we propose a Bayesian Domain Adaptation with Gaussian Mixture Domain-Indexing (GMDI) algorithm.
GMDI employs a Gaussian Mixture Model for domain indices, with the number of component distributions in the ``domain-themes'' space adaptively determined by a Chinese Restaurant Process.
By dynamically adjusting the mixtures at the domain indices level, GMDI significantly improves domain adaptation performance.
Our theoretical analysis demonstrates that GMDI achieves a more stringent evidence lower bound, closer to the log-likelihood.
For classification, GMDI outperforms all approaches, and surpasses the state-of-the-art method, VDI, by up to 3.4%, reaching 99.3%.
For regression, GMDI reduces MSE by up to 21% (from 3.160 to 2.493), achieving the lowest errors among all methods. | Bayesian Domain Adaptation with Gaussian Mixture Domain-Indexing | [
"Yanfang Ling",
"Jiyong Li",
"Lingbo Li",
"Shangsong Liang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GrMczQGTlA | @inproceedings{
radosavovic2024humanoid,
title={Humanoid Locomotion as Next Token Prediction},
author={Ilija Radosavovic and Jathushan Rajasegaran and Baifeng Shi and Bike Zhang and Sarthak Kamat and Koushil Sreenath and Trevor Darrell and Jitendra Malik},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GrMczQGTlA}
} | We cast real-world humanoid control as a next token prediction problem, akin to predicting the next word in language. Our model is a causal transformer trained via autoregressive prediction of sensorimotor sequences. To account for the multi-modal nature of the data, we perform prediction in a modality-aligned way, and for each input token predict the next token from the same modality. This general formulation enables us to leverage data with missing modalities, such as videos without actions. We train our model on a dataset of sequences from prior neural network policies, model-based controllers, motion capture, and YouTube videos of humans. We show that our model enables a real humanoid robot to walk in San Francisco zero-shot. Our model can transfer to the real world even when trained on only 27 hours of walking data, and can generalize to commands not seen during training. These findings suggest a promising path toward learning challenging real-world control tasks by generative modeling of sensorimotor sequences. | Humanoid Locomotion as Next Token Prediction | [
"Ilija Radosavovic",
"Jathushan Rajasegaran",
"Baifeng Shi",
"Bike Zhang",
"Sarthak Kamat",
"Koushil Sreenath",
"Trevor Darrell",
"Jitendra Malik"
] | NeurIPS.cc/2024/Conference | 2402.19469 | [
""
] | https://huggingface.co/papers/2402.19469 | 4 | 26 | 2 | 8 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=GqrWhROxrG | @inproceedings{
xu2024mvsdet,
title={{MVSD}et: Multi-View Indoor 3D Object Detection via Efficient Plane Sweeps},
author={Yating Xu and Chen Li and Gim Hee Lee},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GqrWhROxrG}
} | The key challenge of multi-view indoor 3D object detection is to infer accurate geometry information from images for precise 3D detection. Previous method relies on NeRF for geometry reasoning. However, the geometry extracted from NeRF is generally inaccurate, which leads to sub-optimal detection performance. In this paper, we propose MVSDet which utilizes plane sweep for geometry-aware 3D object detection. To circumvent the requirement for a large number of depth planes for accurate depth prediction, we design a probabilistic sampling and soft weighting mechanism to decide the placement of pixel features on the 3D volume. We select multiple locations that score top in the probability volume for each pixel and use their probability score to indicate the confidence. We further apply recent pixel-aligned Gaussian Splatting to regularize depth prediction and improve detection performance with little computation overhead. Extensive experiments on ScanNet and ARKitScenes datasets are conducted to show the superiority of our model. Our code is available at https://github.com/Pixie8888/MVSDet. | MVSDet: Multi-View Indoor 3D Object Detection via Efficient Plane Sweeps | [
"Yating Xu",
"Chen Li",
"Gim Hee Lee"
] | NeurIPS.cc/2024/Conference | 2410.21566 | [
"https://github.com/pixie8888/mvsdet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Gqou8PRgWq | @inproceedings{
he2024shed,
title={{SHED}: Shapley-Based Automated Dataset Refinement for Instruction Fine-Tuning},
author={Yexiao He and Ziyao Wang and Zheyu Shen and Guoheng Sun and Yucong Dai and Yongkai Wu and Hongyi Wang and Ang Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Gqou8PRgWq}
} | The pre-trained Large Language Models (LLMs) can be adapted for many downstream tasks and tailored to align with human preferences through fine-tuning. Recent studies have discovered that LLMs can achieve desirable performance with only a small amount of high-quality data, suggesting that a large portion of the data in these extensive datasets is redundant or even harmful. Identifying high-quality data from vast datasets to curate small yet effective datasets has emerged as a critical challenge. In this paper, we introduce SHED, an automated dataset refinement framework based on Shapley value for instruction fine-tuning. SHED eliminates the need for human intervention or the use of commercial LLMs. Moreover, the datasets curated through SHED exhibit transferability, indicating they can be reused across different LLMs with consistently high performance. We conduct extensive experiments to evaluate the datasets curated by SHED. The results demonstrate SHED's superiority over state-of-the-art methods across various tasks and LLMs; notably, datasets comprising only 10% of the original data selected by SHED achieve performance comparable to or surpassing that of the full datasets. | SHED: Shapley-Based Automated Dataset Refinement for Instruction Fine-Tuning | [
"Yexiao He",
"Ziyao Wang",
"Zheyu Shen",
"Guoheng Sun",
"Yucong Dai",
"Yongkai Wu",
"Hongyi Wang",
"Ang Li"
] | NeurIPS.cc/2024/Conference | 2405.00705 | [
"https://github.com/lucidreamer9/shed-shapley-based-automated-dataset-refinement"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GqefKjw1OR | @inproceedings{
b{\"o}ck2024sparse,
title={Sparse Bayesian Generative Modeling for Compressive Sensing},
author={Benedikt B{\"o}ck and Sadaf Syed and Wolfgang Utschick},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GqefKjw1OR}
} | This work addresses the fundamental linear inverse problem in compressive sensing (CS) by introducing a new type of regularizing generative prior. Our proposed method utilizes ideas from classical dictionary-based CS and, in particular, sparse Bayesian learning (SBL), to integrate a strong regularization towards sparse solutions. At the same time, by leveraging the notion of conditional Gaussianity, it also incorporates the adaptability from generative models to training data. However, unlike most state-of-the-art generative models, it is able to learn from a few compressed and noisy data samples and requires no optimization algorithm for solving the inverse problem. Additionally, similar to Dirichlet prior networks, our model parameterizes a conjugate prior enabling its application for uncertainty quantification. We support our approach theoretically through the concept of variational inference and validate it empirically using different types of compressible signals. | Sparse Bayesian Generative Modeling for Compressive Sensing | [
"Benedikt Böck",
"Sadaf Syed",
"Wolfgang Utschick"
] | NeurIPS.cc/2024/Conference | 2411.09483 | [
"https://github.com/beneboeck/sparse-bayesian-gen-mod"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GproaSYZk5 | @inproceedings{
petrov2024universal,
title={Universal In-Context Approximation By Prompting Fully Recurrent Models},
author={Aleksandar Petrov and Tom A. Lamb and Alasdair Paren and Philip Torr and Adel Bibi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GproaSYZk5}
} | Zero-shot and in-context learning enable solving tasks without model fine-tuning, making them essential for developing generative model solutions. Therefore, it is crucial to understand whether a pretrained model can be prompted to approximate any function, i.e., whether it is a universal in-context approximator. While it was recently shown that transformer models do possess this property, these results rely on their attention mechanism. Hence, these findings do not apply to fully recurrent architectures like RNNs, LSTMs, and the increasingly popular SSMs. We demonstrate that RNNs, LSTMs, GRUs, Linear RNNs, and linear gated architectures such as Mamba and Hawk/Griffin can also serve as universal in-context approximators. To streamline our argument, we introduce a programming language called LSRL that compiles to these fully recurrent architectures. LSRL may be of independent interest for further studies of fully recurrent models, such as constructing interpretability benchmarks. We also study the role of multiplicative gating and observe that architectures incorporating such gating (e.g., LSTMs, GRUs, Hawk/Griffin) can implement certain operations more stably, making them more viable candidates for practical in-context universal approximation. | Universal In-Context Approximation By Prompting Fully Recurrent Models | [
"Aleksandar Petrov",
"Tom A. Lamb",
"Alasdair Paren",
"Philip Torr",
"Adel Bibi"
] | NeurIPS.cc/2024/Conference | 2406.01424 | [
"https://github.com/aleksandarpetrov/lsrl"
] | https://huggingface.co/papers/2406.01424 | 0 | 0 | 1 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=GnaFrZRHPf | @inproceedings{
hong2024adaptive,
title={Adaptive Preference Scaling for Reinforcement Learning with Human Feedback},
author={Ilgee Hong and Zichong Li and Alexander Bukharin and Yixiao Li and Haoming Jiang and Tianbao Yang and Tuo Zhao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GnaFrZRHPf}
} | Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values by learning rewards from human preference data. Due to various reasons, however, such data typically takes the form of rankings over pairs of trajectory segments, which fails to capture the varying strengths of preferences across different pairs. In this paper, we propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO), designed to address this uncertainty in preference strength. By incorporating an adaptive scaling parameter into the loss for each pair, our method increases the flexibility of the reward function. Specifically, it assigns small scaling parameters to pairs with ambiguous preferences, leading to more comparable rewards, and large scaling parameters to those with clear preferences for more distinct rewards. Computationally, our proposed loss function is strictly convex and univariate with respect to each scaling parameter, enabling its efficient optimization through a simple second-order algorithm. Our method is versatile and can be readily adapted to various preference optimization frameworks, including direct preference optimization (DPO). Our experiments with robotic control and natural language generation with large language models (LLMs) show that our method not only improves policy performance but also aligns reward function selection more closely with policy optimization, simplifying the hyperparameter tuning process. | Adaptive Preference Scaling for Reinforcement Learning with Human Feedback | [
"Ilgee Hong",
"Zichong Li",
"Alexander Bukharin",
"Yixiao Li",
"Haoming Jiang",
"Tianbao Yang",
"Tuo Zhao"
] | NeurIPS.cc/2024/Conference | 2406.02764 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GnF9tavqgc | @inproceedings{
ren2024physical,
title={Physical Consistency Bridges Heterogeneous Data in Molecular Multi-Task Learning},
author={Yuxuan Ren and Dihan Zheng and Chang Liu and Peiran Jin and Yu Shi and Lin Huang and Jiyan He and Shengjie Luo and Tao Qin and Tie-Yan Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GnF9tavqgc}
} | In recent years, machine learning has demonstrated impressive capability in handling molecular science tasks. To support various molecular properties at scale, machine learning models are trained in the multi-task learning paradigm. Nevertheless, data of different molecular properties are often not aligned: some quantities, e.g. equilibrium structure, demand more cost to compute than others, e.g. energy, so their data are often generated by cheaper computational methods at the cost of lower accuracy, which cannot be directly overcome through multi-task learning. Moreover, it is not straightforward to leverage abundant data of other tasks to benefit a particular task. To handle such data heterogeneity challenges, we exploit the specialty of molecular tasks that there are physical laws connecting them, and design consistency training approaches that allow different tasks to exchange information directly so as to improve one another. Particularly, we demonstrate that the more accurate energy data can improve the accuracy of structure prediction. We also find that consistency training can directly leverage force and off-equilibrium structure data to improve structure prediction, demonstrating a broad capability for integrating heterogeneous data. | Physical Consistency Bridges Heterogeneous Data in Molecular Multi-Task Learning | [
"Yuxuan Ren",
"Dihan Zheng",
"Chang Liu",
"Peiran Jin",
"Yu Shi",
"Lin Huang",
"Jiyan He",
"Shengjie Luo",
"Tao Qin",
"Tie-Yan Liu"
] | NeurIPS.cc/2024/Conference | 2410.10118 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GnAfyR8AhC | @inproceedings{
oh2024towards,
title={Towards Calibrated Robust Fine-Tuning of Vision-Language Models},
author={Changdae Oh and Hyesu Lim and Mijoo Kim and Dongyoon Han and Sangdoo Yun and Jaegul Choo and Alexander G Hauptmann and Zhi-Qi Cheng and Kyungwoo Song},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GnAfyR8AhC}
} | Improving out-of-distribution (OOD) generalization during in-distribution (ID) adaptation is a primary goal of robust fine-tuning of zero-shot models beyond naive fine-tuning. However, despite decent OOD generalization performance from recent robust fine-tuning methods, confidence calibration for reliable model output has not been fully addressed. This work proposes a robust fine-tuning method that improves both OOD accuracy and confidence calibration simultaneously in vision language models. Firstly, we show that both OOD classification and OOD calibration errors have a shared upper bound consisting of two terms of ID data: 1) ID calibration error and 2) the smallest singular value of the ID input covariance matrix. Based on this insight, we design a novel framework that conducts fine-tuning with a constrained multimodal contrastive loss enforcing a larger smallest singular value, which is further guided by the self-distillation of a moving-averaged model to achieve calibrated prediction as well. Starting from empirical evidence supporting our theoretical statements, we provide extensive experimental results on ImageNet distribution shift benchmarks that demonstrate the effectiveness of our theorem and its practical implementation. | Towards Calibrated Robust Fine-Tuning of Vision-Language Models | [
"Changdae Oh",
"Hyesu Lim",
"Mijoo Kim",
"Dongyoon Han",
"Sangdoo Yun",
"Jaegul Choo",
"Alexander G Hauptmann",
"Zhi-Qi Cheng",
"Kyungwoo Song"
] | NeurIPS.cc/2024/Conference | 2311.01723 | [
"https://github.com/MLAI-Yonsei/CaRot"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GmdGEF8xxU | @inproceedings{
zheng2024what,
title={What Is Missing For Graph Homophily? Disentangling Graph Homophily For Graph Neural Networks},
author={Yilun Zheng and Sitao Luan and Lihui Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GmdGEF8xxU}
} | Graph homophily refers to the phenomenon that connected nodes tend to share similar characteristics. Understanding this concept and its related metrics is crucial for designing effective Graph Neural Networks (GNNs). The most widely used homophily metrics, such as edge or node homophily, quantify such "similarity" as label consistency across the graph topology. These metrics are believed to be able to reflect the performance of GNNs, especially on node-level tasks. However, many recent studies have empirically demonstrated that the performance of GNNs does not always align with homophily metrics, and how homophily influences GNNs still remains unclear and controversial. Then, a crucial question arises: What is missing in our current understanding of homophily? To figure out the missing part, in this paper, we disentangle the graph homophily into three aspects: label, structural, and feature homophily, which are derived from the three basic elements of graph data. We argue that the synergy of the three homophily metrics can provide a more comprehensive understanding of GNN performance. Our newly proposed structural and feature homophily consider the neighborhood consistency and feature dependencies among nodes, addressing the previously overlooked structural and feature aspects in graph homophily. To investigate their synergy, we propose a Contextual Stochastic Block Model with three types of Homophily (CSBM-3H), where the topology and feature generation are controlled by the three metrics. Based on the theoretical analysis of CSBM-3H, we derive a new composite metric, named Tri-Hom, that considers all three aspects and overcomes the limitations of conventional homophily metrics. The theoretical conclusions and the effectiveness of Tri-Hom have been verified through synthetic experiments on CSBM-3H. In addition, we conduct experiments on $31$ real-world benchmark datasets and calculate the correlations between homophily metrics and model performance. 
Tri-Hom has significantly higher correlation values than $17$ existing metrics that only focus on a single homophily aspect, demonstrating its superiority and the importance of homophily synergy. Our code is available at https://github.com/zylMozart/Disentangle_GraphHom. | What Is Missing For Graph Homophily? Disentangling Graph Homophily For Graph Neural Networks | [
"Yilun Zheng",
"Sitao Luan",
"Lihui Chen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Glt37xoU7e | @inproceedings{
luo2024omnigrasp,
title={Omnigrasp: Simulated Humanoid Grasping on Diverse Objects},
author={Zhengyi Luo and Jinkun Cao and Sammy Christen and Alexander Winkler and Kris M. Kitani and Weipeng Xu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Glt37xoU7e}
} | We present a method for controlling a simulated humanoid to grasp an object and move it to follow an object's trajectory. Due to the challenges in controlling a humanoid with dexterous hands, prior methods often use a disembodied hand and only consider vertical lifts or short trajectories. This limited scope hampers their applicability for object manipulation required for animation and simulation. To close this gap, we learn a controller that can pick up a large number (>1200) of objects and carry them to follow randomly generated trajectories. Our key insight is to leverage a humanoid motion representation that provides human-like motor skills and significantly speeds up training. Using only simplistic reward, state, and object representations, our method shows favorable scalability on diverse objects and trajectories. For training, we do not need a dataset of paired full-body motion and object trajectories. At test time, we only require the object mesh and desired trajectories for grasping and transporting. To demonstrate the capabilities of our method, we show state-of-the-art success rates in following object trajectories and generalizing to unseen objects. Code and models will be released. | Omnigrasp: Simulated Humanoid Grasping on Diverse Objects | [
"Zhengyi Luo",
"Jinkun Cao",
"Sammy Christen",
"Alexander Winkler",
"Kris M. Kitani",
"Weipeng Xu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GlXUxNI6TN | @inproceedings{
marinescu2024abductive,
title={Abductive Reasoning in Logical Credal Networks},
author={Radu Marinescu and Junkyu Lee and Debarun Bhattacharjya and Fabio Cozman and Alexander G. Gray},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GlXUxNI6TN}
} | Logical Credal Networks or LCNs were recently introduced as a powerful probabilistic logic framework for representing and reasoning with imprecise knowledge. Unlike many existing formalisms, LCNs have the ability to represent cycles and allow specifying marginal and conditional probability bounds on logic formulae which may be important in many realistic scenarios. Previous work on LCNs has focused exclusively on marginal inference, namely computing posterior lower and upper probability bounds on a query formula. In this paper, we explore abductive reasoning tasks such as solving MAP and Marginal MAP queries in LCNs given some evidence. We first formally define the MAP and Marginal MAP tasks for LCNs and subsequently show how to solve these tasks exactly using search-based approaches. We then propose several approximate schemes that allow us to scale MAP and Marginal MAP inference to larger problem instances. An extensive empirical evaluation demonstrates the effectiveness of our algorithms on both random LCN instances as well as LCNs derived from more realistic use-cases. | Abductive Reasoning in Logical Credal Networks | [
"Radu Marinescu",
"Junkyu Lee",
"Debarun Bhattacharjya",
"Fabio Cozman",
"Alexander G. Gray"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GlD9Juva5V | @inproceedings{
lei2024songcreator,
title={SongCreator: Lyrics-based Universal Song Generation},
author={Shun Lei and Yixuan Zhou and Boshi Tang and Max W. Y. Lam and Feng liu and Hangyu Liu and Jingcheng Wu and Shiyin Kang and Zhiyong Wu and Helen M. Meng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GlD9Juva5V}
} | Music is an integral part of human culture, embodying human intelligence and creativity, of which songs compose an essential part. While various aspects of song generation have been explored by previous works, such as singing voice, vocal composition and instrumental arrangement, etc., generating songs with both vocals and accompaniment given lyrics remains a significant challenge, hindering the application of music generation models in the real world. In this light, we propose SongCreator, a song-generation system designed to tackle this challenge. The model features two novel designs: a meticulously designed dual-sequence language model (DSLM) to capture the information of vocals and accompaniment for song generation, and a series of attention mask strategies for DSLM, which allows our model to understand, generate and edit songs, making it suitable for various song-related generation tasks by utilizing specific attention masks. Extensive experiments demonstrate the effectiveness of SongCreator by achieving state-of-the-art or competitive performances on all eight tasks. Notably, it surpasses previous works by a large margin in lyrics-to-song and lyrics-to-vocals. Additionally, it is able to independently control the acoustic conditions of the vocals and accompaniment in the generated song through different audio prompts, exhibiting its potential applicability. Our samples are available at https://thuhcsi.github.io/SongCreator/. | SongCreator: Lyrics-based Universal Song Generation | [
"Shun Lei",
"Yixuan Zhou",
"Boshi Tang",
"Max W. Y. Lam",
"Feng liu",
"Hangyu Liu",
"Jingcheng Wu",
"Shiyin Kang",
"Zhiyong Wu",
"Helen M. Meng"
] | NeurIPS.cc/2024/Conference | 2409.06029 | [
"https://github.com/lucidrains/vector-quantize-pytorch"
] | https://huggingface.co/papers/2409.06029 | 6 | 20 | 2 | 10 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=GkzrVxs9LS | @inproceedings{
wang2024learning,
title={Learning Low-Rank Feature for Thorax Disease Classification},
author={Yancheng Wang and Rajeev Goel and Utkarsh Nath and Alvin C Silva and Teresa Wu and Yingzhen Yang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GkzrVxs9LS}
} | Deep neural networks, including Convolutional Neural Networks (CNNs) and Visual Transformers (ViT), have achieved stunning success in the medical image domain. We study thorax disease classification in this paper. Effective extraction of features for the disease areas is crucial for disease classification on radiographic images. While various neural architectures and training techniques, such as self-supervised learning with contrastive/restorative learning, have been employed for disease classification on radiographic images, there are no principled methods that can effectively reduce the adverse effect of noise and background or non-disease areas on the radiographic images for disease classification. To address this challenge, we propose a novel Low-Rank Feature Learning (LRFL) method in this paper, which is universally applicable to the training of all neural networks. The LRFL method is both empirically motivated by a Low Frequency Property (LFP) and theoretically motivated by our sharp generalization bound for neural networks with low-rank features. LFP not only widely exists in deep neural networks for generic machine learning but also exists in all the thorax medical datasets studied in this paper. In the empirical study, using a neural network such as a ViT or a CNN pre-trained on unlabeled chest X-rays by Masked Autoencoders (MAE), our novel LRFL method is applied on the pre-trained neural network and demonstrates better classification results in terms of both multi-class area under the receiver operating curve (mAUC) and classification accuracy than the current state-of-the-art. The code is available at https://github.com/Statistical-Deep-Learning/LRFL. | Learning Low-Rank Feature for Thorax Disease Classification | [
"Yancheng Wang",
"Rajeev Goel",
"Utkarsh Nath",
"Alvin C Silva",
"Teresa Wu",
"Yingzhen Yang"
] | NeurIPS.cc/2024/Conference | 2404.18933 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GkJbXpd3wM | @inproceedings{
nguyen2024active,
title={Active Set Ordering},
author={Quoc Phong Nguyen and Sunil Gupta and Svetha Venkatesh and Bryan Kian Hsiang Low and Patrick Jaillet},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GkJbXpd3wM}
} | In this paper, we formalize the active set ordering problem, which involves actively discovering a set of inputs based on their orderings determined by expensive evaluations of a blackbox function. We then propose the mean prediction (MP) algorithm and theoretically analyze it in terms of the regret of predicted pairwise orderings between inputs. Notably, as a special case of this framework, we can cast Bayesian optimization as an active set ordering problem by recognizing that maximizers can be identified solely by comparison rather than by precisely estimating the function evaluations. As a result, we are able to construct the popular Gaussian process upper confidence bound (GP-UCB) algorithm through the lens of ordering with several nuanced insights. We empirically validate the performance of our proposed solution using various synthetic functions and real-world datasets. | Active Set Ordering | [
"Quoc Phong Nguyen",
"Sunil Gupta",
"Svetha Venkatesh",
"Bryan Kian Hsiang Low",
"Patrick Jaillet"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GkHXBasQwm | @inproceedings{
xue2024hoiswap,
title={{HOI}-Swap: Swapping Objects in Videos with Hand-Object Interaction Awareness},
author={Zihui Xue and Mi Luo and Changan Chen and Kristen Grauman},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GkHXBasQwm}
} | We study the problem of precisely swapping objects in videos, with a focus on those interacted with by hands, given one user-provided reference object image. Despite the great advancements that diffusion models have made in video editing recently, these models often fall short in handling the intricacies of hand-object interactions (HOI), failing to produce realistic edits---especially when object swapping results in object shape or functionality changes. To bridge this gap, we present HOI-Swap, a novel diffusion-based video editing framework trained in a self-supervised manner. Designed in two stages, the first stage focuses on object swapping in a single frame with HOI awareness; the model learns to adjust the interaction patterns, such as the hand grasp, based on changes in the object's properties. The second stage extends the single-frame edit across the entire sequence; we achieve controllable motion alignment with the original video by: (1) warping a new sequence from the stage-I edited frame based on sampled motion points and (2) conditioning video generation on the warped sequence. Comprehensive qualitative and quantitative evaluations demonstrate that HOI-Swap significantly outperforms existing methods, delivering high-quality video edits with realistic HOIs. | HOI-Swap: Swapping Objects in Videos with Hand-Object Interaction Awareness | [
"Zihui Xue",
"Mi Luo",
"Changan Chen",
"Kristen Grauman"
] | NeurIPS.cc/2024/Conference | 2406.07754 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Gi00NVru6n | @inproceedings{
yang2024corda,
title={Cor{DA}: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning},
author={Yibo Yang and Xiaojie Li and Zhongzhu Zhou and Shuaiwen Leon Song and Jianlong Wu and Liqiang Nie and Bernard Ghanem},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Gi00NVru6n}
} | Current parameter-efficient fine-tuning (PEFT) methods build adapters widely agnostic of the context of downstream task to learn, or the context of important knowledge to maintain. As a result, there is often a performance gap compared to full-parameter fine-tuning, and meanwhile the fine-tuned model suffers from catastrophic forgetting of the pre-trained world knowledge. In this paper, we propose **CorDA**, a Context-oriented Decomposition Adaptation method that builds learnable **task-aware adapters** from weight decomposition oriented by the context of downstream task or the world knowledge to maintain. Concretely, we collect a few data samples, and perform singular value decomposition for each linear layer of a pre-trained LLM multiplied by the covariance matrix of the input activation using these samples. The inverse of the covariance matrix is multiplied with the decomposed components to reconstruct the original weights. By doing so, the context of the representative samples is captured through deciding the factorizing orientation. Our method enables two options, the **knowledge-preserved adaptation** and the **instruction-previewed adaptation**. For the former, we use question-answering samples to obtain the covariance matrices, and use the decomposed components with the smallest $r$ singular values to initialize a learnable adapter, with the others frozen such that the world knowledge is better preserved. For the latter, we use the instruction data from the fine-tuning task, such as math or coding, to orientate the decomposition and train the largest $r$ components that most correspond to the task to learn. We conduct extensive experiments on Math, Code, and Instruction Following tasks. Our knowledge-preserved adaptation not only achieves better performance than LoRA on fine-tuning tasks, but also mitigates the forgetting of world knowledge. 
Our instruction-previewed adaptation is able to further enhance the fine-tuning performance to be comparable with full fine-tuning, surpassing
the state-of-the-art PEFT methods such as LoRA, DoRA, and PiSSA. | CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning | [
"Yibo Yang",
"Xiaojie Li",
"Zhongzhu Zhou",
"Shuaiwen Leon Song",
"Jianlong Wu",
"Liqiang Nie",
"Bernard Ghanem"
] | NeurIPS.cc/2024/Conference | 2406.05223 | [
"https://github.com/iboing/corda"
] | https://huggingface.co/papers/2406.05223 | 2 | 3 | 0 | 7 | [
"iboing/CorDA_IPA_math_finetuned_math",
"iboing/CorDA_KPA_nqopen_finetuned_math"
] | [] | [] | [
"iboing/CorDA_IPA_math_finetuned_math",
"iboing/CorDA_KPA_nqopen_finetuned_math"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=GhqdnLZMAz | @inproceedings{
sun2024improving,
title={Improving Decision Sparsity},
author={Yiyang Sun and Tong Wang and Cynthia Rudin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GhqdnLZMAz}
} | Sparsity is a central aspect of interpretability in machine learning. Typically, sparsity is measured in terms of the size of a model globally, such as the number of variables it uses. However, this notion of sparsity is not particularly relevant for decision making; someone subjected to a decision does not care about variables that do not contribute to the decision. In this work, we dramatically expand a notion of *decision sparsity* called the *Sparse Explanation Value* (SEV) so that its explanations are more meaningful. SEV considers movement along a hypercube towards a reference point. By allowing flexibility in that reference and by considering how distances along the hypercube translate to distances in feature space, we can derive sparser and more meaningful explanations for various types of function classes. We present cluster-based SEV and its variant tree-based SEV, introduce a method that improves credibility of explanations, and propose algorithms that optimize decision sparsity in machine learning models. | Improving Decision Sparsity | [
"Yiyang Sun",
"Tong Wang",
"Cynthia Rudin"
] | NeurIPS.cc/2024/Conference | 2410.20483 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GgV6UczIWM | @inproceedings{
rende2024a,
title={A distributional simplicity bias in the learning dynamics of transformers},
author={Riccardo Rende and Federica Gerace and Alessandro Laio and Sebastian Goldt},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GgV6UczIWM}
} | The remarkable capability of over-parameterised neural networks to generalise effectively has been explained by invoking a ``simplicity bias'': neural networks prevent overfitting by initially learning simple classifiers before progressing to more complex, non-linear functions. While simplicity biases have been described theoretically and experimentally in feed-forward networks for supervised learning, the extent to which they also explain the remarkable success of transformers trained with self-supervised techniques remains unclear. In our study, we demonstrate that transformers, trained on natural language data, also display a simplicity bias. Specifically, they sequentially learn many-body interactions among input tokens, reaching a saturation point in the prediction error for low-degree interactions while continuing to learn high-degree interactions. To conduct this analysis, we develop a procedure to generate \textit{clones} of a given natural language data set, which rigorously capture the interactions between tokens up to a specified order. This approach opens up the possibilities of studying how interactions of different orders in the data affect learning, in natural language processing and beyond. | A distributional simplicity bias in the learning dynamics of transformers | [
"Riccardo Rende",
"Federica Gerace",
"Alessandro Laio",
"Sebastian Goldt"
] | NeurIPS.cc/2024/Conference | 2410.19637 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GgIJeoSLjQ | @inproceedings{
hu2024continuous,
title={Continuous Heatmap Regression for Pose Estimation via Implicit Neural Representation},
author={Shengxiang Hu and Huaijiang Sun and Dong Wei and Xiaoning Sun and Jin Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GgIJeoSLjQ}
} | Heatmap regression has dominated human pose estimation due to its superior performance and strong generalization. To meet the requirements of traditional explicit neural networks for output form, existing heatmap-based methods discretize the originally continuous heatmap representation into 2D pixel arrays, which leads to performance degradation due to the introduction of quantization errors. This problem is significantly exacerbated as the size of the input image decreases, which makes heatmap-based methods not much better than coordinate regression on low-resolution images. In this paper, we propose a novel neural representation for human pose estimation called NerPE to achieve continuous heatmap regression. Given any position within the image range, NerPE regresses the corresponding confidence scores for body joints according to the surrounding image features, which guarantees continuity in space and confidence during training. Thanks to the decoupling from spatial resolution, NerPE can output the predicted heatmaps at arbitrary resolution during inference without retraining, which easily achieves sub-pixel localization precision. To reduce the computational cost, we design progressive coordinate decoding to cooperate with continuous heatmap regression, in which localization no longer requires the complete generation of high-resolution heatmaps. The code is available at https://github.com/hushengxiang/NerPE. | Continuous Heatmap Regression for Pose Estimation via Implicit Neural Representation | [
"Shengxiang Hu",
"Huaijiang Sun",
"Dong Wei",
"Xiaoning Sun",
"Jin Wang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GeE5qF6ICg | @inproceedings{
goupy2024neuronal,
title={Neuronal Competition Groups with Supervised {STDP} for Spike-Based Classification},
author={Gaspard Goupy and Pierre Tirilly and Ioan Marius Bilasco},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GeE5qF6ICg}
} | Spike Timing-Dependent Plasticity (STDP) is a promising substitute to backpropagation for local training of Spiking Neural Networks (SNNs) on neuromorphic hardware. STDP allows SNNs to address classification tasks by combining unsupervised STDP for feature extraction and supervised STDP for classification. Unsupervised STDP is usually employed with Winner-Takes-All (WTA) competition to learn distinct patterns. However, WTA for supervised STDP classification faces unbalanced competition challenges. In this paper, we propose a method to effectively implement WTA competition in a spiking classification layer employing first-spike coding and supervised STDP training. We introduce the Neuronal Competition Group (NCG), an architecture that improves classification capabilities by promoting the learning of various patterns per class. An NCG is a group of neurons mapped to a specific class, implementing intra-class WTA and a novel competition regulation mechanism based on two-compartment thresholds. We incorporate our proposed architecture into spiking classification layers trained with state-of-the-art supervised STDP rules. On top of two different unsupervised feature extractors, we obtain significant accuracy improvements on image recognition datasets such as CIFAR-10 and CIFAR-100. We show that our competition regulation mechanism is crucial for ensuring balanced competition and improved class separation. | Neuronal Competition Groups with Supervised STDP for Spike-Based Classification | [
"Gaspard Goupy",
"Pierre Tirilly",
"Ioan Marius Bilasco"
] | NeurIPS.cc/2024/Conference | 2410.17066 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Gcks157FI3 | @inproceedings{
chen2024meshxl,
title={Mesh{XL}: Neural Coordinate Field for Generative 3D Foundation Models},
author={Sijin Chen and Xin Chen and Anqi Pang and Xianfang Zeng and Wei Cheng and Yijun Fu and Fukun Yin and Zhibin Wang and Jingyi Yu and Gang Yu and BIN FU and Tao Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Gcks157FI3}
} | The polygon mesh representation of 3D data exhibits great flexibility, fast rendering speed, and storage efficiency, which is widely preferred in various applications. However, given its unstructured graph representation, the direct generation of high-fidelity 3D meshes is challenging. Fortunately, with a pre-defined ordering strategy, 3D meshes can be represented as sequences, and the generation process can be seamlessly treated as an auto-regressive problem. In this paper, we validate Neural Coordinate Field (NeurCF), an explicit coordinate representation with implicit neural embeddings, is a simple-yet-effective representation for large-scale sequential mesh modeling. After that, we present MeshXL, a family of generative pre-trained auto-regressive models that addresses 3D mesh generation with modern large language model approaches. Extensive experiments show that MeshXL is able to generate high-quality 3D meshes, and can also serve as foundation models for various down-stream applications. | MeshXL: Neural Coordinate Field for Generative 3D Foundation Models | [
"Sijin Chen",
"Xin Chen",
"Anqi Pang",
"Xianfang Zeng",
"Wei Cheng",
"Yijun Fu",
"Fukun Yin",
"Zhibin Wang",
"Jingyi Yu",
"Gang Yu",
"BIN FU",
"Tao Chen"
] | NeurIPS.cc/2024/Conference | 2405.20853 | [
"https://github.com/openmeshlab/meshxl"
] | https://huggingface.co/papers/2405.20853 | 3 | 0 | 0 | 14 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=GcZgo9ffGt | @inproceedings{
shi2024instruction,
title={Instruction Tuning With Loss Over Instructions},
author={Zhengyan Shi and Adam X. Yang and Bin Wu and Laurence Aitchison and Emine Yilmaz and Aldo Lipani},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GcZgo9ffGt}
} | Instruction tuning plays a crucial role in shaping the outputs of language models (LMs) to desired styles. In this work, we propose a simple yet effective method, Instruction Modelling (IM), which trains LMs by applying a loss function to the instruction and prompt part rather than solely to the output part. Through experiments across 21 diverse benchmarks, we show that, in many scenarios, IM can effectively improve the LM performance on both NLP tasks (*e.g.,* MMLU, TruthfulQA, and HumanEval) and open-ended generation benchmarks (*e.g.,* MT-Bench and AlpacaEval). Remarkably, in the most advantageous case, IM boosts model performance on AlpacaEval 1.0 by over 100%. We identify two key factors influencing the effectiveness of IM: (1) The ratio between instruction length and output length in the training data; and (2) The number of training examples. We observe that IM is especially beneficial when trained on datasets with lengthy instructions paired with brief outputs, or under the Superficial Alignment Hypothesis (SAH) where a small amount of training examples are used for instruction tuning. Further analysis substantiates our hypothesis that our improvement can be attributed to reduced overfitting to instruction tuning datasets. It is worth noting that we are not proposing \ours as a replacement for the current instruction tuning process.
Instead, our work aims to provide practical guidance for instruction tuning LMs, especially in low-resource scenarios.
Our code is available at https://github.com/ZhengxiangShi/InstructionModelling. | Instruction Tuning With Loss Over Instructions | [
"Zhengyan Shi",
"Adam X. Yang",
"Bin Wu",
"Laurence Aitchison",
"Emine Yilmaz",
"Aldo Lipani"
] | NeurIPS.cc/2024/Conference | 2405.14394 | [
"https://github.com/zhengxiangshi/instructionmodelling"
] | https://huggingface.co/papers/2405.14394 | 0 | 1 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=GbqzN9HiUC | @inproceedings{
molinaro2024latent,
title={Latent Learning Progress Drives Autonomous Goal Selection in Human Reinforcement Learning},
author={Gaia Molinaro and C{\'e}dric Colas and Pierre-Yves Oudeyer and Anne Collins},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GbqzN9HiUC}
} | Humans are autotelic agents who learn by setting and pursuing their own goals. However, the precise mechanisms guiding human goal selection remain unclear. Learning progress, typically measured as the observed change in performance, can provide a valuable signal for goal selection in both humans and artificial agents. We hypothesize that human choices of goals may also be driven by _latent learning progress_, which humans can estimate through knowledge of their actions and the environment – even without experiencing immediate changes in performance. To test this hypothesis, we designed a hierarchical reinforcement learning task in which human participants (N = 175) repeatedly chose their own goals and learned goal-conditioned policies. Our behavioral and computational modeling results confirm the influence of latent learning progress on goal selection and uncover inter-individual differences, partially mediated by recognition of the task's hierarchical structure. By investigating the role of latent learning progress in human goal selection, we pave the way for more effective and personalized learning experiences as well as the advancement of more human-like autotelic machines. | Latent Learning Progress Drives Autonomous Goal Selection in Human Reinforcement Learning | [
"Gaia Molinaro",
"Cédric Colas",
"Pierre-Yves Oudeyer",
"Anne Collins"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Gb0mXhn5h3 | @inproceedings{
minai2024miso,
title={Mi{SO}: Optimizing brain stimulation to create neural activity states},
author={Yuki Minai and Joana Soldado-Magraner and Matthew A. Smith and Byron M. Yu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Gb0mXhn5h3}
} | Brain stimulation has the potential to create desired neural population activity states. However, it is challenging to search the large space of stimulation parameters, for example, selecting which subset of electrodes to be used for stimulation. In this scenario, creating a model that maps the configuration of stimulation parameters to the brain’s response can be beneficial. Training such an expansive model usually requires more stimulation-response samples than can be collected in a given experimental session. Furthermore, changes in the properties of the recorded activity over time can make it challenging to merge stimulation-response samples across sessions. To address these challenges, we propose MiSO (MicroStimulation Optimization), a closed-loop stimulation framework to drive neural population activity toward specified states by optimizing over a large stimulation parameter space. MiSO consists of three key components: 1) a neural activity alignment method to merge stimulation-response samples across sessions, 2) a statistical model trained on the merged samples to predict the brain's response to untested stimulation parameter configurations, and 3) an online optimization algorithm to adaptively update the stimulation parameter configuration based on the model's predictions. In this study, we implemented MiSO with a factor analysis (FA) based alignment method, a convolutional neural network (CNN), and an epsilon greedy optimization algorithm. We tested MiSO in closed-loop experiments using electrical microstimulation in the prefrontal cortex of a non-human primate. Guided by the CNN predictions, MiSO successfully searched amongst thousands of stimulation parameter configurations to drive the neural population activity toward specified states. More broadly, MiSO increases the clinical viability of neuromodulation technologies by enabling the use of many-fold larger stimulation parameter spaces. 
| MiSO: Optimizing brain stimulation to create neural activity states | [
"Yuki Minai",
"Joana Soldado-Magraner",
"Matthew A. Smith",
"Byron M. Yu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GZoAUVSkaw | @inproceedings{
yang2024firstorder,
title={First-Order Minimax Bilevel Optimization},
author={Yifan Yang and Zhaofeng Si and Siwei Lyu and Kaiyi Ji},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GZoAUVSkaw}
} | Multi-block minimax bilevel optimization has been studied recently due to its great potential in multi-task learning, robust machine learning, and few-shot learning. However, due to the complex three-level optimization structure, existing algorithms often suffer from issues such as high computing costs due to the second-order model derivatives or high memory consumption in storing all blocks' parameters. In this paper, we tackle these challenges by proposing two novel fully first-order algorithms named FOSL and MemCS. FOSL features a fully single-loop structure by updating all three variables simultaneously, and MemCS is a memory-efficient double-loop algorithm with cold-start initialization. We provide a comprehensive convergence analysis for both algorithms under full and partial block participation, and show that their sample complexities match or outperform those of the same type of methods in standard bilevel optimization. We evaluate our methods in two applications: the recently proposed multi-task deep AUC maximization and a novel rank-based robust meta-learning. Our methods consistently improve over existing methods with better performance over various datasets. | First-Order Minimax Bilevel Optimization | [
"Yifan Yang",
"Zhaofeng Si",
"Siwei Lyu",
"Kaiyi Ji"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GZnsqBwHAG | @inproceedings{
peng2024navigating,
title={Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models},
author={ShengYun Peng and Pin-Yu Chen and Matthew Daniel Hull and Duen Horng Chau},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GZnsqBwHAG}
} | Safety alignment is crucial to ensure that large language models (LLMs) behave in ways that align with human preferences and prevent harmful actions during inference. However, recent studies show that the alignment can be easily compromised through finetuning with only a few adversarially designed training examples. We aim to measure the risks in finetuning LLMs through navigating the LLM safety landscape. We discover a new phenomenon observed universally in the model parameter space of popular open-source LLMs, termed as “safety basin”: random perturbations to model weights maintain the safety level of the original aligned model within its local neighborhood. However, outside this local region, safety is fully compromised, exhibiting a sharp, step-like drop. This safety basin contrasts sharply with the LLM capability landscape, where model performance peaks at the origin and gradually declines as random perturbation increases. Our discovery inspires us to propose the new VISAGE safety metric that measures the safety in LLM finetuning by probing its safety landscape. Visualizing the safety landscape of the aligned model enables us to understand how finetuning compromises safety by dragging the model away from the safety basin. The LLM safety landscape also highlights the system prompt’s critical role in protecting a model, and that such protection transfers to its perturbed variants within the safety basin. These observations from our safety landscape research provide new
insights for future work in the LLM safety community. Our code is publicly available at https://github.com/ShengYun-Peng/llm-landscape. | Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models | [
"ShengYun Peng",
"Pin-Yu Chen",
"Matthew Daniel Hull",
"Duen Horng Chau"
] | NeurIPS.cc/2024/Conference | 2405.17374 | [
"https://github.com/shengyun-peng/llm-landscape"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GYqs5Z4joA | @inproceedings{
guo2024spgesture,
title={SpGesture: Source-Free Domain-adaptive s{EMG}-based Gesture Recognition with Jaccard Attentive Spiking Neural Network},
author={Weiyu Guo and Ying Sun and Yijie Xu and Ziyue Qiao and Yongkui Yang and Hui Xiong},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GYqs5Z4joA}
} | Surface electromyography (sEMG) based gesture recognition offers a natural and intuitive interaction modality for wearable devices. Despite significant advancements in sEMG-based gesture recognition models, existing methods often suffer from high computational latency and increased energy consumption. Additionally, the inherent instability of sEMG signals, combined with their sensitivity to distribution shifts in real-world settings, compromises model robustness.
To tackle these challenges, we propose a novel SpGesture framework based on Spiking Neural Networks, which possesses several unique merits compared with existing methods: (1) Robustness: By utilizing membrane potential as a memory list, we pioneer the introduction of Source-Free Domain Adaptation into SNN for the first time. This enables SpGesture to mitigate the accuracy degradation caused by distribution shifts. (2) High Accuracy: With a novel Spiking Jaccard Attention, SpGesture enhances the SNNs' ability to represent sEMG features, leading to a notable rise in system accuracy. To validate SpGesture's performance, we collected a new sEMG gesture dataset which has different forearm postures, where SpGesture achieved the highest accuracy among the baselines ($89.26\%$). Moreover, the actual deployment on the CPU demonstrated a latency below 100ms, well within real-time requirements. This impressive performance showcases SpGesture's potential to enhance the applicability of sEMG in real-world scenarios. The code is available at https://github.com/guoweiyu/SpGesture/. | SpGesture: Source-Free Domain-adaptive sEMG-based Gesture Recognition with Jaccard Attentive Spiking Neural Network | [
"Weiyu Guo",
"Ying Sun",
"Yijie Xu",
"Ziyue Qiao",
"Yongkui Yang",
"Hui Xiong"
] | NeurIPS.cc/2024/Conference | 2405.14398 | [
"https://github.com/guoweiyu/spgesture"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GYd5AfZaor | @inproceedings{
kim2024sample,
title={Sample Selection via Contrastive Fragmentation for Noisy Label Regression},
author={Chris Dongjoo Kim and Sangwoo Moon and Jihwan Moon and Dongyeon Woo and Gunhee Kim},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GYd5AfZaor}
} | As with many other problems, real-world regression is plagued by the presence of noisy labels, an inevitable issue that demands our attention.
Fortunately, much real-world data often exhibits an intrinsic property of continuously ordered correlations between labels and features, where data points with similar labels are also represented with closely related features.
In response, we propose a novel approach named ConFrag, where we collectively model the regression data by transforming them into disjoint yet contrasting fragmentation pairs.
This enables the training of more distinctive representations, enhancing the ability to select clean samples.
Our ConFrag framework leverages a mixture of neighboring fragments to discern noisy labels through neighborhood agreement among expert feature extractors.
We extensively perform experiments on four newly curated benchmark datasets of diverse domains, including age prediction, price prediction, and music production year estimation.
We also introduce a metric called Error Residual Ratio (ERR) to better account for varying degrees of label noise.
Our approach consistently outperforms fourteen state-of-the-art baselines, being robust against symmetric and random Gaussian label noise. | Sample Selection via Contrastive Fragmentation for Noisy Label Regression | [
"Chris Dongjoo Kim",
"Sangwoo Moon",
"Jihwan Moon",
"Dongyeon Woo",
"Gunhee Kim"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GVlJVX3iiq | @inproceedings{
chen2024bridging,
title={Bridging Gaps: Federated Multi-View Clustering in Heterogeneous Hybrid Views},
author={Xinyue Chen and Yazhou Ren and Jie Xu and Fangfei Lin and Xiaorong Pu and Yang Yang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GVlJVX3iiq}
} | Recently, federated multi-view clustering (FedMVC) has emerged to explore cluster structures in multi-view data distributed on multiple clients. Many existing approaches tend to assume that clients are isomorphic and all of them belong to either single-view clients or multi-view clients. While these methods have succeeded, they may encounter challenges in practical FedMVC scenarios involving heterogeneous hybrid views, where a mixture of single-view and multi-view clients exhibit varying degrees of heterogeneity. In this paper, we propose a novel FedMVC framework, which concurrently addresses two challenges associated with heterogeneous hybrid views, i.e., client gap and view gap. To address the client gap, we design a local-synergistic contrastive learning approach that helps single-view clients and multi-view clients achieve consistency for mitigating heterogeneity among all clients. To address the view gap, we develop a global-specific weighting aggregation method, which encourages global models to learn complementary features from hybrid views. The interplay between local-synergistic contrastive learning and global-specific weighting aggregation mutually enhances the exploration of the data cluster structures distributed on multiple clients. Theoretical analysis and extensive experiments demonstrate that our method can handle the heterogeneous hybrid views in FedMVC and outperforms state-of-the-art methods. | Bridging Gaps: Federated Multi-View Clustering in Heterogeneous Hybrid Views | [
"Xinyue Chen",
"Yazhou Ren",
"Jie Xu",
"Fangfei Lin",
"Xiaorong Pu",
"Yang Yang"
] | NeurIPS.cc/2024/Conference | 2410.09484 | [
"https://github.com/5martina5/fmcsc"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GVgRbz8MvG | @inproceedings{
kumar2024nonparametric,
title={Nonparametric Evaluation of Noisy {ICA} Solutions},
author={Syamantak Kumar and Derek Bean and Peter Bickel and Purnamrita Sarkar},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GVgRbz8MvG}
} | Independent Component Analysis (ICA) was introduced in the 1980's as a model for Blind Source Separation (BSS), which refers to the process of recovering the sources underlying a mixture of signals, with little knowledge about the source signals or the mixing process. While there are many sophisticated algorithms for estimation, different methods have different shortcomings. In this paper, we develop a nonparametric score to adaptively pick the right algorithm for ICA with arbitrary Gaussian noise. The novelty of this score stems from the fact that it just assumes a finite second moment of the data and uses the characteristic function to evaluate the quality of the estimated mixing matrix without any knowledge of the parameters of the noise distribution. In addition, we propose some new contrast functions and algorithms that enjoy the same fast computability as existing algorithms like FASTICA and JADE but work in domains where the former may fail. While these also may have weaknesses, our proposed diagnostic, as shown by our simulations, can remedy them. Finally, we propose a theoretical framework to analyze the local and global convergence properties of our algorithms. | Nonparametric Evaluation of Noisy ICA Solutions | [
"Syamantak Kumar",
"Derek Bean",
"Peter Bickel",
"Purnamrita Sarkar"
] | NeurIPS.cc/2024/Conference | 2401.08468 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GUccmOMBv6 | @inproceedings{
brandfonbrener2024colorfilter,
title={CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-training},
author={David Brandfonbrener and Hanlin Zhang and Andreas Kirsch and Jonathan Richard Schwarz and Sham M. Kakade},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GUccmOMBv6}
} | Selecting high-quality data for pre-training is crucial in shaping the downstream task performance of language models. A major challenge lies in identifying this optimal subset, a problem generally considered intractable, thus necessitating scalable and effective heuristics. In this work, we propose a data selection method, CoLoR-Filter (Conditional Loss Reduction Filtering), which leverages an empirical Bayes-inspired approach to derive a simple and computationally efficient selection criterion based on the relative loss values of two auxiliary models.
In addition to the modeling rationale, we evaluate CoLoR-Filter empirically on two language modeling tasks: (1) selecting data from C4 for domain adaptation to evaluation on Books and (2) selecting data from C4 for a suite of downstream multiple-choice question answering tasks. We demonstrate favorable scaling both as we subselect more aggressively and using small auxiliary models to select data for large target models. As one headline result, CoLoR-Filter data selected using a pair of 150m parameter auxiliary models can train a 1.2b parameter target model to match a 1.2b parameter model trained on 25b randomly selected tokens with 25x less data for Books and 11x less data for the downstream tasks.
Code: https://github.com/davidbrandfonbrener/color-filter-olmo
Filtered data: https://huggingface.co/datasets/davidbrandfonbrener/color-filtered-c4 | CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-training | [
"David Brandfonbrener",
"Hanlin Zhang",
"Andreas Kirsch",
"Jonathan Richard Schwarz",
"Sham M. Kakade"
] | NeurIPS.cc/2024/Conference | 2406.10670 | [
"https://github.com/davidbrandfonbrener/color-filter-olmo"
] | https://huggingface.co/papers/2406.10670 | 3 | 4 | 1 | 5 | [] | [
"davidbrandfonbrener/color-filtered-c4"
] | [] | [] | [
"davidbrandfonbrener/color-filtered-c4"
] | [] | 1 | poster |
null | https://openreview.net/forum?id=GTDKo3Sv9p | @inproceedings{
gat2024discrete,
title={Discrete Flow Matching},
author={Itai Gat and Tal Remez and Neta Shaul and Felix Kreuk and Ricky T. Q. Chen and Gabriel Synnaeve and Yossi Adi and Yaron Lipman},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GTDKo3Sv9p}
} | Despite Flow Matching and diffusion models having emerged as powerful generative paradigms for continuous variables such as images and videos, their application to high-dimensional discrete data, such as language, is still limited. In this work, we present Discrete Flow Matching, a novel discrete flow paradigm designed specifically for generating discrete data. Discrete Flow Matching offers several key contributions: (i) it works with a general family of probability paths interpolating between source and target distributions; (ii) it allows for a generic formula for sampling from these probability paths using learned posteriors such as the probability denoiser ($x$-prediction) and noise-prediction ($\epsilon$-prediction); (iii) practically, focusing on specific probability paths defined with different schedulers improves generative perplexity compared to previous discrete diffusion and flow models; and (iv) by scaling Discrete Flow Matching models up to 1.7B parameters, we reach 6.7% Pass@1 and 13.4% Pass@10 on HumanEval and 6.7% Pass@1 and 20.6% Pass@10 on 1-shot MBPP coding benchmarks. Our approach is capable of generating high-quality discrete data in a non-autoregressive fashion, significantly closing the gap between autoregressive models and discrete flow models. | Discrete Flow Matching | [
"Itai Gat",
"Tal Remez",
"Neta Shaul",
"Felix Kreuk",
"Ricky T. Q. Chen",
"Gabriel Synnaeve",
"Yossi Adi",
"Yaron Lipman"
] | NeurIPS.cc/2024/Conference | 2407.15595 | [
""
] | https://huggingface.co/papers/2407.15595 | 5 | 11 | 2 | 8 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=GRmQjLzaPM | @inproceedings{
zhou2024behaviorgpt,
title={Behavior{GPT}: Smart Agent Simulation for Autonomous Driving with Next-Patch Prediction},
author={Zikang Zhou and Haibo HU and Xinhong Chen and Jianping Wang and Nan Guan and Kui Wu and Yung-Hui Li and Yu-Kai Huang and Chun Jason Xue},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GRmQjLzaPM}
} | Simulating realistic behaviors of traffic agents is pivotal for efficiently validating the safety of autonomous driving systems. Existing data-driven simulators primarily use an encoder-decoder architecture to encode the historical trajectories before decoding the future. However, the heterogeneity between encoders and decoders complicates the models, and the manual separation of historical and future trajectories leads to low data utilization. Given these limitations, we propose BehaviorGPT, a homogeneous and fully autoregressive Transformer designed to simulate the sequential behavior of multiple agents. Crucially, our approach discards the traditional separation between "history" and "future" by modeling each time step as the "current" one for motion generation, leading to a simpler, more parameter- and data-efficient agent simulator. We further introduce the Next-Patch Prediction Paradigm (NP3) to mitigate the negative effects of autoregressive modeling, in which models are trained to reason at the patch level of trajectories and capture long-range spatial-temporal interactions. Despite having merely 3M model parameters, BehaviorGPT won first place in the 2024 Waymo Open Sim Agents Challenge with a realism score of 0.7473 and a minADE score of 1.4147, demonstrating its exceptional performance in traffic agent simulation. | BehaviorGPT: Smart Agent Simulation for Autonomous Driving with Next-Patch Prediction | [
"Zikang Zhou",
"Haibo HU",
"Xinhong Chen",
"Jianping Wang",
"Nan Guan",
"Kui Wu",
"Yung-Hui Li",
"Yu-Kai Huang",
"Chun Jason Xue"
] | NeurIPS.cc/2024/Conference | 2405.17372 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GQrk0WGNiC | @inproceedings{
bu2024pretraining,
title={Pre-training Differentially Private Models with Limited Public Data},
author={Zhiqi Bu and Xinwei Zhang and Sheng Zha and Mingyi Hong and George Karypis},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GQrk0WGNiC}
} | The superior performance of large foundation models can be attributed to the use of massive amounts of high-quality data. However, such datasets often contain sensitive, private and copyrighted material that requires formal protection. While differential privacy (DP) is a prominent method used to gauge the degree of security provided to large foundation models, its application in large foundation models has been met with limited success because there are often significant performance compromises when applying DP during the pre-training phase. Consequently, DP is more commonly implemented during the model fine-tuning stage, hence not capable of protecting a substantial portion of the data used during the initial pre-training process. In this work, we first provide a theoretical understanding of the efficacy of DP training by analyzing the per-iteration improvement of loss through the lens of the Hessian. We observe that DP optimizers' deceleration can be significantly mitigated by the use of limited public data, and thus propose the DP continual pre-training strategy. Our DP continual pre-training on vision models, using only 10% of public data, has achieved DP accuracy of 41.5% on ImageNet-21k (with epsilon=8) and non-DP accuracy of 55.7% on Places365 and 60.0% on iNaturalist-2021, which are on par with state-of-the-art standard pre-training and outperform existing DP pre-trained models. Our DP pre-trained models are released in the *fastDP* library (https://github.com/awslabs/fast-differential-privacy/releases/tag/v2.1) | Pre-training Differentially Private Models with Limited Public Data | [
"Zhiqi Bu",
"Xinwei Zhang",
"Sheng Zha",
"Mingyi Hong",
"George Karypis"
] | NeurIPS.cc/2024/Conference | 2402.18752 | [
"https://github.com/awslabs/fast-differential-privacy"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GQNvvQquO0 | @inproceedings{
patel2024differentially,
title={Differentially Private Set Representations},
author={Sarvar Patel and Giuseppe Persiano and Joon Young Seo and Kevin Yeo},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GQNvvQquO0}
} | We study the problem of differentially private (DP) mechanisms for representing
sets of size $k$ from a large universe.
Our first construction creates
$(\epsilon,\delta)$-DP representations with error probability of
$1/(e^\epsilon + 1)$ using space at most $1.05 k \epsilon \cdot \log(e)$ bits where
the time to construct a representation is $O(k \log(1/\delta))$ while decoding time is $O(\log(1/\delta))$.
We also present a second algorithm for pure $\epsilon$-DP representations with the same error using space at most $k \epsilon \cdot \log(e)$ bits, but requiring large decoding times.
Our algorithms match the lower bounds on privacy-utility trade-offs (including constants but ignoring $\delta$ factors) and we also present a new space lower bound
matching our constructions up to small constant factors.
To obtain our results, we design a new approach that embeds sets into random linear systems,
deviating from most prior approaches, which inject noise into non-private solutions. | Differentially Private Set Representations | [
"Sarvar Patel",
"Giuseppe Persiano",
"Joon Young Seo",
"Kevin Yeo"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GOgKhunkfw | @inproceedings{
kim2024simulationfree,
title={Simulation-Free Training of Neural {ODE}s on Paired Data},
author={Semin Kim and Jaehoon Yoo and Jinwoo Kim and Yeonwoo Cha and Saehoon Kim and Seunghoon Hong},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GOgKhunkfw}
} | In this work, we investigate a method for simulation-free training of Neural Ordinary Differential Equations (NODEs) for learning deterministic mappings between paired data. Despite the analogy of NODEs as continuous-depth residual networks, their application in typical supervised learning tasks has not been popular, mainly due to the large number of function evaluations required by ODE solvers and numerical instability in gradient estimation. To alleviate this problem, we employ the flow matching framework for simulation-free training of NODEs, which directly regresses the parameterized dynamics function to a predefined target velocity field. Contrary to generative tasks, however, we show that applying flow matching directly between paired data can often lead to an ill-defined flow that breaks the coupling of the data pairs (e.g., due to crossing trajectories). We propose a simple extension that applies flow matching in the embedding space of data pairs, where the embeddings are learned jointly with the dynamic function to ensure the validity of the flow which is also easier to learn. We demonstrate the effectiveness of our method on both regression and classification tasks, where our method outperforms existing NODEs with a significantly lower number of function evaluations. The code is available at https://github.com/seminkim/simulation-free-node. | Simulation-Free Training of Neural ODEs on Paired Data | [
"Semin Kim",
"Jaehoon Yoo",
"Jinwoo Kim",
"Yeonwoo Cha",
"Saehoon Kim",
"Seunghoon Hong"
] | NeurIPS.cc/2024/Conference | 2410.22918 | [
"https://github.com/seminkim/simulation-free-node"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GNhrGRCerd | @inproceedings{
liu2024trapmid,
title={Trap-{MID}: Trapdoor-based Defense against Model Inversion Attacks},
author={Zhen-Ting Liu and Shang-Tse Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GNhrGRCerd}
} | Model Inversion (MI) attacks pose a significant threat to the privacy of Deep Neural Networks by recovering training data distribution from well-trained models. While existing defenses often rely on regularization techniques to reduce information leakage, they remain vulnerable to recent attacks. In this paper, we propose the Trapdoor-based Model Inversion Defense (Trap-MID) to mislead MI attacks. A trapdoor is integrated into the model to predict a specific label when the input is injected with the corresponding trigger. Consequently, this trapdoor information serves as the "shortcut" for MI attacks, leading them to extract trapdoor triggers rather than private data. We provide theoretical insights into the impacts of trapdoor's effectiveness and naturalness on deceiving MI attacks. In addition, empirical experiments demonstrate the state-of-the-art defense performance of Trap-MID against various MI attacks without the requirements for extra data or large computational overhead. Our source code is publicly available at https://github.com/ntuaislab/Trap-MID. | Trap-MID: Trapdoor-based Defense against Model Inversion Attacks | [
"Zhen-Ting Liu",
"Shang-Tse Chen"
] | NeurIPS.cc/2024/Conference | 2411.08460 | [
"https://github.com/ntuaislab/trap-mid"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GNSMl1P5VR | @inproceedings{
hu2024visual,
title={Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models},
author={Yushi Hu and Weijia Shi and Xingyu Fu and Dan Roth and Mari Ostendorf and Luke Zettlemoyer and Noah A. Smith and Ranjay Krishna},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GNSMl1P5VR}
} | Humans draw to facilitate reasoning: we draw auxiliary lines when solving geometry problems; we mark and circle when reasoning on maps; we use sketches to amplify our ideas and relieve our limited-capacity working memory. However, such actions are missing in current multimodal language models (LMs). Current chain-of-thought and tool-use paradigms only use text as intermediate reasoning steps. In this work, we introduce Sketchpad, a framework that gives multimodal LMs a visual sketchpad and tools to draw on the sketchpad. The LM conducts planning and reasoning according to the visual artifacts it has drawn. Different from prior work, which uses text-to-image models to enable LMs to draw, Sketchpad enables LMs to draw with lines, boxes, marks, etc., which is closer to human sketching and better facilitates reasoning. Sketchpad can also use specialist vision models during the sketching process (e.g., draw bounding boxes with object detection models, draw masks with segmentation models), to further enhance visual perception and reasoning. We experiment on a wide range of math tasks (including geometry, functions, graph, chess) and complex visual reasoning tasks. Sketchpad substantially improves performance on all tasks over strong base models with no sketching, yielding an average gain of 12.7% on math tasks, and 8.6% on vision tasks. GPT-4o with Sketchpad sets a new state of the art on all tasks, including V*Bench (80.3%), BLINK spatial reasoning (83.9%), and visual correspondence (80.8%). We will release all code and data. | Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models | [
"Yushi Hu",
"Weijia Shi",
"Xingyu Fu",
"Dan Roth",
"Mari Ostendorf",
"Luke Zettlemoyer",
"Noah A. Smith",
"Ranjay Krishna"
] | NeurIPS.cc/2024/Conference | 2406.09403 | [
""
] | https://huggingface.co/papers/2406.09403 | 5 | 19 | 1 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=GN2qbxZlni | @inproceedings{
zeng2024mrben,
title={{MR}-Ben: A Meta-Reasoning Benchmark for Evaluating System-2 Thinking in {LLM}s},
author={Zhongshen Zeng and Yinhong Liu and Yingjia Wan and Jingyao Li and Pengguang Chen and Jianbo Dai and Yuxuan Yao and Rongwu Xu and Zehan Qi and Wanru Zhao and Linling Shen and Jianqiao Lu and Haochen Tan and Yukang Chen and Hao Zhang and Zhan Shi and Bailin Wang and Zhijiang Guo and Jiaya Jia},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GN2qbxZlni}
} | Large language models (LLMs) have shown increasing capability in problem-solving and decision-making, largely based on the step-by-step chain-of-thought reasoning processes. However, evaluating these reasoning abilities has become increasingly challenging. Existing outcome-based benchmarks are beginning to saturate, becoming less effective in tracking meaningful progress. To address this, we present a process-based benchmark MR-Ben that demands a meta-reasoning skill, where LMs are asked to locate and analyse potential errors in automatically generated reasoning steps. Our meta-reasoning paradigm is especially suited for system-2 slow thinking, mirroring the human cognitive process of carefully examining assumptions, conditions, calculations, and logic to identify mistakes. MR-Ben comprises 5,975 questions curated by human experts across a wide range of subjects, including physics, chemistry, logic, coding, and more. Through our designed metrics for assessing meta-reasoning on this benchmark, we identify interesting limitations and weaknesses of current LLMs (open-source and closed-source models). For example, with models like the o1 series from OpenAI demonstrating strong performance by effectively scrutinizing the solution space, many other state-of-the-art models fall significantly behind on MR-Ben, exposing potential shortcomings in their training strategies and inference methodologies. | MR-Ben: A Meta-Reasoning Benchmark for Evaluating System-2 Thinking in LLMs | [
"Zhongshen Zeng",
"Yinhong Liu",
"Yingjia Wan",
"Jingyao Li",
"Pengguang Chen",
"Jianbo Dai",
"Yuxuan Yao",
"Rongwu Xu",
"Zehan Qi",
"Wanru Zhao",
"Linling Shen",
"Jianqiao Lu",
"Haochen Tan",
"Yukang Chen",
"Hao Zhang",
"Zhan Shi",
"Bailin Wang",
"Zhijiang Guo",
"Jiaya Jia"
] | NeurIPS.cc/2024/Conference | 2406.13975 | [
""
] | https://huggingface.co/papers/2406.13975 | 2 | 0 | 0 | 19 | [] | [
"Randolphzeng/Mr-Ben"
] | [] | [] | [
"Randolphzeng/Mr-Ben"
] | [] | 1 | poster |
null | https://openreview.net/forum?id=GN2GXjPyN8 | @inproceedings{
zhou2024antigenspecific,
title={Antigen-Specific Antibody Design via Direct Energy-based Preference Optimization},
author={Xiangxin Zhou and Dongyu Xue and Ruizhe Chen and Zaixiang Zheng and Liang Wang and Quanquan Gu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GN2GXjPyN8}
} | Antibody design, a crucial task with significant implications across various disciplines such as therapeutics and biology, presents considerable challenges due to its intricate nature. In this paper, we tackle antigen-specific antibody sequence-structure co-design as an optimization problem towards specific preferences, considering both rationality and functionality. Leveraging a pre-trained conditional diffusion model that jointly models sequences and structures of antibodies with equivariant neural networks, we propose direct energy-based preference optimization to guide the generation of antibodies with both rational structures and considerable binding affinities to given antigens. Our method involves fine-tuning the pre-trained diffusion model using a residue-level decomposed energy preference. Additionally, we employ gradient surgery to address conflicts between various types of energy, such as attraction and repulsion. Experiments on RAbD benchmark show that our approach effectively optimizes the energy of generated antibodies and achieves state-of-the-art performance in designing high-quality antibodies with low total energy and high binding affinity simultaneously, demonstrating the superiority of our approach. | Antigen-Specific Antibody Design via Direct Energy-based Preference Optimization | [
"Xiangxin Zhou",
"Dongyu Xue",
"Ruizhe Chen",
"Zaixiang Zheng",
"Liang Wang",
"Quanquan Gu"
] | NeurIPS.cc/2024/Conference | 2403.16576 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GMsi9966DR | @inproceedings{
tao2024deepite,
title={Deep{ITE}: Designing Variational Graph Autoencoders for Intervention Target Estimation},
author={Hongyuan Tao and Hang Yu and Jianguo Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GMsi9966DR}
} | Intervention Target Estimation (ITE) is vital for both understanding and decision-making in complex systems, yet it remains underexplored. Current ITE methods are hampered by their inability to learn from distinct intervention instances collaboratively and to incorporate rich insights from labeled data, which leads to inefficiencies such as the need for re-estimation of intervention targets with minor data changes or alterations in causal graphs. In this paper, we propose DeepITE, an innovative deep learning framework designed around a variational graph autoencoder. DeepITE can concurrently learn from both unlabeled and labeled data with different intervention targets and causal graphs, harnessing correlated information in a self or semi-supervised manner. The model's inference capabilities allow for the immediate identification of intervention targets on unseen samples and novel causal graphs, circumventing the need for retraining. Our extensive testing confirms that DeepITE not only surpasses 13 baseline methods in the Recall@k metric but also demonstrates expeditious inference times, particularly on large graphs. Moreover, incorporating a modest fraction of labeled data (5-10\%) substantially enhances DeepITE's performance, further solidifying its practical applicability. Our source code is available at https://github.com/alipay/DeepITE. | DeepITE: Designing Variational Graph Autoencoders for Intervention Target Estimation | [
"Hongyuan Tao",
"Hang Yu",
"Jianguo Li"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GLUIuli3Sm | @inproceedings{
haimovich2024on,
title={On the Convergence of Loss and Uncertainty-based Active Learning Algorithms},
author={Daniel Haimovich and Dima Karamshuk and Fridolin Linder and Niek Tax and Milan Vojnovic},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GLUIuli3Sm}
} | We investigate the convergence rates and data sample sizes required for training a machine learning model using a stochastic gradient descent (SGD) algorithm, where data points are sampled based on either their loss value or uncertainty value. These training methods are particularly relevant for active learning and data subset selection problems. For SGD with a constant step size update, we present convergence results for linear classifiers and linearly separable datasets using squared hinge loss and similar training loss functions. Additionally, we extend our analysis to more general classifiers and datasets, considering a wide range of loss-based sampling strategies and smooth convex training loss functions. We propose a novel algorithm called Adaptive-Weight Sampling (AWS) that utilizes SGD with an adaptive step size that achieves stochastic Polyak's step size in expectation. We establish convergence rate results for AWS for smooth convex training loss functions. Our numerical experiments demonstrate the efficiency of AWS on various datasets by using either exact or estimated loss values. | On the Convergence of Loss and Uncertainty-based Active Learning Algorithms | [
"Daniel Haimovich",
"Dima Karamshuk",
"Fridolin Linder",
"Niek Tax",
"Milan Vojnovic"
] | NeurIPS.cc/2024/Conference | 2312.13927 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GJMYvWzjE1 | @inproceedings{
he2024language,
title={Language Models as Hierarchy Encoders},
author={Yuan He and Moy Yuan and Jiaoyan Chen and Ian Horrocks},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GJMYvWzjE1}
} | Interpreting hierarchical structures latent in language is a key limitation of current language models (LMs). While previous research has implicitly leveraged these hierarchies to enhance LMs, approaches for their explicit encoding are yet to be explored. To address this, we introduce a novel approach to re-train transformer encoder-based LMs as Hierarchy Transformer encoders (HiTs), harnessing the expansive nature of hyperbolic space. Our method situates the output embedding space of pre-trained LMs within a Poincaré ball with a curvature that adapts to the embedding dimension, followed by re-training on hyperbolic clustering and centripetal losses. These losses are designed to effectively cluster related entities (input as texts) and organise them hierarchically. We evaluate HiTs against pre-trained LMs, standard fine-tuned LMs, and several hyperbolic embedding baselines, focusing on their capabilities in simulating transitive inference, predicting subsumptions, and transferring knowledge across hierarchies. The results demonstrate that HiTs consistently outperform all baselines in these tasks, underscoring the effectiveness and transferability of our re-trained hierarchy encoders. | Language Models as Hierarchy Encoders | [
"Yuan He",
"Moy Yuan",
"Jiaoyan Chen",
"Ian Horrocks"
] | NeurIPS.cc/2024/Conference | 2401.11374 | [
"https://github.com/krr-oxford/hierarchytransformers"
] | https://huggingface.co/papers/2401.11374 | 0 | 0 | 0 | 4 | [
"Hierarchy-Transformers/HiT-MiniLM-L12-WordNetNoun",
"Hierarchy-Transformers/HiT-MiniLM-L6-WordNetNoun",
"Hierarchy-Transformers/HiT-MPNet-WordNetNoun",
"Hierarchy-Transformers/HiT-MiniLM-L12-SnomedCT"
] | [
"Hierarchy-Transformers/WordNetNoun"
] | [] | [
"Hierarchy-Transformers/HiT-MiniLM-L12-WordNetNoun",
"Hierarchy-Transformers/HiT-MiniLM-L6-WordNetNoun",
"Hierarchy-Transformers/HiT-MPNet-WordNetNoun",
"Hierarchy-Transformers/HiT-MiniLM-L12-SnomedCT"
] | [
"Hierarchy-Transformers/WordNetNoun"
] | [] | 1 | poster |
null | https://openreview.net/forum?id=GJ0qIevGjD | @inproceedings{
chen2024beyond,
title={Beyond Efficiency: Molecular Data Pruning for Enhanced Generalization},
author={Dingshuo Chen and Zhixun Li and Yuyan Ni and Guibin Zhang and Ding Wang and Qiang Liu and Shu Wu and Jeffrey Xu Yu and Liang Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GJ0qIevGjD}
} | With the emergence of various molecular tasks and massive datasets, how to perform efficient training has become an urgent yet under-explored issue in the area. Data pruning (DP), as an oft-stated approach to saving training burdens, filters out less influential samples to form a coreset for training. However, the increasing reliance on pretrained models for molecular tasks renders traditional in-domain DP methods incompatible. Therefore, we propose a **Mol**ecular data **P**runing framework for **e**nhanced **G**eneralization (**MolPeg**), which focuses on the source-free data pruning scenario, where data pruning is applied with pretrained models. By maintaining two models with different updating paces during training, we introduce a novel scoring function to measure the informativeness of samples based on the loss discrepancy. As a plug-and-play framework, MolPeg realizes the perception of both source and target domains and consistently outperforms existing DP methods across four downstream tasks. Remarkably, it can surpass the performance obtained from full-dataset training, even when pruning up to 60-70% of the data on the HIV and PCBA datasets. Our work suggests that the discovery of effective data-pruning metrics could provide a viable path to both enhanced efficiency and superior generalization in transfer learning. | Beyond Efficiency: Molecular Data Pruning for Enhanced Generalization | [
"Dingshuo Chen",
"Zhixun Li",
"Yuyan Ni",
"Guibin Zhang",
"Ding Wang",
"Qiang Liu",
"Shu Wu",
"Jeffrey Xu Yu",
"Liang Wang"
] | NeurIPS.cc/2024/Conference | 2409.01081 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GHqw3xLAvd | @inproceedings{
clayton2024differentiable,
title={Differentiable Quantum Computing for Large-scale Linear Control},
author={Connor Clayton and Jiaqi Leng and Gengzhi Yang and Yi-Ling Qiao and Ming Lin and Xiaodi Wu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GHqw3xLAvd}
} | As industrial models and designs grow increasingly complex, the demand for optimal control of large-scale dynamical systems has significantly increased. However, traditional methods for optimal control incur significant overhead as problem dimensions grow. In this paper, we introduce an end-to-end quantum algorithm for linear-quadratic control with provable speedups. Our algorithm, based on a policy gradient method, incorporates a novel quantum subroutine for solving the matrix Lyapunov equation. Specifically, we build a *quantum-assisted differentiable simulator* for efficient gradient estimation that is more accurate and robust than classical methods relying on stochastic approximation. Compared to the classical approaches, our method achieves a *super-quadratic* speedup. To the best of our knowledge, this is the first end-to-end quantum application to linear control problems with provable quantum advantage. | Differentiable Quantum Computing for Large-scale Linear Control | [
"Connor Clayton",
"Jiaqi Leng",
"Gengzhi Yang",
"Yi-Ling Qiao",
"Ming Lin",
"Xiaodi Wu"
] | NeurIPS.cc/2024/Conference | 2411.01391 | [
"https://github.com/YilingQiao/diff_lqr"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GEbnPxD9EF | @inproceedings{
tan2024consistency,
title={Consistency of Neural Causal Partial Identification},
author={Jiyuan Tan and Jose Blanchet and Vasilis Syrgkanis},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GEbnPxD9EF}
} | Recent progress in Neural Causal Models (NCMs) showcased how identification and partial identification of causal effects can be automatically carried out via training of neural generative models that respect the constraints encoded in a given causal graph [Xia et al. 2022, Balazadeh et al. 2022]. However, formal consistency of these methods has only been proven for the case of discrete variables or only for linear causal models. In this work, we prove the consistency of partial identification via NCMs in a general setting with both continuous and categorical variables. Further, our results highlight the impact of the design of the underlying neural network architecture in terms of depth and connectivity as well as the importance of applying Lipschitz regularization in the training phase. In particular, we provide a counterexample showing that without Lipschitz regularization this method may not be asymptotically consistent. Our results are enabled by new results on the approximability of Structural Causal Models (SCMs) via neural generative models, together with an analysis of the sample complexity of the resulting architectures and how that translates into an error in the constrained optimization problem that defines the partial identification bounds. | Consistency of Neural Causal Partial Identification | [
"Jiyuan Tan",
"Jose Blanchet",
"Vasilis Syrgkanis"
] | NeurIPS.cc/2024/Conference | 2405.15673 | [
"https://github.com/jiyuan-tan/neuralpartialid"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GDz8rkfikp | @inproceedings{
bui2024erasing,
title={Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation},
author={Anh Tuan Bui and Long Tung Vuong and Khanh Doan and Trung Le and Paul Montague and Tamas Abraham and Dinh Phung},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GDz8rkfikp}
} | Diffusion models excel at generating visually striking content from text but can inadvertently produce undesirable or harmful content when trained on unfiltered internet data. A practical solution is to selectively remove target concepts from the model, but this may impact the remaining concepts. Prior approaches have tried to balance this by introducing a loss term to preserve neutral content or a regularization term to minimize changes in the model parameters, yet resolving this trade-off remains challenging. In this work, we propose to identify and preserve the concepts most affected by parameter changes, termed *adversarial concepts*. This approach ensures stable erasure with minimal impact on the other concepts. We demonstrate the effectiveness of our method using the Stable Diffusion model, showing that it outperforms state-of-the-art erasure methods in eliminating unwanted content while maintaining the integrity of other unrelated elements. Our code is available at \url{https://github.com/tuananhbui89/Erasing-Adversarial-Preservation}. | Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation | [
"Anh Tuan Bui",
"Long Tung Vuong",
"Khanh Doan",
"Trung Le",
"Paul Montague",
"Tamas Abraham",
"Dinh Phung"
] | NeurIPS.cc/2024/Conference | 2410.15618 | [
"https://github.com/tuananhbui89/erasing-adversarial-preservation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GDNZajKrML | @inproceedings{
yong2024glnerf,
title={{GL}-Ne{RF}: Gauss-Laguerre Quadrature Enables Training-Free Ne{RF} Acceleration},
author={Silong Yong and Yaqi Xie and Simon Stepputtis and Katia P. Sycara},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GDNZajKrML}
} | Volume rendering in neural radiance fields is inherently time-consuming due to the large number of MLP calls on the points sampled per ray. Previous works would address this issue by introducing new neural networks or data structures. In this work, we propose GL-NeRF, a new perspective of computing volume rendering with the Gauss-Laguerre quadrature. GL-NeRF significantly reduces the number of MLP calls needed for volume rendering, introducing no additional data structures or neural networks. The simple formulation makes adopting GL-NeRF in any NeRF model possible. In the paper, we first justify the use of the Gauss-Laguerre quadrature and then demonstrate this plug-and-play attribute by implementing it in two different NeRF models. We show that with a minimal drop in performance, GL-NeRF can significantly reduce the number of MLP calls, showing the potential to speed up any NeRF model. Code can be found in project page https://silongyong.github.io/GL-NeRF_project_page/. | GL-NeRF: Gauss-Laguerre Quadrature Enables Training-Free NeRF Acceleration | [
"Silong Yong",
"Yaqi Xie",
"Simon Stepputtis",
"Katia P. Sycara"
] | NeurIPS.cc/2024/Conference | 2410.19831 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GCmmy4At6i | @inproceedings{
tong2024lightweight,
title={Lightweight Frequency Masker for Cross-Domain Few-Shot Semantic Segmentation},
author={Jintao Tong and Yixiong Zou and Yuhua Li and Ruixuan Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GCmmy4At6i}
} | Cross-domain few-shot segmentation (CD-FSS) is proposed to first pre-train the model on a large-scale source-domain dataset, and then transfer the model to data-scarce target-domain datasets for pixel-level segmentation. The significant domain gap between the source and target datasets leads to a sharp decline in the performance of existing few-shot segmentation (FSS) methods in cross-domain scenarios. In this work, we discover an intriguing phenomenon: simply filtering different frequency components for target domains can lead to a significant performance improvement, sometimes even as high as 14% mIoU. Then, we delve into this phenomenon for an interpretation, and find such improvements stem from the reduced inter-channel correlation in feature maps, which benefits CD-FSS with enhanced robustness against domain gaps and larger activated regions for segmentation. Based on this, we propose a lightweight frequency masker, which further reduces channel correlations by an amplitude-phase-masker (APM) module and an Adaptive Channel Phase Attention (ACPA) module. Notably, APM introduces only 0.01% additional parameters but improves the average performance by over 10%, and ACPA imports only 2.5% parameters but further improves the performance by over 1.5%, which significantly surpasses the state-of-the-art CD-FSS methods. | Lightweight Frequency Masker for Cross-Domain Few-Shot Semantic Segmentation | [
"Jintao Tong",
"Yixiong Zou",
"Yuhua Li",
"Ruixuan Li"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GB5a0RRYuv | @inproceedings{
ye2024construction,
title={Construction and Application of Materials Knowledge Graph in Multidisciplinary Materials Science via Large Language Model},
author={Yanpeng Ye and Jie Ren and Shaozhou Wang and Yuwei Wan and Imran Razzak and Bram Hoex and Haofen Wang and Tong Xie and Wenjie Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GB5a0RRYuv}
} | Knowledge in materials science is widely dispersed across extensive scientific literature, posing significant challenges for efficient discovery and integration of new materials. Traditional methods, often reliant on costly and time-consuming experimental approaches, further complicate rapid innovation. Addressing these challenges, the integration of artificial intelligence with materials science has opened avenues for accelerating the discovery process, though it also demands precise annotation, data extraction, and traceability of information. To tackle these issues, this article introduces the Materials Knowledge Graph (MKG), which utilizes advanced natural language processing techniques, integrated with large language models, to extract and systematically organize a decade's worth of high-quality research into structured triples, containing 162,605 nodes and 731,772 edges. MKG categorizes information into comprehensive labels such as Name, Formula, and Application, structured around a meticulously designed ontology, thus enhancing data usability and integration. By implementing network-based algorithms, MKG not only facilitates efficient link prediction but also significantly reduces reliance on traditional experimental methods. This structured approach not only streamlines materials research but also lays the groundwork for more sophisticated materials knowledge graphs. | Construction and Application of Materials Knowledge Graph in Multidisciplinary Materials Science via Large Language Model | [
"Yanpeng Ye",
"Jie Ren",
"Shaozhou Wang",
"Yuwei Wan",
"Imran Razzak",
"Bram Hoex",
"Haofen Wang",
"Tong Xie",
"Wenjie Zhang"
] | NeurIPS.cc/2024/Conference | 2404.03080 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GA8TVtxudf | @inproceedings{
zhao2024metric,
title={Metric from Human: Zero-shot Monocular Metric Depth Estimation via Test-time Adaptation},
author={Yizhou Zhao and Hengwei Bian and Kaihua Chen and Pengliang Ji and Liao Qu and Shao-yu Lin and Weichen Yu and Haoran Li and Hao Chen and Jun Shen and Bhiksha Raj and Min Xu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=GA8TVtxudf}
} | Monocular depth estimation (MDE) is fundamental for deriving 3D scene structures from 2D images. While state-of-the-art monocular relative depth estimation (MRDE) excels in estimating relative depths for in-the-wild images, current monocular metric depth estimation (MMDE) approaches still face challenges in handling unseen scenes. Since MMDE can be viewed as the composition of MRDE and metric scale recovery, we attribute this difficulty to scene dependency, where MMDE models rely on scenes observed during supervised training for predicting scene scales during inference. To address this issue, we propose to use humans as landmarks for distilling scene-independent metric scale priors from generative painting models. Our approach, Metric from Human (MfH), bridges from generalizable MRDE to zero-shot MMDE in a generate-and-estimate manner. Specifically, MfH generates humans on the input image with generative painting and estimates human dimensions with an off-the-shelf human mesh recovery (HMR) model. Based on MRDE predictions, it propagates the metric information from painted humans to the contexts, resulting in metric depth estimations for the original input. Through this annotation-free test-time adaptation, MfH achieves superior zero-shot performance in MMDE, demonstrating its strong generalization ability. | Metric from Human: Zero-shot Monocular Metric Depth Estimation via Test-time Adaptation | [
"Yizhou Zhao",
"Hengwei Bian",
"Kaihua Chen",
"Pengliang Ji",
"Liao Qu",
"Shao-yu Lin",
"Weichen Yu",
"Haoran Li",
"Hao Chen",
"Jun Shen",
"Bhiksha Raj",
"Min Xu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=G9OJUgKo4B | @inproceedings{
zhang2024knowledge,
title={Knowledge Composition using Task Vectors with Learned Anisotropic Scaling},
author={Frederic Z. Zhang and Paul Albert and Cristian Rodriguez-Opazo and Anton van den Hengel and Ehsan Abbasnejad},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G9OJUgKo4B}
} | Pre-trained models produce strong generic representations that can be adapted via fine-tuning on specialised datasets. The learned weight difference relative to the pre-trained model, known as a task vector, characterises the direction and stride of fine-tuning that enables the model to capture these specialised representations. The significance of task vectors is such that simple arithmetic operations on them can be used to combine diverse representations from different domains. This paper builds on these properties of task vectors and aims to answer (1) whether components of task vectors, particularly parameter blocks, exhibit similar characteristics, and (2) how such blocks can be used to enhance knowledge composition and transfer. To this end, we introduce aTLAS, an algorithm that linearly combines parameter blocks with different learned coefficients, resulting in anisotropic scaling at the task vector level. We show that such linear combinations explicitly exploit the low intrinsic dimensionality of pre-trained models, with only a few coefficients being the learnable parameters. Furthermore, composition of parameter blocks enables modular learning that effectively leverages the already learned representations, thereby reducing the dependency on large amounts of data. We demonstrate the effectiveness of our method in task arithmetic, few-shot recognition and test-time adaptation, with supervised or unsupervised objectives. In particular, we show that (1) learned anisotropic scaling allows task vectors to be more disentangled, causing less interference in composition; (2) task vector composition excels with scarce or no labelled data and is less prone to domain shift, thus leading to better generalisability; (3) mixing the most informative parameter blocks across different task vectors prior to training can reduce the memory footprint and improve the flexibility of knowledge transfer. 
Moreover, we show the potential of aTLAS as a parameter-efficient fine-tuning method, particularly with less data, and demonstrate that it can be easily scaled up for higher performance. | Knowledge Composition using Task Vectors with Learned Anisotropic Scaling | [
"Frederic Z. Zhang",
"Paul Albert",
"Cristian Rodriguez-Opazo",
"Anton van den Hengel",
"Ehsan Abbasnejad"
] | NeurIPS.cc/2024/Conference | 2407.02880 | [
"https://github.com/fredzzhang/atlas"
] | https://huggingface.co/papers/2407.02880 | 3 | 11 | 2 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=G99BSV9pt5 | @inproceedings{
barbiero2024relational,
title={Relational Concept Bottleneck Models},
author={Pietro Barbiero and Francesco Giannini and Gabriele Ciravegna and Michelangelo Diligenti and Giuseppe Marra},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G99BSV9pt5}
} | The design of interpretable deep learning models working in relational domains poses an open challenge: interpretable deep learning methods, such as Concept Bottleneck Models (CBMs), are not designed to solve relational problems, while relational deep learning models, such as Graph Neural Networks (GNNs), are not as interpretable as CBMs. To overcome these limitations, we propose Relational Concept Bottleneck Models (R-CBMs), a family of relational deep learning methods providing interpretable task predictions. As special cases, we show that R-CBMs are capable of both representing standard CBMs and message passing GNNs. To evaluate the effectiveness and versatility of these models, we designed a class of experimental problems, ranging from image classification to link prediction in knowledge graphs. In particular we show that R-CBMs (i) match generalization performance of existing relational black-boxes, (ii) support the generation of quantified concept-based explanations, (iii) effectively respond to test-time interventions, and (iv) withstand demanding settings including out-of-distribution scenarios, limited training data regimes, and scarce concept supervisions. | Relational Concept Bottleneck Models | [
"Pietro Barbiero",
"Francesco Giannini",
"Gabriele Ciravegna",
"Michelangelo Diligenti",
"Giuseppe Marra"
] | NeurIPS.cc/2024/Conference | 2308.11991 | [
"https://github.com/diligmic/rcbm-neurips2024"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=G8aS48B9bm | @inproceedings{
malinovsky2024byzantine,
title={Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just Clip Gradient Differences},
author={Grigory Malinovsky and Peter Richt{\'a}rik and Samuel Horv{\'a}th and Eduard Gorbunov},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G8aS48B9bm}
} | Distributed learning has emerged as a leading paradigm for training large machine learning models. However, in real-world scenarios, participants may be unreliable or malicious, posing a significant challenge to the integrity and accuracy of the trained models. Byzantine fault tolerance mechanisms have been proposed to address these issues, but they often assume full participation from all clients, which is not always practical due to the unavailability of some clients or communication constraints. In our work, we propose the first distributed method with client sampling and provable tolerance to Byzantine workers. The key idea behind the developed method is the use of gradient clipping to control stochastic gradient differences in recursive variance reduction. This allows us to bound the potential harm caused by Byzantine workers, even during iterations when all sampled clients are Byzantine. Furthermore, we incorporate communication compression into the method to enhance communication efficiency. Under general assumptions, we prove convergence rates for the proposed method that match the existing state-of-the-art (SOTA) theoretical results. We also propose a heuristic on how to adjust any Byzantine-robust method to a partial participation scenario via clipping. | Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just Clip Gradient Differences | [
"Grigory Malinovsky",
"Peter Richtárik",
"Samuel Horváth",
"Eduard Gorbunov"
] | NeurIPS.cc/2024/Conference | 2311.14127 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=G89r8Mgi5r | @inproceedings{
chen2024confusionresistant,
title={Confusion-Resistant Federated Learning via Diffusion-Based Data Harmonization on Non-{IID} Data},
author={xiaohong chen and Canran Xiao and Yongmei liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G89r8Mgi5r}
} | Federated learning has become a pivotal distributed learning paradigm, involving collaborative model updates across multiple nodes with private data. However, handling non-i.i.d. (not independent and identically distributed) data and ensuring model consistency across heterogeneous environments present significant challenges. These challenges often lead to model performance degradation and increased difficulty in achieving effective communication among participant models. In this work, we propose Confusion-Resistant Federated Learning via Consistent Diffusion (CRFed), a novel framework designed to address these issues. Our approach introduces a new diffusion-based data harmonization mechanism that includes data augmentation, noise injection, and iterative denoising to ensure consistent model updates across non-i.i.d. data distributions. This mechanism aims to reduce data distribution disparities among participating nodes, enhancing the coordination and consistency of model updates. Moreover, we design a confusion-resistant strategy leveraging an indicator function and adaptive learning rate adjustment to mitigate the adverse effects of data heterogeneity and model inconsistency. Specifically, we calculate importance sampling weights based on the optimal sampling probability, which guides the selection of clients and the sampling of their data, ensuring that model updates are robust and aligned across different nodes. Extensive experiments on benchmark datasets, including MNIST, FashionMNIST, CIFAR-10, CIFAR-100, and NIPD, demonstrate the effectiveness of CRFed in improving accuracy, convergence speed, and overall robustness in federated learning scenarios with severe data heterogeneity. | Confusion-Resistant Federated Learning via Diffusion-Based Data Harmonization on Non-IID Data | [
"xiaohong chen",
"Canran Xiao",
"Yongmei liu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |