bibtex_url (null) | proceedings (stringlengths 42–42) | bibtext (stringlengths 197–848) | abstract (stringlengths 303–3.45k) | title (stringlengths 10–159) | authors (sequencelengths 1–34, ⌀) | id (stringclasses, 44 values) | arxiv_id (stringlengths 0–10) | GitHub (sequencelengths 1–1) | paper_page (stringclasses, 899 values) | n_linked_authors (int64, -1 to 13) | upvotes (int64, -1 to 109) | num_comments (int64, -1 to 13) | n_authors (int64, -1 to 92) | Models (sequencelengths 0–100) | Datasets (sequencelengths 0–19) | Spaces (sequencelengths 0–100) | old_Models (sequencelengths 0–100) | old_Datasets (sequencelengths 0–19) | old_Spaces (sequencelengths 0–100) | paper_page_exists_pre_conf (int64, 0–1) | type (stringclasses, 2 values)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=fAnubdSFpn | @inproceedings{
zhang2024a,
title={A {PID} Controller Approach for Adaptive Probability-dependent Gradient Decay in Model Calibration},
author={Siyuan Zhang and Linbo Xie},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=fAnubdSFpn}
} | Modern deep learning models often exhibit overconfident predictions, inadequately capturing uncertainty. During model optimization, the expected calibration error tends to overfit earlier than classification accuracy, indicating distinct optimization objectives for classification error and calibration error. To ensure consistent optimization of both model accuracy and model calibration, we propose a novel method that incorporates a probability-dependent gradient decay coefficient into the loss function. This coefficient exhibits a strong correlation with the overall confidence level. To maintain model calibration during optimization, we utilize a proportional-integral-derivative (PID) controller to dynamically adjust this gradient decay rate, where the adjustment relies on the proposed relative calibration error feedback in each epoch, thereby preventing the model from exhibiting over-confidence or under-confidence. Within the PID control system framework, the proposed relative calibration error serves as the control system output, providing an indication of the overall confidence level, while the gradient decay rate functions as the controlled variable. Moreover, recognizing the impact of adaptive decay rates on gradient amplitude, we implement an adaptive learning rate mechanism for gradient compensation to prevent inadequate learning from over-small or over-large gradients. Empirical experiments validate the efficacy of our PID-based adaptive gradient decay rate approach, ensuring consistent optimization of model calibration and model accuracy. | A PID Controller Approach for Adaptive Probability-dependent Gradient Decay in Model Calibration | [
"Siyuan Zhang",
"Linbo Xie"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
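A rough illustration of the control loop the abstract describes: a PID update that nudges the gradient decay rate using the per-epoch relative calibration error as feedback. This is a minimal sketch under assumed gains and signal names (`kp`, `ki`, `kd`, `rce` are ours), not the authors' implementation.

```python
class PIDDecayController:
    """Sketch of a PID loop adjusting a gradient decay rate from
    relative calibration error (RCE) feedback; gains are illustrative."""

    def __init__(self, kp=0.5, ki=0.1, kd=0.05, decay=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.decay = decay      # controlled variable: gradient decay rate
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, rce):
        # rce: relative calibration error this epoch (control-system output);
        # its sign indicates over- vs. under-confidence.
        self.integral += rce
        derivative = rce - self.prev_err
        self.prev_err = rce
        self.decay += self.kp * rce + self.ki * self.integral + self.kd * derivative
        self.decay = max(self.decay, 1e-3)  # keep the rate positive
        return self.decay
```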
null | https://openreview.net/forum?id=fAlcxvrOEX | @inproceedings{
blasingame2024adjointdeis,
title={Adjoint{DEIS}: Efficient Gradients for Diffusion Models},
author={Zander W. Blasingame and Chen Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=fAlcxvrOEX}
} | The optimization of the latents and parameters of diffusion models with respect to some differentiable metric defined on the output of the model is a challenging and complex problem. The sampling for diffusion models is done by solving either the *probability flow* ODE or diffusion SDE wherein a neural network approximates the score function allowing a numerical ODE/SDE solver to be used. However, naive backpropagation techniques are memory intensive, requiring the storage of all intermediate states, and face additional complexity in handling the injected noise from the diffusion term of the diffusion SDE. We propose a novel family of bespoke ODE solvers to the continuous adjoint equations for diffusion models, which we call *AdjointDEIS*. We exploit the unique construction of diffusion SDEs to further simplify the formulation of the continuous adjoint equations using *exponential integrators*. Moreover, we provide convergence order guarantees for our bespoke solvers. Significantly, we show that continuous adjoint equations for diffusion SDEs actually simplify to a simple ODE. Lastly, we demonstrate the effectiveness of AdjointDEIS for guided generation with an adversarial attack in the form of the face morphing problem. Our code will be released on our project page [https://zblasingame.github.io/AdjointDEIS/](https://zblasingame.github.io/AdjointDEIS/) | AdjointDEIS: Efficient Gradients for Diffusion Models | [
"Zander W. Blasingame",
"Chen Liu"
] | NeurIPS.cc/2024/Conference | 2405.15020 | [
"https://github.com/zblasingame/adjointdeis"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
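For orientation, the object AdjointDEIS solves: for an ODE $\mathrm{d}\boldsymbol{x}/\mathrm{d}t = f(\boldsymbol{x}, t, \theta)$ with a scalar loss $\mathcal{L}$ on the output, the continuous adjoint state $\boldsymbol{a}(t) = \partial \mathcal{L}/\partial \boldsymbol{x}(t)$ satisfies the standard adjoint ODE (a textbook identity, not the paper's bespoke scheme):

$$
\frac{\mathrm{d}\boldsymbol{a}(t)}{\mathrm{d}t} = -\,\boldsymbol{a}(t)^{\top} \frac{\partial f(\boldsymbol{x}(t), t, \theta)}{\partial \boldsymbol{x}}.
$$

The paper's contribution is bespoke exponential-integrator solvers for this system in the diffusion setting, with convergence-order guarantees.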
null | https://openreview.net/forum?id=fA3RMMl8ii | @inproceedings{
gao2024tactile,
title={Tactile DreamFusion: Exploiting Tactile Sensing for 3D Generation},
author={Ruihan Gao and Kangle Deng and Gengshan Yang and Wenzhen Yuan and Jun-Yan Zhu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=fA3RMMl8ii}
} | 3D generation methods have shown visually compelling results powered by diffusion image priors. However, they often fail to produce realistic geometric details, resulting in overly smooth surfaces or geometric details inaccurately baked in albedo maps. To address this, we introduce a new method that incorporates touch as an additional modality to improve the geometric details of generated 3D assets. We design a lightweight 3D texture field to synthesize visual and tactile textures, guided by diffusion-based distribution matching losses on both visual and tactile domains. Our method ensures the consistency between visual and tactile textures while preserving photorealism. We further present a multi-part editing pipeline that enables us to synthesize different textures across various regions. To our knowledge, we are the first to leverage high-resolution tactile sensing to enhance geometric details for 3D generation tasks. We evaluate our method on both text-to-3D and image-to-3D settings. Our experiments demonstrate that our method provides customized and realistic fine geometric textures while maintaining accurate alignment between two modalities of vision and touch. | Tactile DreamFusion: Exploiting Tactile Sensing for 3D Generation | [
"Ruihan Gao",
"Kangle Deng",
"Gengshan Yang",
"Wenzhen Yuan",
"Jun-Yan Zhu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=f8MrWxlnRz | @inproceedings{
wang2024adaptive,
title={Adaptive Important Region Selection with Reinforced Hierarchical Search for Dense Object Detection},
author={Dingrong Wang and Hitesh Sapkota and Qi Yu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=f8MrWxlnRz}
} | Existing state-of-the-art dense object detection techniques tend to produce a large number of false positive detections on difficult images with complex scenes because they focus on ensuring a high recall. To improve the detection accuracy, we propose an Adaptive Important Region Selection (AIRS) framework guided by Evidential Q-learning coupled with a uniquely designed reward function. Inspired by human visual attention, our detection model conducts object search in a top-down, hierarchical fashion. It starts from the top of the hierarchy with the coarsest granularity and then identifies the potential patches likely to contain objects of interest. It then discards non-informative patches and progressively moves downward on the selected ones for a fine-grained search. The proposed evidential Q-learning systematically encodes epistemic uncertainty in its evidential-Q value to encourage the exploration of unknown patches, especially in the early phase of model training. In this way, the proposed model dynamically balances exploration-exploitation to cover both highly valuable and informative patches. Theoretical analysis and extensive experiments on multiple datasets demonstrate that our proposed framework outperforms the SOTA models. | Adaptive Important Region Selection with Reinforced Hierarchical Search for Dense Object Detection | [
"Dingrong Wang",
"Hitesh Sapkota",
"Qi Yu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=f829mkQMUg | @inproceedings{
zheng2024boundary,
title={Boundary Decomposition for Nadir Objective Vector Estimation},
author={Ruihao Zheng and Zhenkun Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=f829mkQMUg}
} | The nadir objective vector plays a key role in solving multi-objective optimization problems (MOPs), where it is often used to normalize the objective space and guide the search. The current methods for estimating the nadir objective vector perform effectively only on specific MOPs. This paper reveals the limitations of these methods: exact methods can only work on discrete MOPs, while heuristic methods cannot deal with the MOP with a complicated feasible objective region. To fill this gap, we propose a general and rigorous method, namely boundary decomposition for nadir objective vector estimation (BDNE). BDNE scalarizes the MOP into a set of boundary subproblems. By utilizing bilevel optimization, boundary subproblems are optimized and adjusted alternately, thereby refining their optimal solutions to align with the nadir objective vector. We prove that the bilevel optimization identifies the nadir objective vector under mild conditions. We compare BDNE with existing methods on various black-box MOPs. The results conform to the theoretical analysis and show the significant potential of BDNE for real-world application. | Boundary Decomposition for Nadir Objective Vector Estimation | [
"Ruihao Zheng",
"Zhenkun Wang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
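For context, the quantity being estimated is defined componentwise over the Pareto-optimal set $X^*$ of an $m$-objective MOP (the standard definition):

$$
z^{\mathrm{nad}}_i = \max_{x \in X^*} f_i(x), \qquad i = 1, \dots, m,
$$

which, unlike the ideal point $z^{*}_i = \min_{x \in X^*} f_i(x)$, requires knowledge of the entire Pareto set and is therefore much harder to compute.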
null | https://openreview.net/forum?id=f70e6YYFHF | @inproceedings{
kitouni2024the,
title={The Factorization Curse: Which Tokens You Predict Underlie the Reversal Curse and More},
author={Ouail Kitouni and Niklas Nolte and Adina Williams and Michael Rabbat and Diane Bouchacourt and Mark Ibrahim},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=f70e6YYFHF}
} | Today's best language models still struggle with "hallucinations", factually incorrect generations, which impede their ability to reliably retrieve information seen during training. The *reversal curse*, where models cannot recall information when probed in a different order than was encountered during training, exemplifies limitations in information retrieval.
To better understand these limitations, we reframe the reversal curse as a *factorization curse* --- a failure of models to learn the same joint distribution under different factorizations.
To more closely simulate finetuning workflows that train pretrained models on specialized knowledge, we introduce *WikiReversal*, a realistic testbed based on Wikipedia knowledge graphs. Through a series of controlled experiments with increasing levels of realism, including non-reciprocal relations, we find that reliable information retrieval is an inherent failure of the next-token prediction objective used in popular large language models. Moreover, we demonstrate that reliable information retrieval cannot be solved with scale, reversed tokens, or even naive bidirectional-attention training. Consequently, various approaches to finetuning on specialized data would necessarily provide mixed results on downstream tasks, unless the model has already seen the right sequence of tokens.
Across five tasks of varying levels of complexity, our results uncover a promising path forward: factorization-agnostic objectives can significantly mitigate the reversal curse and hint at improved knowledge storage and planning capabilities. | The Factorization Curse: Which Tokens You Predict Underlie the Reversal Curse and More | [
"Ouail Kitouni",
"Niklas Nolte",
"Adina Williams",
"Michael Rabbat",
"Diane Bouchacourt",
"Mark Ibrahim"
] | NeurIPS.cc/2024/Conference | 2406.05183 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
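The "same joint distribution under different factorizations" framing is the chain-rule identity

$$
p(a, b) = p(a)\,p(b \mid a) = p(b)\,p(a \mid b):
$$

a left-to-right next-token objective fits only the first factorization, so a model that has learned $p(b \mid a)$ ("A is B") need not have learned $p(a \mid b)$ ("B is A").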
null | https://openreview.net/forum?id=f63DKIpx0I | @inproceedings{
rauba2024selfhealing,
title={Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments},
author={Paulius Rauba and Nabeel Seedat and Krzysztof Kacprzyk and Mihaela van der Schaar},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=f63DKIpx0I}
} | Real-world machine learning systems often encounter model performance degradation due to distributional shifts in the underlying data generating process (DGP). Existing approaches to addressing shifts, such as concept drift adaptation, are limited by their *reason-agnostic* nature. By choosing from a pre-defined set of actions, such methods implicitly assume that the causes of model degradation are irrelevant to what actions should be taken, limiting their ability to select appropriate adaptations. In this paper, we propose an alternative paradigm to overcome these limitations, called *self-healing machine learning* (SHML). Contrary to previous approaches, SHML autonomously diagnoses the reason for degradation and proposes diagnosis-based corrective actions. We formalize SHML as an optimization problem over a space of adaptation actions to minimize the expected risk under the shifted DGP. We introduce a theoretical framework for self-healing systems and build an agentic self-healing solution *$\mathcal{H}$-LLM* which uses large language models to perform self-diagnosis by reasoning about the structure underlying the DGP, and self-adaptation by proposing and evaluating corrective actions. Empirically, we analyze different components of *$\mathcal{H}$-LLM* to understand *why* and *when* it works, demonstrating the potential of self-healing ML. | Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments | [
"Paulius Rauba",
"Nabeel Seedat",
"Krzysztof Kacprzyk",
"Mihaela van der Schaar"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
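The optimization problem sketched in the abstract can be written as risk minimization over adaptation actions; the notation below is ours, assuming an action space $\mathcal{A}$ and a shifted DGP $\tilde{P}$:

$$
a^{\star} \in \arg\min_{a \in \mathcal{A}} \; \mathbb{E}_{(x, y) \sim \tilde{P}} \big[ \ell\big(f_a(x), y\big) \big],
$$

where $f_a$ denotes the model after applying corrective action $a$.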
null | https://openreview.net/forum?id=f4v7cmm5sC | @inproceedings{
berghaus2024foundation,
title={Foundation Inference Models for Markov Jump Processes},
author={David Berghaus and Kostadin Cvejoski and Patrick Seifner and Cesar Ojeda and Ramses J Sanchez},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=f4v7cmm5sC}
} | Markov jump processes are continuous-time stochastic processes which describe dynamical systems evolving in discrete state spaces. These processes find wide application in the natural sciences and machine learning, but their inference is known to be far from trivial. In this work we introduce a methodology for *zero-shot inference* of Markov jump processes (MJPs), on bounded state spaces, from noisy and sparse observations, which consists of two components. First, a broad probability distribution over families of MJPs, as well as over possible observation times and noise mechanisms, with which we simulate a synthetic dataset of hidden MJPs and their noisy observations. Second, a neural recognition model that processes subsets of the simulated observations, and that is trained to output the initial condition and rate matrix of the target MJP in a supervised way. We empirically demonstrate that *one and the same* (pretrained) recognition model can infer, *in a zero-shot fashion*, hidden MJPs evolving in state spaces of different dimensionalities. Specifically, we infer MJPs which describe (i) discrete flashing ratchet systems, which are a type of Brownian motors, and the conformational dynamics in (ii) molecular simulations, (iii) experimental ion channel data and (iv) simple protein folding models. What is more, we show that our model performs on par with state-of-the-art models which are trained on the target datasets.
Our pretrained model is available online. | Foundation Inference Models for Markov Jump Processes | [
"David Berghaus",
"Kostadin Cvejoski",
"Patrick Seifner",
"Cesar Ojeda",
"Ramses J Sanchez"
] | NeurIPS.cc/2024/Conference | 2406.06419 | [
""
] | https://huggingface.co/papers/2406.06419 | 0 | 0 | 0 | 5 | [
"cvejoski/FIMMJP"
] | [] | [] | [
"cvejoski/FIMMJP"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=f3oHNyqd83 | @inproceedings{
li2024rethinking,
title={Rethinking Transformer for Long Contextual Histopathology Whole Slide Image Analysis},
author={Honglin Li and Yunlong Zhang and Pingyi Chen and Zhongyi Shui and Chenglu Zhu and Lin Yang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=f3oHNyqd83}
} | Histopathology Whole Slide Image (WSI) analysis serves as the gold standard for clinical cancer diagnosis in the daily routines of doctors. To develop computer-aided diagnosis models for histopathology WSIs, previous methods typically employ Multi-Instance Learning to enable slide-level prediction given only slide-level labels.
Among these models, vanilla attention mechanisms without pairwise interactions have traditionally been employed but are unable to model contextual information. More recently, self-attention models have been utilized to address this issue. To alleviate the computational complexity of long sequences in large WSIs, methods like HIPT use region-slicing, and TransMIL employs Nyströmformer as an approximation of full self-attention. Both approaches suffer from suboptimal performance due to the loss of key information. Moreover, their use of absolute positional embeddings struggles to effectively handle long contextual dependencies in shape-varying WSIs.
In this paper, we first analyze how the low-rank nature of the long-sequence attention matrix constrains the representation ability of WSI modelling. Then, we demonstrate that the rank of attention matrix can be improved by focusing on local interactions via a local attention mask. Our analysis shows that the local mask aligns with the attention patterns in the lower layers of the Transformer. Furthermore, the local attention mask can be implemented during chunked attention calculation, reducing the quadratic computational complexity to linear with a small local bandwidth. Additionally, this locality helps the model generalize to unseen or under-fitted positions more easily.
Building on this, we propose a local-global hybrid Transformer for both computational acceleration and local-global information interactions modelling. Our method, Long-contextual MIL (LongMIL), is evaluated through extensive experiments on various WSI tasks to validate its superiority in: 1) overall performance, 2) memory usage and speed, and 3) extrapolation ability compared to previous methods. | Rethinking Transformer for Long Contextual Histopathology Whole Slide Image Analysis | [
"Honglin Li",
"Yunlong Zhang",
"Pingyi Chen",
"Zhongyi Shui",
"Chenglu Zhu",
"Lin Yang"
] | NeurIPS.cc/2024/Conference | 2410.14195 | [
"https://github.com/invoker-ll/long-mil"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
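A generic banded mask of the kind the abstract describes, for intuition only (the paper's chunked implementation and bandwidth choice may differ):

```python
import torch

def local_attention_mask(seq_len: int, bandwidth: int) -> torch.Tensor:
    """Boolean mask letting each token attend only to neighbours within
    `bandwidth` positions; apply with masked_fill before the softmax."""
    idx = torch.arange(seq_len)
    return (idx[None, :] - idx[:, None]).abs() <= bandwidth

mask = local_attention_mask(seq_len=8, bandwidth=2)
# scores.masked_fill_(~mask, float("-inf")) restricts attention locally
```

Restricting attention to a band also reduces the quadratic cost to roughly $O(n \cdot b)$ for bandwidth $b$ when computed chunk-wise, matching the linear-complexity claim above.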
null | https://openreview.net/forum?id=ez7w0Ss4g9 | @inproceedings{
littwin2024how,
title={How {JEPA} Avoids Noisy Features: The Implicit Bias of Deep Linear Self Distillation Networks},
author={Etai Littwin and Omid Saremi and Madhu Advani and Vimal Thilak and Preetum Nakkiran and Chen Huang and Joshua M. Susskind},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ez7w0Ss4g9}
} | Two competing paradigms exist for self-supervised learning of data representations.
Joint Embedding Predictive Architectures (JEPAs) are a class of architectures in which semantically similar inputs are encoded into representations that are predictive of each other. A recent successful approach that falls under the JEPA framework is self-distillation, where an online encoder is trained to predict the output of the target encoder, sometimes with a lightweight predictor network. This is contrasted with the Masked Autoencoder (MAE) paradigm, where an encoder and decoder are trained to reconstruct missing parts of the input in ambient space rather than its latent representation. A common motivation for using the JEPA approach over MAE is that the JEPA objective prioritizes abstract features over fine-grained pixel information (which can be unpredictable and uninformative).
In this work, we seek to understand the mechanism behind this empirical observation by analyzing deep linear models. We uncover a surprising mechanism: in a simplified linear setting where both approaches learn similar representations, JEPAs are biased to learn high influence features, or features characterized by having high regression coefficients. Our results point to a distinct implicit bias of predicting in latent space that may shed light on its success in practice. | How JEPA Avoids Noisy Features: The Implicit Bias of Deep Linear Self Distillation Networks | [
"Etai Littwin",
"Omid Saremi",
"Madhu Advani",
"Vimal Thilak",
"Preetum Nakkiran",
"Chen Huang",
"Joshua M. Susskind"
] | NeurIPS.cc/2024/Conference | 2407.03475 | [
""
] | https://huggingface.co/papers/2407.03475 | 0 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=eygv0JRvTL | @inproceedings{
ziomek2024bayesian,
title={Bayesian Optimisation with Unknown Hyperparameters: Regret Bounds Logarithmically Closer to Optimal},
author={Juliusz Ziomek and Masaki Adachi and Michael A Osborne},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eygv0JRvTL}
} | Bayesian Optimization (BO) is widely used for optimising black-box functions but requires us to specify the length scale hyperparameter, which defines the smoothness of the functions the optimizer will consider. Most current BO algorithms choose this hyperparameter by maximizing the marginal likelihood of the observed data, albeit risking misspecification if the objective function is less smooth in regions we have not yet explored. The only prior solution addressing this problem with theoretical guarantees was A-GP-UCB, proposed by Berkenkamp et al. (2019). This algorithm progressively decreases the length scale, expanding the class of functions considered by the optimizer. However, A-GP-UCB lacks a stopping mechanism, leading to over-exploration and slow convergence. To overcome this, we introduce Length scale Balancing (LB) - a novel approach, aggregating multiple base surrogate models with varying length scales. LB intermittently adds smaller length scale candidate values while retaining longer scales, balancing exploration and exploitation. We formally derive a cumulative regret bound of LB and compare it with the regret of an oracle BO algorithm using the optimal length scale. Denoting the factor by which the regret bound of A-GP-UCB was away from oracle as $g(T)$, we show that LB is only $\log g(T)$ away from oracle regret. We also empirically evaluate our algorithm on synthetic and real-world benchmarks and show it outperforms A-GP-UCB and maximum likelihood estimation. | Bayesian Optimisation with Unknown Hyperparameters: Regret Bounds Logarithmically Closer to Optimal | [
"Juliusz Ziomek",
"Masaki Adachi",
"Michael A Osborne"
] | NeurIPS.cc/2024/Conference | 2410.10384 | [
"https://github.com/juliuszziomek/lb-gp-ucb"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eyfYC19gOd | @inproceedings{
xu2024gridd,
title={Grid4D: 4D Decomposed Hash Encoding for High-fidelity Dynamic Gaussian Splatting},
author={Jiawei Xu and Zexin Fan and Jian Yang and Jin Xie},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eyfYC19gOd}
} | Recently, Gaussian splatting has received more and more attention in the field of static scene rendering. Due to the low computational overhead and inherent flexibility of explicit representations, plane-based explicit methods are popular ways to predict deformations for Gaussian-based dynamic scene rendering models. However, plane-based methods rely on the inappropriate low-rank assumption and excessively decompose the space-time 4D encoding, resulting in overmuch feature overlap and unsatisfactory rendering quality. To tackle these problems, we propose Grid4D, a dynamic scene rendering model based on Gaussian splatting and employing a novel explicit encoding method for the 4D input through the hash encoding. Different from plane-based explicit representations, we decompose the 4D encoding into one spatial and three temporal 3D hash encodings without the low-rank assumption. Additionally, we design a novel attention module that generates the attention scores in a directional range to aggregate the spatial and temporal features. The directional attention enables Grid4D to more accurately fit the diverse deformations across distinct scene components based on the spatial encoded features. Moreover, to mitigate the inherent lack of smoothness in explicit representation methods, we introduce a smooth regularization term that keeps our model from the chaos of deformation prediction. Our experiments demonstrate that Grid4D significantly outperforms the state-of-the-art models in visual quality and rendering speed. | Grid4D: 4D Decomposed Hash Encoding for High-fidelity Dynamic Gaussian Splatting | [
"Jiawei Xu",
"Zexin Fan",
"Jian Yang",
"Jin Xie"
] | NeurIPS.cc/2024/Conference | 2410.20815 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
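One way to read the encoding split: the 4D sample $(x, y, z, t)$ is routed to one spatial and three temporal hash grids. The exact grouping below is our inference from the abstract, not a confirmed detail:

```python
def decompose_4d(x: float, y: float, z: float, t: float):
    """Split a 4D coordinate into one spatial 3D group and three
    temporal 3D groups, each destined for its own hash encoding
    (grouping inferred from the abstract)."""
    spatial = (x, y, z)
    temporal = [(x, y, t), (x, z, t), (y, z, t)]
    return spatial, temporal
```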
null | https://openreview.net/forum?id=exATQD4HSv | @inproceedings{
volkmann2024a,
title={A scalable generative model for dynamical system reconstruction from neuroimaging data},
author={Eric Volkmann and Alena Br{\"a}ndle and Daniel Durstewitz and Georgia Koppe},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=exATQD4HSv}
} | Data-driven inference of the generative dynamics underlying a set of observed time series is of growing interest in machine learning and the natural sciences. In neuroscience, such methods promise to alleviate the need to handcraft models based on biophysical principles and allow the inference of inter-individual differences in brain dynamics to be automated.
Recent breakthroughs in training techniques for state space models (SSMs) specifically geared toward dynamical systems (DS) reconstruction (DSR) enable recovery of the underlying system, including its geometrical (attractor) and long-term statistical invariants, from even short time series. These techniques are based on control-theoretic ideas, like modern variants of teacher forcing (TF), to ensure stable loss gradient propagation while training.
However, as it currently stands, these techniques are not directly applicable to data modalities where current observations depend on an entire history of previous states due to a signal’s filtering properties, as common in neuroscience (and physiology more generally).
Prominent examples are the blood oxygenation level dependent (BOLD) signal in functional magnetic resonance imaging (fMRI) or Ca$^{2+}$ imaging data.
Such types of signals render the SSM's decoder model non-invertible, a requirement for previous TF-based methods.
Here, exploiting the recent success of control techniques for training SSMs, we propose a novel algorithm that solves this problem and scales exceptionally well with model dimensionality and filter length. We demonstrate its efficiency in reconstructing dynamical systems, including their state space geometry and long-term temporal properties, from just short BOLD time series. | A scalable generative model for dynamical system reconstruction from neuroimaging data | [
"Eric Volkmann",
"Alena Brändle",
"Daniel Durstewitz",
"Georgia Koppe"
] | NeurIPS.cc/2024/Conference | 2411.02949 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
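The filtering property that breaks invertibility can be stated as a linear convolution observation model (a standard description of, e.g., BOLD dynamics): for a hemodynamic filter $h$ of length $L$,

$$
y_t = \sum_{\tau = 0}^{L-1} h_{\tau}\, x_{t - \tau},
$$

so the observation $y_t$ mixes an entire window of latent states and cannot be inverted from $x_t$ alone, which is the non-invertibility that defeats earlier TF-based training.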
null | https://openreview.net/forum?id=evP9mxNNxJ | @inproceedings{
chen2024are,
title={Are We on the Right Way for Evaluating Large Vision-Language Models?},
author={Lin Chen and Jinsong Li and Xiaoyi Dong and Pan Zhang and Yuhang Zang and Zehui Chen and Haodong Duan and Jiaqi Wang and Yu Qiao and Dahua Lin and Feng Zhao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=evP9mxNNxJ}
} | Large vision-language models (LVLMs) have recently achieved rapid progress, sparking numerous studies to evaluate their multi-modal capabilities. However, digging into current evaluation works, we identify two primary issues: 1) Visual content is unnecessary for many samples. The answers can be directly inferred from the questions and options, or from the world knowledge embedded in LLMs. This phenomenon is prevalent across current benchmarks. For instance, GeminiPro achieves 42.7% on the MMMU benchmark without any visual input, and outperforms the random-choice baseline across six benchmarks by nearly 24% on average. 2) Unintentional data leakage exists in LLM and LVLM training. LLMs and LVLMs can still answer some visual-necessary questions without visual content, indicating memorization of these samples in the large-scale training data. For example, Sphinx-X-MoE gets 43.6% on MMMU without accessing images, surpassing its LLM backbone by 17.9%. Both problems lead to misjudgments of actual multi-modal gains and potentially misguide the study of LVLMs. To this end, we present MMStar, an elite vision-indispensable multi-modal benchmark comprising 1,500 samples meticulously selected by humans. MMStar benchmarks 6 core capabilities and 18 detailed axes, aiming to evaluate LVLMs' multi-modal capacities with carefully balanced and purified samples. These samples are first roughly selected from current benchmarks with an automated pipeline; human review is then involved to ensure that each curated sample exhibits visual dependency and minimal data leakage, and requires advanced multi-modal capabilities. Moreover, two metrics are developed to measure data leakage and actual performance gain in multi-modal training. We evaluate 16 leading LVLMs on MMStar to assess their multi-modal capabilities, and on 7 benchmarks with the proposed metrics to investigate their data leakage and actual multi-modal gain. | Are We on the Right Way for Evaluating Large Vision-Language Models? | [
"Lin Chen",
"Jinsong Li",
"Xiaoyi Dong",
"Pan Zhang",
"Yuhang Zang",
"Zehui Chen",
"Haodong Duan",
"Jiaqi Wang",
"Yu Qiao",
"Dahua Lin",
"Feng Zhao"
] | NeurIPS.cc/2024/Conference | 2403.20330 | [
"https://github.com/MMStar-Benchmark/MMStar"
] | https://huggingface.co/papers/2403.20330 | 5 | 6 | 0 | 11 | [] | [
"Lin-Chen/MMStar"
] | [] | [] | [
"Lin-Chen/MMStar"
] | [] | 1 | poster |
null | https://openreview.net/forum?id=euQ0C4iS7O | @inproceedings{
yang2024leveraging,
title={Leveraging Drift to Improve Sample Complexity of Variance Exploding Diffusion Models},
author={Ruofeng Yang and Zhijie Wang and Bo Jiang and Shuai Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=euQ0C4iS7O}
} | Variance exploding (VE) based diffusion models, an important class of diffusion models, have shown state-of-the-art (SOTA) performance. However, only a few theoretical works analyze VE-based models, and those works suffer from a worse forward convergence rate $1/\text{poly}(T)$ than the $\exp{(-T)}$ of variance preserving (VP) based models, where $T$ is the forward diffusion time and the rate measures the distance between the forward marginal distribution $q_T$ and pure Gaussian noise. The slow rate is due to the Brownian motion lacking a drift term. In this work, we design a new drifted VESDE forward process, which allows a faster $\exp{(-T)}$ forward convergence rate. With this process, we achieve the first efficient polynomial sample complexity for a series of VE-based models with reverse SDE under the manifold hypothesis. Furthermore, unlike previous works, we allow the diffusion coefficient to be unbounded instead of a constant, which is closer to the SOTA models. Besides the reverse SDE, the other common reverse process is the probability flow ODE (PFODE) process, which is deterministic and enjoys faster sampling speed. To deepen the understanding of VE-based models, we consider a more general setting that treats the reverse SDE and PFODE simultaneously, propose a unified tangent-based analysis framework, and prove the first quantitative convergence guarantee for SOTA VE-based models with reverse PFODE.
We also show that the drifted VESDE can balance different error terms and improve generated samples without training through synthetic and real-world experiments. | Leveraging Drift to Improve Sample Complexity of Variance Exploding Diffusion Models | [
"Ruofeng Yang",
"Zhijie Wang",
"Bo Jiang",
"Shuai Li"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=etPAH4xSUn | @inproceedings{
gupta2024incontext,
title={In-Context Symmetries: Self-Supervised Learning through Contextual World Models},
author={Sharut Gupta and Chenyu Wang and Yifei Wang and Tommi Jaakkola and Stefanie Jegelka},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=etPAH4xSUn}
} | At the core of self-supervised learning for vision is the idea of learning invariant or equivariant representations with respect to a set of data transformations. This approach, however, introduces strong inductive biases, which can render the representations fragile in downstream tasks that do not conform to these symmetries. In this work, drawing insights from world models, we propose to instead learn a general representation that can adapt to be invariant or equivariant to different transformations by paying attention to context --- a memory module that tracks task-specific states, actions and future states. Here, the action is the transformation, while the current and future states respectively represent the input's representation before and after the transformation. Our proposed algorithm, Contextual Self Supervised Learning (ContextSSL), learns equivariance to all transformations (as opposed to invariance). In this way, the model can learn to encode all relevant features as general representations while having the versatility to tail down to task-wise symmetries when given a few examples as the context. Empirically, we demonstrate significant performance gains over existing methods on equivariance-related tasks, supported by both qualitative and quantitative evaluations. | In-Context Symmetries: Self-Supervised Learning through Contextual World Models | [
"Sharut Gupta",
"Chenyu Wang",
"Yifei Wang",
"Tommi Jaakkola",
"Stefanie Jegelka"
] | NeurIPS.cc/2024/Conference | 2405.18193 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=esVleaqkRc | @inproceedings{
dumouchelle2024neurbilo,
title={Neur2Bi{LO}: Neural Bilevel Optimization},
author={Justin Dumouchelle and Esther Julien and Jannis Kurtz and Elias Boutros Khalil},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=esVleaqkRc}
} | Bilevel optimization deals with nested problems in which a *leader* takes the first decision to minimize their objective function while accounting for a *follower*'s best-response reaction. Constrained bilevel problems with integer variables are particularly notorious for their hardness. While exact solvers have been proposed for mixed-integer *linear* bilevel optimization, they tend to scale poorly with problem size and are hard to generalize to the non-linear case. On the other hand, problem-specific algorithms (exact and heuristic) are limited in scope. Under a data-driven setting in which similar instances of a bilevel problem are solved routinely, our proposed framework, Neur2BiLO, embeds a neural network approximation of the leader's or follower's value function, trained via supervised regression, into an easy-to-solve mixed-integer program. Neur2BiLO serves as a heuristic that produces high-quality solutions extremely fast for four applications with linear and non-linear objectives and pure and mixed-integer variables. | Neur2BiLO: Neural Bilevel Optimization | [
"Justin Dumouchelle",
"Esther Julien",
"Jannis Kurtz",
"Elias Boutros Khalil"
] | NeurIPS.cc/2024/Conference | 2402.02552 | [
"https://github.com/khalil-research/neur2bilo"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=esTPCUJZhe | @inproceedings{
elenter2024overcoming,
title={Overcoming Brittleness in Pareto-Optimal Learning Augmented Algorithms},
author={Alex Elenter and Spyros Angelopoulos and Christoph D{\"u}rr and Yanni LEFKI},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=esTPCUJZhe}
The study of online algorithms with machine-learned predictions has gained considerable prominence in recent years. One of the common objectives in the design and analysis of such algorithms is to attain (Pareto) optimal tradeoffs between the {\em consistency} of the algorithm, i.e., its performance assuming perfect predictions, and its {\em robustness}, i.e., the performance of the algorithm under adversarial predictions. In this work, we demonstrate that this optimization criterion can be extremely brittle, in that the performance of Pareto-optimal algorithms may degrade dramatically even in the presence of imperceptible prediction error. To remedy this drawback, we propose a new framework in which the smoothness in the performance of the algorithm is enforced by means of a {\em user-specified profile}. This allows us to regulate the performance of the algorithm as a function of the prediction error, while simultaneously maintaining the analytical notion of consistency/robustness tradeoffs, adapted to the profile setting. We apply this new approach to a well-studied online problem, namely the {\em one-way trading} problem. For this problem, we further address another limitation of the state-of-the-art Pareto-optimal algorithms, namely the fact that they are tailored to worst-case, extremely pessimistic inputs. We propose a new Pareto-optimal algorithm that leverages any deviation from the worst-case input to its benefit, and introduce a new metric that allows us to compare any two Pareto-optimal algorithms via a {\em dominance} relation. | Overcoming Brittleness in Pareto-Optimal Learning Augmented Algorithms | [
"Alex Elenter",
"Spyros Angelopoulos",
"Christoph Dürr",
"Yanni LEFKI"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=esDvZi2Cf3 | @inproceedings{
liu2024a,
title={A Simple Image Segmentation Framework via In-Context Examples},
author={Yang Liu and Chenchen Jing and Hengtao Li and Muzhi Zhu and Hao Chen and Xinlong Wang and Chunhua Shen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=esDvZi2Cf3}
} | Recently, there have been explorations of generalist segmentation models that can effectively tackle a variety of image segmentation tasks within a unified in-context learning framework. However, these methods still struggle with task ambiguity in in-context segmentation, as not all in-context examples can accurately convey the task information. In order to address this issue, we present SINE, a simple image $\textbf{S}$egmentation framework utilizing $\textbf{in}$-context $\textbf{e}$xamples. Our approach leverages a Transformer encoder-decoder structure, where the encoder provides high-quality image representations, and the decoder is designed to yield multiple task-specific output masks to eliminate task ambiguity effectively. Specifically, we introduce an In-context Interaction module, which complements in-context information and produces correlations between the target image and the in-context example, and a Matching Transformer, which uses fixed matching and the Hungarian algorithm to eliminate differences between tasks. In addition, we further refine the current evaluation system for in-context image segmentation, aiming to facilitate a holistic appraisal of these models. Experiments on various segmentation tasks show the effectiveness of the proposed method. | A Simple Image Segmentation Framework via In-Context Examples | [
"Yang Liu",
"Chenchen Jing",
"Hengtao Li",
"Muzhi Zhu",
"Hao Chen",
"Xinlong Wang",
"Chunhua Shen"
] | NeurIPS.cc/2024/Conference | 2410.04842 | [
"https://github.com/aim-uofa/sine"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=erwatqQ4p8 | @inproceedings{
le2024mixture,
title={Mixture of Experts Meets Prompt-Based Continual Learning},
author={Minh Le and An Nguyen The and Huy Nguyen and Thien Trang Nguyen Vu and Huyen Trang Pham and Linh Van Ngo and Nhat Ho},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=erwatqQ4p8}
} | Exploiting the power of pre-trained models, prompt-based approaches stand out compared to other continual learning solutions in effectively preventing catastrophic forgetting, even with very few learnable parameters and without the need for a memory buffer. While existing prompt-based continual learning methods excel in leveraging prompts for state-of-the-art performance, they often lack a theoretical explanation for the effectiveness of prompting. This paper conducts a theoretical analysis to unravel how prompts bestow such advantages in continual learning, thus offering a new perspective on prompt design. We first show that the attention block of pre-trained models like Vision Transformers inherently encodes a special mixture of experts architecture, characterized by linear experts and quadratic gating score functions. This realization drives us to provide a novel view on prefix tuning, reframing it as the addition of new task-specific experts, thereby inspiring the design of a novel gating mechanism termed Non-linear Residual Gates (NoRGa). Through the incorporation of non-linear activation and residual connection, NoRGa enhances continual learning performance while preserving parameter efficiency. The effectiveness of NoRGa is substantiated both theoretically and empirically across diverse benchmarks and pretraining paradigms. | Mixture of Experts Meets Prompt-Based Continual Learning | [
"Minh Le",
"An Nguyen The",
"Huy Nguyen",
"Thien Trang Nguyen Vu",
"Huyen Trang Pham",
"Linh Van Ngo",
"Nhat Ho"
] | NeurIPS.cc/2024/Conference | 2405.14124 | [
"https://github.com/minhchuyentoancbn/moe_promptcl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=erjQDJ0z9L | @inproceedings{
lu2024discovering,
title={Discovering Preference Optimization Algorithms with and for Large Language Models},
author={Chris Lu and Samuel Holt and Claudio Fanconi and Alex James Chan and Jakob Nicolaus Foerster and Mihaela van der Schaar and Robert Tjarko Lange},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=erjQDJ0z9L}
} | Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
Typically, preference optimization is approached as an offline supervised learning task using manually crafted convex loss functions. While these methods are based on theoretical insights, they are inherently constrained by human creativity, so the large search space of possible loss functions remains under-explored. We address this by performing LLM-driven *objective discovery* to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention. Specifically, we iteratively prompt an LLM to propose and implement new preference optimization loss functions based on previously evaluated performance metrics. This process leads to the discovery of previously unknown and performant preference optimization algorithms. The best performing of these we call *Discovered Preference Optimization* (DiscoPOP), a novel algorithm that adaptively blends logistic and exponential losses. Experiments demonstrate the state-of-the-art performance of DiscoPOP and its successful transfer to held-out tasks. | Discovering Preference Optimization Algorithms with and for Large Language Models | [
"Chris Lu",
"Samuel Holt",
"Claudio Fanconi",
"Alex James Chan",
"Jakob Nicolaus Foerster",
"Mihaela van der Schaar",
"Robert Tjarko Lange"
] | NeurIPS.cc/2024/Conference | 2406.08414 | [
"https://github.com/samholt/DiscoPOP"
] | https://huggingface.co/papers/2406.08414 | 4 | 13 | 0 | 7 | [
"SakanaAI/DiscoPOP-zephyr-7b-gemma",
"QuantFactory/DiscoPOP-zephyr-7b-gemma-GGUF"
] | [] | [
"eduagarcia/open_pt_llm_leaderboard"
] | [
"SakanaAI/DiscoPOP-zephyr-7b-gemma",
"QuantFactory/DiscoPOP-zephyr-7b-gemma-GGUF"
] | [] | [
"eduagarcia/open_pt_llm_leaderboard"
] | 1 | poster |
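A hedged sketch of what "adaptively blends logistic and exponential losses" could look like on the policy/reference log-ratio difference $\rho$; the sigmoid gate and temperature are our assumptions, so consult the paper and the linked repository for the actual discovered form:

```python
import torch
import torch.nn.functional as F

def blended_preference_loss(rho: torch.Tensor, beta: float = 0.05,
                            tau: float = 1.0) -> torch.Tensor:
    """Blend of a DPO-style logistic loss and an exponential loss on the
    log-ratio difference `rho`; the gating below is illustrative only."""
    logistic = -F.logsigmoid(beta * rho)   # logistic (DPO) term
    exponential = torch.exp(-beta * rho)   # exponential term
    gate = torch.sigmoid(rho / tau)        # adaptive blending weight
    return (gate * logistic + (1.0 - gate) * exponential).mean()
```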
null | https://openreview.net/forum?id=erQDc72vyi | @inproceedings{
fu2024frozendetr,
title={Frozen-{DETR}: Enhancing {DETR} with Image Understanding from Frozen Foundation Models},
author={Shenghao Fu and Junkai Yan and Qize Yang and Xihan Wei and Xiaohua Xie and Wei-Shi Zheng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=erQDc72vyi}
} | Recent vision foundation models can extract universal representations and show impressive abilities in various tasks. However, their application to object detection is largely overlooked, especially without fine-tuning them. In this work, we show that frozen foundation models can be a versatile feature enhancer, even though they are not pre-trained for object detection. Specifically, we explore directly transferring the high-level image understanding of foundation models to detectors in the following two ways. First, the class token in foundation models provides an in-depth understanding of the complex scene, which facilitates decoding object queries in the detector's decoder by providing a compact context. Additionally, the patch tokens in foundation models can enrich the features in the detector's encoder by providing semantic details. Utilizing frozen foundation models as plug-and-play modules rather than the commonly used backbone can significantly enhance the detector's performance while preventing the problems caused by the architecture discrepancy between the detector's backbone and the foundation model. With such a novel paradigm, we boost the SOTA query-based detector DINO from 49.0% AP to 51.9% AP (+2.9% AP) and further to 53.8% AP (+4.8% AP) by integrating one or two foundation models, respectively, on the COCO validation set after training for 12 epochs with R50 as the detector's backbone. Code will be available. | Frozen-DETR: Enhancing DETR with Image Understanding from Frozen Foundation Models | [
"Shenghao Fu",
"Junkai Yan",
"Qize Yang",
"Xihan Wei",
"Xiaohua Xie",
"Wei-Shi Zheng"
] | NeurIPS.cc/2024/Conference | 2410.19635 | [
""
] | https://huggingface.co/papers/2410.19635 | 0 | 0 | 0 | 6 | [
"fushh7/Frozen-DETR"
] | [] | [] | [
"fushh7/Frozen-DETR"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=eqMNwXvOqn | @inproceedings{
guo2024mkgl,
title={{MKGL}: Mastery of a Three-Word Language},
author={Lingbing Guo and Zhongpu Bo and Zhuo Chen and Yichi Zhang and Jiaoyan Chen and Lan Yarong and Mengshu Sun and Zhiqiang Zhang and Yangyifei Luo and Qian Li and Qiang Zhang and Wen Zhang and Huajun Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eqMNwXvOqn}
} | Large language models (LLMs) have significantly advanced performance across a spectrum of natural language processing (NLP) tasks. Yet, their application to knowledge graphs (KGs), which describe facts in the form of triplets and allow minimal hallucinations, remains an underexplored frontier. In this paper, we investigate the integration of LLMs with KGs by introducing a specialized KG Language (KGL), where a sentence precisely consists of an entity noun, a relation verb, and ends with another entity noun. Despite KGL's unfamiliar vocabulary to the LLM, we facilitate its learning through a tailored dictionary and illustrative sentences, and enhance context understanding via real-time KG context retrieval and KGL token embedding augmentation. Our results reveal that LLMs can achieve fluency in KGL, drastically reducing errors compared to conventional KG embedding methods on KG completion. Furthermore, our enhanced LLM shows exceptional competence in generating accurate three-word sentences from an initial entity and interpreting new unseen terms out of KGs. | MKGL: Mastery of a Three-Word Language | [
"Lingbing Guo",
"Zhongpu Bo",
"Zhuo Chen",
"Yichi Zhang",
"Jiaoyan Chen",
"Lan Yarong",
"Mengshu Sun",
"Zhiqiang Zhang",
"Yangyifei Luo",
"Qian Li",
"Qiang Zhang",
"Wen Zhang",
"Huajun Chen"
] | NeurIPS.cc/2024/Conference | 2410.07526 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=eowkjKVPoH | @inproceedings{
su2024mission,
title={Mission Impossible: A Statistical Perspective on Jailbreaking {LLM}s},
author={Jingtong Su and Julia Kempe and Karen Ullrich},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eowkjKVPoH}
} | Large language models (LLMs) are trained on a deluge of text data with limited quality control. As a result, LLMs can exhibit unintended or even harmful behaviours, such as leaking information, fake news, or hate speech. Countermeasures, commonly referred to as preference alignment, include fine-tuning the pretrained LLMs with carefully crafted text examples of desired behaviour. Even then, empirical evidence shows that preference-aligned LLMs can be enticed to harmful behaviour. This so-called jailbreaking of LLMs is typically achieved by adversarially modifying the input prompt to the LLM. Our paper provides theoretical insights into the phenomenon of preference alignment and jailbreaking from a statistical perspective. Under our framework, we first show that pretrained LLMs will mimic harmful behaviour if present in the training corpus. \textbf{Under that same framework, we then introduce a statistical notion of alignment, and lower-bound the jailbreaking probability, showing that it is unpreventable under reasonable assumptions.} Based on our insights, we propose an alteration to the currently prevalent alignment strategy RLHF. Specifically, we introduce a simple modification to the RLHF objective, which we call \emph{E-RLHF}, that aims to increase the likelihood of safe responses. \emph{E-RLHF} brings no additional training cost, and is compatible with other methods. Empirically, we demonstrate that \emph{E-RLHF} outperforms RLHF on all alignment problems put forward by the AdvBench \citep{zou2023universal} and HarmBench project \citep{mazeika2024harmbench} without sacrificing model performance as measured by the MT-Bench project \citep{zheng2024judging}. | Mission Impossible: A Statistical Perspective on Jailbreaking LLMs | [
"Jingtong Su",
"Julia Kempe",
"Karen Ullrich"
] | NeurIPS.cc/2024/Conference | 2408.01420 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=enlxHLwwFf | @inproceedings{
petrulionyt{\.{e}}2024functional,
title={Functional Bilevel Optimization for Machine Learning},
author={Ieva Petrulionyt{\.{e}} and Julien Mairal and Michael Arbel},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=enlxHLwwFf}
} | In this paper, we introduce a new functional point of view on bilevel optimization problems for machine learning, where the inner objective is minimized over a function space. These types of problems are most often solved by using methods developed in the parametric setting, where the inner objective is strongly convex with respect to the parameters of the prediction function. The functional point of view does not rely on this assumption and notably allows using over-parameterized neural networks as the inner prediction function. We propose scalable and efficient algorithms for the functional bilevel optimization problem and illustrate the benefits of our approach on instrumental regression and reinforcement learning tasks. | Functional Bilevel Optimization for Machine Learning | [
"Ieva Petrulionytė",
"Julien Mairal",
"Michael Arbel"
] | NeurIPS.cc/2024/Conference | 2403.20233 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
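The functional point of view replaces the parametric inner problem with a minimization over a function space $\mathcal{H}$ (notation ours):

$$
\min_{\theta} \; F\big(\theta, h^{\star}_{\theta}\big) \quad \text{s.t.} \quad h^{\star}_{\theta} \in \arg\min_{h \in \mathcal{H}} G(\theta, h),
$$

so strong convexity is needed only in $h$ over $\mathcal{H}$, not in the parameters of whatever (possibly over-parameterized) network realizes $h$.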
null | https://openreview.net/forum?id=ektPEcqGLb | @inproceedings{
vafaii2024poisson,
title={Poisson Variational Autoencoder},
author={Hadi Vafaii and Dekel Galor and Jacob L. Yates},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ektPEcqGLb}
} | Variational autoencoders (VAE) employ Bayesian inference to interpret sensory inputs, mirroring processes that occur in primate vision across both ventral (Higgins et al., 2021) and dorsal (Vafaii et al., 2023) pathways. Despite their success, traditional VAEs rely on continuous latent variables, which significantly deviates from the discrete nature of biological neurons. Here, we developed the Poisson VAE (P-VAE), a novel architecture that combines principles of predictive coding with a VAE that encodes inputs into discrete spike counts. Combining Poisson-distributed latent variables with predictive coding introduces a metabolic cost term in the model loss function, suggesting a relationship with sparse coding which we verify empirically. Additionally, we analyze the geometry of learned representations, contrasting the P-VAE to alternative VAE models. We find that the P-VAE encodes its inputs in relatively higher dimensions, facilitating linear separability of categories in a downstream classification task with a much better (5x) sample efficiency. Our work provides an interpretable computational framework to study brain-like sensory processing and paves the way for a deeper understanding of perception as an inferential process. | Poisson Variational Autoencoder | [
"Hadi Vafaii",
"Dekel Galor",
"Jacob L. Yates"
] | NeurIPS.cc/2024/Conference | 2405.14473 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
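One concrete consequence of Poisson latents: the ELBO's KL term has a closed form. For posterior rate $\lambda$ and prior rate $\lambda_0$ (a standard identity; its role as the paper's metabolic cost term is our reading),

$$
\mathrm{KL}\big(\mathrm{Pois}(\lambda) \,\|\, \mathrm{Pois}(\lambda_0)\big) = \lambda \log\frac{\lambda}{\lambda_0} - \lambda + \lambda_0,
$$

which grows with the firing rate and thus penalizes metabolically expensive codes, consistent with the sparse-coding connection noted above.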
null | https://openreview.net/forum?id=ekK26cW5TB | @inproceedings{
han2024aucseg,
title={{AUCS}eg: {AUC}-oriented Pixel-level Long-tail Semantic Segmentation},
author={Boyu Han and Qianqian Xu and Zhiyong Yang and Shilong Bao and Peisong Wen and Yangbangyan Jiang and Qingming Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ekK26cW5TB}
} | The Area Under the ROC Curve (AUC) is a well-known metric for evaluating instance-level long-tail learning problems. In the past two decades, many AUC optimization methods have been proposed to improve model performance under long-tail distributions. In this paper, we explore AUC optimization methods in the context of pixel-level long-tail semantic segmentation, a much more complicated scenario. This task introduces two major challenges for AUC optimization techniques. On one hand, AUC optimization in a pixel-level task involves complex coupling across loss terms, with structured inner-image and pairwise inter-image dependencies, complicating theoretical analysis. On the other hand, we find that mini-batch estimation of AUC loss in this case requires a larger batch size, resulting in an unaffordable space complexity. To address these issues, we develop a pixel-level AUC loss function and conduct a dependency-graph-based theoretical analysis of the algorithm's generalization ability. Additionally, we design a Tail-Classes Memory Bank (T-Memory Bank) to manage the significant memory demand. Finally, comprehensive experiments across various benchmarks confirm the effectiveness of our proposed AUCSeg method. The code is available at https://github.com/boyuh/AUCSeg. | AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation | [
"Boyu Han",
"Qianqian Xu",
"Zhiyong Yang",
"Shilong Bao",
"Peisong Wen",
"Yangbangyan Jiang",
"Qingming Huang"
] | NeurIPS.cc/2024/Conference | 2409.20398 | [
"https://github.com/boyuh/aucseg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
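The AUCSeg abstract above optimizes a pixel-level AUC loss over positive/negative pixel pairs. For intuition only, here is a generic squared-hinge surrogate of 1 - AUC over two score sets; this is not the paper's exact pixel-level loss and omits its dependency-graph analysis and T-Memory Bank.

```python
import torch

def pairwise_auc_surrogate(pos_scores, neg_scores, margin=1.0):
    # Generic AUC surrogate: penalize every (positive, negative) score
    # pair whose gap falls below `margin`. pos_scores: (P,), neg_scores: (N,).
    diff = pos_scores.unsqueeze(1) - neg_scores.unsqueeze(0)  # all P x N pairs
    return torch.clamp(margin - diff, min=0).pow(2).mean()
```

The pairwise structure is exactly what makes mini-batch estimation memory-hungry for rare (tail) classes, which is the problem the paper's memory bank addresses.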
null | https://openreview.net/forum?id=ejWvCpLuwu | @inproceedings{
zhang2024regexplainer,
title={RegExplainer: Generating Explanations for Graph Neural Networks in Regression Tasks},
author={Jiaxing Zhang and Zhuomin Chen and hao mei and Longchao Da and Dongsheng Luo and Hua Wei},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ejWvCpLuwu}
} | Graph regression is a fundamental task that has gained significant attention in
many graph learning applications. However, the inference process is often not easily
interpretable. Current explanation techniques are limited to understanding Graph
Neural Network (GNN) behaviors in classification tasks, leaving an explanation gap
for graph regression models. In this work, we propose a novel explanation method
to interpret graph regression models (XAIG-R). Our method addresses the
distribution shift problem and the continuously ordered decision boundary issue
that prevent existing methods from being applied to regression tasks. We
introduce a novel objective based on the graph information bottleneck theory (GIB)
and a new mix-up framework, which can support various GNNs and explainers
in a model-agnostic manner. Additionally, we present a self-supervised learning
strategy to tackle the continuously ordered labels in regression tasks. We evaluate
our proposed method on three benchmark datasets and a real-life dataset introduced
by us, and extensive experiments demonstrate its effectiveness in interpreting GNN
models in regression tasks. | RegExplainer: Generating Explanations for Graph Neural Networks in Regression Tasks | [
"Jiaxing Zhang",
"Zhuomin Chen",
"hao mei",
"Longchao Da",
"Dongsheng Luo",
"Hua Wei"
] | NeurIPS.cc/2024/Conference | 2307.07840 | [
"https://github.com/jz48/regexplainer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ejIzdt50ek | @inproceedings{
li2024stochastic,
title={Stochastic Optimization Schemes for Performative Prediction with Nonconvex Loss},
author={Qiang LI and Hoi To Wai},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ejIzdt50ek}
} | This paper studies a risk minimization problem with a decision-dependent data distribution. The problem pertains to the performative prediction setting in which a trained model can affect the outcome estimated by the model. Such dependency creates a feedback loop that influences the stability of optimization algorithms such as stochastic gradient descent (SGD). We present the first study on performative prediction with smooth but possibly non-convex loss. We analyze a greedy deployment scheme with SGD (SGD-GD). Note that in the literature, SGD-GD is often studied with strongly convex loss. We first propose the definition of stationary performative stable (SPS) solutions by relaxing the popular performative stable condition. We then prove that SGD-GD converges to a biased SPS solution in expectation. We consider two conditions of sensitivity on the distribution shifts: (i) the sensitivity is characterized by Wasserstein-1 distance and the loss is Lipschitz w.r.t. data samples, or (ii) the sensitivity is characterized by total variation (TV) divergence and the loss is bounded. In both conditions, the bias levels are proportional to the stochastic gradient's variance and the sensitivity level.
Our analysis is extended to a lazy deployment scheme where models are deployed once per several SGD updates, and we show that it converges to a bias-free SPS solution. Numerical experiments corroborate our theories. | Stochastic Optimization Schemes for Performative Prediction with Nonconvex Loss | [
"Qiang LI",
"Hoi To Wai"
] | NeurIPS.cc/2024/Conference | 2405.17922 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
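The greedy deployment scheme (SGD-GD) in the record above is simple enough to sketch. In this illustrative loop (assumptions: `sample_dist` and `grad_loss` are hypothetical callables supplied by the user), each stochastic gradient is computed on data drawn from the distribution induced by the currently deployed model, which is what creates the feedback loop the paper analyzes.

```python
import numpy as np

def sgd_greedy_deploy(theta0, sample_dist, grad_loss, steps=1000, lr=1e-2):
    # SGD with greedy deployment: every sample reacts to the model that is
    # deployed at that moment (decision-dependent data distribution).
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        z = sample_dist(theta)             # data drawn under deployed model
        theta -= lr * grad_loss(theta, z)  # plain SGD step
    return theta
```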
null | https://openreview.net/forum?id=ehfCxpDsrw | @inproceedings{
deng2024linnet,
title={LinNet: Linear Network for Efficient Point Cloud Representation Learning},
author={Hao Deng and Kunlei Jing and Shengmei Chen and Cheng Liu and Jiawei Ru and Bo Jiang and Lin Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ehfCxpDsrw}
} | Point-based methods have made significant progress, but improving their scalability in large-scale 3D scenes is still a challenging problem. In this paper, we delve into point-based methods and develop a simpler, faster, stronger variant model, dubbed LinNet. In particular, we first propose the disassembled set abstraction (DSA) module, which is more effective than the previous version of set abstraction. It achieves more efficient local aggregation by leveraging spatial anisotropy and channel anisotropy separately. Additionally, by mapping 3D point clouds onto 1D space-filling curves, we enable parallelization of downsampling and neighborhood queries on GPUs with linear complexity.
LinNet, as a purely point-based method, outperforms most previous methods in both indoor and outdoor scenes without any extra attention or sparse convolution, relying merely on simple MLPs. It achieves mIoU of 73.7\%, 81.4\%, and 69.1\% on the S3DIS Area5, NuScenes, and SemanticKITTI validation benchmarks, respectively, while running almost 10x faster than PointNeXt. Our work further reveals both the efficacy and efficiency potential of vanilla point-based models in large-scale representation learning. Our code will be available upon publication. | LinNet: Linear Network for Efficient Point Cloud Representation Learning | [
"Hao Deng",
"Kunlei Jing",
"Shengmei Chen",
"Cheng Liu",
"Jiawei Ru",
"Bo Jiang",
"Lin Wang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=eezCLKwx6T | @inproceedings{
chung2024adversarial,
title={Adversarial Environment Design via Regret-Guided Diffusion Models},
author={Hojun Chung and Junseo Lee and Minsoo Kim and Dohyeong Kim and Songhwai Oh},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eezCLKwx6T}
} | Training agents that are robust to environmental changes remains a significant challenge in deep reinforcement learning (RL). Unsupervised environment design (UED) has recently emerged to address this issue by generating a set of training environments tailored to the agent's capabilities. While prior works demonstrate that UED has the potential to learn a robust policy, their performance is constrained by the capabilities of the environment generation. To this end, we propose a novel UED algorithm, adversarial environment design via regret-guided diffusion models (ADD). The proposed method guides the diffusion-based environment generator with the regret of the agent to produce environments that the agent finds challenging but conducive to further improvement. By exploiting the representation power of diffusion models, ADD can directly generate adversarial environments while maintaining the diversity of training environments, enabling the agent to effectively learn a robust policy. Our experimental results demonstrate that the proposed method successfully generates an instructive curriculum of environments, outperforming UED baselines in zero-shot generalization across novel, out-of-distribution environments. | Adversarial Environment Design via Regret-Guided Diffusion Models | [
"Hojun Chung",
"Junseo Lee",
"Minsoo Kim",
"Dohyeong Kim",
"Songhwai Oh"
] | NeurIPS.cc/2024/Conference | 2410.19715 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=eddHTvb5eM | @inproceedings{
scarvelis2024nuclear,
title={Nuclear Norm Regularization for Deep Learning},
author={Christopher Scarvelis and Justin Solomon},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eddHTvb5eM}
} | Penalizing the nuclear norm of a function's Jacobian encourages it to locally behave like a low-rank linear map. Such functions vary locally along only a handful of directions, making the Jacobian nuclear norm a natural regularizer for machine learning problems. However, this regularizer is intractable for high-dimensional problems, as it requires computing a large Jacobian matrix and taking its SVD. We show how to efficiently penalize the Jacobian nuclear norm using techniques tailor-made for deep learning. We prove that for functions parametrized as compositions $f = g \circ h$, one may equivalently penalize the average squared Frobenius norm of $Jg$ and $Jh$. We then propose a denoising-style approximation that avoids the Jacobian computations altogether. Our method is simple, efficient, and accurate, enabling Jacobian nuclear norm regularization to scale to high-dimensional deep learning problems. We complement our theory with an empirical study of our regularizer's performance and investigate applications to denoising and representation learning. | Nuclear Norm Regularization for Deep Learning | [
"Christopher Scarvelis",
"Justin Solomon"
] | NeurIPS.cc/2024/Conference | 2405.14544 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
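The nuclear-norm record above states a concrete equivalence: for f = g ∘ h, one may penalize the average squared Frobenius norms of Jg and Jh instead of the Jacobian nuclear norm of f. A minimal sketch of that penalty follows, using exact Jacobians on a single unbatched input for clarity; the paper itself uses a cheaper denoising-style estimator that avoids Jacobians entirely.

```python
import torch
from torch.autograd.functional import jacobian

def composed_frobenius_penalty(h, g, x):
    # Penalty 0.5 * (||Jh||_F^2 + ||Jg||_F^2) for f = g o h, evaluated
    # at a single input x. Exact Jacobians are fine in low dimensions;
    # they are what the paper's estimator is designed to sidestep.
    z = h(x)
    Jh = jacobian(h, x)   # shape: (h-output dims, input dims)
    Jg = jacobian(g, z)   # shape: (g-output dims, h-output dims)
    return 0.5 * (Jh.pow(2).sum() + Jg.pow(2).sum())
```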
null | https://openreview.net/forum?id=edOZifvwMi | @inproceedings{
zhang2024cryogem,
title={Cryo{GEM}: Physics-Informed Generative Cryo-Electron Microscopy},
author={Jiakai Zhang and Qihe Chen and Yan Zeng and Wenyuan Gao and Xuming He and Zhijie Liu and Jingyi Yu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=edOZifvwMi}
} | In the past decade, deep conditional generative models have revolutionized the generation of realistic images, extending their application from entertainment to scientific domains. Single-particle cryo-electron microscopy (cryo-EM) is crucial in resolving near-atomic resolution 3D structures of proteins, such as the SARS-COV-2 spike protein. To achieve high-resolution reconstruction, a comprehensive data processing pipeline has been adopted. However, its performance is still limited as it lacks high-quality annotated datasets for training. To address this, we introduce physics-informed generative cryo-electron microscopy (CryoGEM), which for the first time integrates physics-based cryo-EM simulation with generative unpaired noise translation to generate physically correct synthetic cryo-EM datasets with realistic noise. Initially, CryoGEM simulates the cryo-EM imaging process based on a virtual specimen. To generate realistic noise, we leverage unpaired noise translation via contrastive learning with a novel mask-guided sampling scheme. Extensive experiments show that CryoGEM is capable of generating authentic cryo-EM images. The generated dataset can be used as training data for particle picking and pose estimation models, eventually improving the reconstruction resolution. | CryoGEM: Physics-Informed Generative Cryo-Electron Microscopy | [
"Jiakai Zhang",
"Qihe Chen",
"Yan Zeng",
"Wenyuan Gao",
"Xuming He",
"Zhijie Liu",
"Jingyi Yu"
] | NeurIPS.cc/2024/Conference | 2312.02235 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ecPIg6o84Z | @inproceedings{
dawidowicz2024imageaware,
title={Image-aware Evaluation of Generated Medical Reports},
author={Gefen Dawidowicz and Elad Hirsch and Ayellet Tal},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ecPIg6o84Z}
} | The paper proposes a novel evaluation metric for automatic medical report generation from X-ray images, VLScore. It aims to overcome the limitations of existing evaluation methods, which either focus solely on textual similarities, ignoring clinical aspects, or concentrate only on a single clinical aspect, the pathology, neglecting all other factors. The key idea of our metric is to measure the similarity between radiology reports while considering the corresponding image. We demonstrate the benefit of our metric through evaluation on a dataset where radiologists marked errors in pairs of reports, showing notable alignment with radiologists' judgments. In addition, we provide a new dataset for evaluating metrics. This dataset includes well-designed perturbations that distinguish between significant modifications (e.g., removal of a diagnosis) and insignificant ones. It highlights the weaknesses in current evaluation metrics and provides a clear framework for analysis. | Image-aware Evaluation of Generated Medical Reports | [
"Gefen Dawidowicz",
"Elad Hirsch",
"Ayellet Tal"
] | NeurIPS.cc/2024/Conference | 2410.17357 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ebBnKVxMcZ | @inproceedings{
coz2024confidence,
title={Confidence Calibration of Classifiers with Many Classes},
author={Adrien Le Coz and St{\'e}phane Herbin and Faouzi Adjed},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ebBnKVxMcZ}
} | For classification models based on neural networks, the maximum predicted class probability is often used as a confidence score. This score is rarely a good estimate of the probability of a correct prediction and requires a post-processing calibration step. However, many confidence calibration methods fail for problems with many classes. To address this issue, we transform the problem of calibrating a multiclass classifier into calibrating a single surrogate binary classifier. This approach allows for more efficient use of standard calibration methods. We evaluate our approach on numerous neural networks used for image or text classification and show that it significantly enhances existing calibration methods. | Confidence Calibration of Classifiers with Many Classes | [
"Adrien Le Coz",
"Stéphane Herbin",
"Faouzi Adjed"
] | NeurIPS.cc/2024/Conference | 2411.02988 | [
"https://github.com/allglc/tva-calibration"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
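One plausible reading of the surrogate-binary construction in the record above (not necessarily the paper's exact transformation) is to calibrate the top-1 confidence against top-1 correctness; isotonic regression stands in here for any standard binary calibrator.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def fit_confidence_calibrator(val_probs, val_labels):
    # Surrogate binary task: "was the top-1 prediction correct?"
    # Calibrating this single binary problem sidesteps the many-class issue.
    conf = val_probs.max(axis=1)
    correct = (val_probs.argmax(axis=1) == val_labels).astype(float)
    iso = IsotonicRegression(out_of_bounds="clip")
    iso.fit(conf, correct)
    return iso  # calibrated confidence: iso.predict(test_probs.max(axis=1))
```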
null | https://openreview.net/forum?id=ea4oxkiMP7 | @inproceedings{
yang2024egochoir,
title={EgoChoir: Capturing 3D Human-Object Interaction Regions from Egocentric Views},
author={Yuhang Yang and Wei Zhai and Chengfeng Wang and Chengjun Yu and Yang Cao and Zheng-Jun Zha},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ea4oxkiMP7}
} | Understanding egocentric human-object interaction (HOI) is a fundamental aspect of human-centric perception, facilitating applications like AR/VR and embodied AI. For egocentric HOI, in addition to perceiving semantics, e.g., ''what'' interaction is occurring, capturing ''where'' the interaction specifically manifests in 3D space is also crucial, which links perception and operation. Existing methods primarily leverage observations of HOI to capture interaction regions from an exocentric view. However, incomplete observations of interacting parties in the egocentric view introduce ambiguity between visual observations and interaction contents, impairing their efficacy. From the egocentric view, humans integrate the visual cortex, cerebellum, and brain to internalize their intentions and interaction concepts of objects, allowing them to pre-formulate interactions and act even when interaction regions are out of sight. In light of this, we propose harmonizing the visual appearance, head motion, and 3D object to excavate the object interaction concept and subject intention, jointly inferring 3D human contact and object affordance from egocentric videos. To achieve this, we present EgoChoir, which links object structures with interaction contexts inherent in appearance and head motion to reveal object affordance, further utilizing it to model human contact. Additionally, gradient modulation is employed to adopt appropriate cues for capturing interaction regions across various egocentric scenarios. Moreover, 3D contact and affordance are annotated for egocentric videos collected from Ego-Exo4D and GIMO to support the task. Extensive experiments on them demonstrate the effectiveness and superiority of EgoChoir. | EgoChoir: Capturing 3D Human-Object Interaction Regions from Egocentric Views | [
"Yuhang Yang",
"Wei Zhai",
"Chengfeng Wang",
"Chengjun Yu",
"Yang Cao",
"Zheng-Jun Zha"
] | NeurIPS.cc/2024/Conference | 2405.13659 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eYNYnYle41 | @inproceedings{
huang2024navigating,
title={Navigating the Effect of Parametrization for Dimensionality Reduction},
author={Haiyang Huang and Yingfan Wang and Cynthia Rudin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eYNYnYle41}
} | Parametric dimensionality reduction methods have gained prominence for their ability to generalize to unseen datasets, an advantage that traditional non-parametric approaches typically lack. Despite their growing popularity, there remains a prevalent misconception among practitioners about the equivalence in performance between parametric and non-parametric methods. Here, we show that these methods are not equivalent -- parametric methods retain global structure but lose significant local details. To explain this, we provide evidence that parameterized approaches lack the ability to repulse negative samples, and the choice of loss function also has an impact.
Addressing these issues, we developed a new parametric method, ParamRepulsor, that incorporates Hard Negative Mining and a loss function that applies a strong repulsive force. This new method achieves state-of-the-art performance on local structure preservation for parametric methods without sacrificing the fidelity of global structural representation. Our code is available at https://github.com/hyhuang00/ParamRepulsor. | Navigating the Effect of Parametrization for Dimensionality Reduction | [
"Haiyang Huang",
"Yingfan Wang",
"Cynthia Rudin"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=eXNyq8FGSz | @inproceedings{
diakonikolas2024active,
title={Active Learning of General Halfspaces: Label Queries vs Membership Queries},
author={Ilias Diakonikolas and Daniel Kane and Mingchen Ma},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eXNyq8FGSz}
} | We study the problem of learning general (i.e., not necessarily homogeneous)
halfspaces under the Gaussian distribution on $\mathbb{R}^d$
in the presence of some form of query access.
In the classical pool-based active learning model, where the algorithm is
allowed to make adaptive label queries to previously sampled points,
we establish a strong information-theoretic lower bound ruling out non-trivial
improvements over the passive setting. Specifically, we show that
any active learner requires label complexity of
$\tilde{\Omega}(d/(\log(m)\epsilon))$, where $m$ is the number of unlabeled examples.
In particular, to beat the passive label complexity of $\tilde{O}(d/\epsilon)$,
an active learner requires a pool of $2^{\mathrm{poly}(d)}$ unlabeled samples.
On the positive side, we show that this lower bound
can be circumvented with membership query access,
even in the agnostic model. Specifically, we give a computationally efficient
learner with query complexity of $\tilde{O}(\min(1/p, 1/\epsilon) + d\mathrm{polylog}(1/\epsilon))$
achieving error guarantee of $O(\mathrm{opt}+\epsilon)$. Here $p \in [0, 1/2]$
is the bias and $\mathrm{opt}$ is the 0-1 loss of the optimal halfspace.
As a corollary, we obtain a strong separation
between the active and membership query models.
Taken together, our results characterize the complexity of learning
general halfspaces under Gaussian marginals in these models. | Active Learning of General Halfspaces: Label Queries vs Membership Queries | [
"Ilias Diakonikolas",
"Daniel Kane",
"Mingchen Ma"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=eWiGn0Fcdx | @inproceedings{
zhan2024exploring,
title={Exploring Token Pruning in Vision State Space Models},
author={Zheng Zhan and Zhenglun Kong and Yifan Gong and Yushu Wu and Zichong Meng and Hangyu Zheng and Xuan Shen and Stratis Ioannidis and Wei Niu and Pu Zhao and Yanzhi Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eWiGn0Fcdx}
} | State Space Models (SSMs) have the advantage of keeping linear computational complexity compared to attention modules in transformers, and have been applied to vision tasks as a new type of powerful vision foundation model. Inspired by the observations that the final prediction in vision transformers (ViTs) is only based on a subset of most informative tokens, we take the novel step of enhancing the efficiency of SSM-based vision models through token-based pruning. However, direct applications of existing token pruning techniques designed for ViTs fail to deliver good performance, even with extensive fine-tuning. To address this issue, we revisit the unique computational characteristics of SSMs and discover that naive application disrupts the sequential token positions. This insight motivates us to design a novel and general token pruning method specifically for SSM-based vision models. We first introduce a pruning-aware hidden state alignment method to stabilize the neighborhood of remaining tokens for performance enhancement. Besides, based on our detailed analysis, we propose a token importance evaluation method adapted for SSM models, to guide the token pruning. With efficient implementation and practical acceleration methods, our method brings actual speedup. Extensive experiments demonstrate that our approach can achieve significant computation reduction with minimal impact on performance across different tasks. Notably, we achieve 81.7\% accuracy on ImageNet with a 41.6\% reduction in the FLOPs for pruned PlainMamba-L3. Furthermore, our work provides deeper insights into understanding the behavior of SSM-based vision models for future research. | Exploring Token Pruning in Vision State Space Models | [
"Zheng Zhan",
"Zhenglun Kong",
"Yifan Gong",
"Yushu Wu",
"Zichong Meng",
"Hangyu Zheng",
"Xuan Shen",
"Stratis Ioannidis",
"Wei Niu",
"Pu Zhao",
"Yanzhi Wang"
] | NeurIPS.cc/2024/Conference | 2409.18962 | [
""
] | https://huggingface.co/papers/2409.18962 | 1 | 0 | 0 | 11 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=eWUM5hRYgH | @inproceedings{
peng2024statistical,
title={Statistical Efficiency of Distributional Temporal Difference Learning},
author={Yang Peng and Liangyu Zhang and Zhihua Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eWUM5hRYgH}
} | Distributional reinforcement learning (DRL) has achieved empirical success in various domains.
One of the core tasks in the field of DRL is distributional policy evaluation, which involves estimating the return distribution $\eta^\pi$ for a given policy $\pi$.
Distributional temporal difference learning has accordingly been proposed as an
extension of classic temporal difference (TD) learning.
In the tabular case, Rowland et al. [2018] and Rowland et al. [2023] proved the asymptotic convergence of two instances of distributional TD, namely categorical temporal difference learning (CTD) and quantile temporal difference learning (QTD), respectively.
In this paper, we go a step further and analyze the finite-sample performance of distributional TD.
To facilitate theoretical analysis, we propose a non-parametric distributional TD learning (NTD).
For a $\gamma$-discounted infinite-horizon tabular Markov decision process,
we show that for NTD we need $\widetilde O\left(\frac{1}{\varepsilon^{2p}(1-\gamma)^{2p+1}}\right)$ iterations to achieve an $\varepsilon$-optimal estimator with high probability, when the estimation error is measured by the $p$-Wasserstein distance.
This sample complexity bound is minimax optimal (up to logarithmic factors) in the case of the $1$-Wasserstein distance.
To achieve this, we establish a novel Freedman's inequality in Hilbert spaces, which would be of independent interest.
In addition, we revisit CTD, showing that the same non-asymptotic convergence bounds hold for CTD in the case of the $p$-Wasserstein distance. | Statistical Efficiency of Distributional Temporal Difference Learning | [
"Yang Peng",
"Liangyu Zhang",
"Zhihua Zhang"
] | NeurIPS.cc/2024/Conference | 2403.05811 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
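Of the two distributional TD instances named in the record above, CTD is the easiest to sketch. The following tabular policy-evaluation step uses the standard categorical projection onto a fixed, evenly spaced support (a sketch of CTD, not of the paper's non-parametric NTD; step size and state indexing are illustrative).

```python
import numpy as np

def ctd_update(p, s, r, s_next, atoms, alpha=0.1, gamma=0.99):
    # One tabular categorical TD step: push the next state's return
    # distribution through the Bellman map, project it back onto the
    # evenly spaced support `atoms`, then mix it into the estimate.
    # p: (num_states, num_atoms) array of per-state categorical probs.
    target = np.zeros_like(atoms)
    tz = np.clip(r + gamma * atoms, atoms[0], atoms[-1])
    b = (tz - atoms[0]) / (atoms[1] - atoms[0])   # fractional atom index
    lo, hi = np.floor(b).astype(int), np.ceil(b).astype(int)
    for j, mass in enumerate(p[s_next]):
        if lo[j] == hi[j]:
            target[lo[j]] += mass                 # landed exactly on an atom
        else:
            target[lo[j]] += mass * (hi[j] - b[j])
            target[hi[j]] += mass * (b[j] - lo[j])
    p[s] = (1 - alpha) * p[s] + alpha * target
    return p
```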
null | https://openreview.net/forum?id=eV5YIrJPdy | @inproceedings{
sarrof2024the,
title={The Expressive Capacity of State Space Models: A Formal Language Perspective},
author={Yash Sarrof and Yana Veitsman and Michael Hahn},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eV5YIrJPdy}
} | Recently, recurrent models based on linear state space models (SSMs) have shown promising performance in language modeling (LM), competitive with transformers. However, there is little understanding of the in-principle abilities of such models, which could provide useful guidance to the search for better LM architectures. We present a comprehensive theoretical study of the capacity of such SSMs as it compares to that of transformers and traditional RNNs. We find that SSMs and transformers have overlapping but distinct strengths. In star-free state tracking, SSMs implement straightforward and exact solutions to problems that transformers struggle to represent exactly. They can also model bounded hierarchical structure with optimal memory even without simulating a stack. On the other hand, we identify a design choice in current SSMs that limits their expressive power. We discuss implications for SSM and LM research, and verify results empirically on a recent SSM, Mamba. | The Expressive Capacity of State Space Models: A Formal Language Perspective | [
"Yash Sarrof",
"Yana Veitsman",
"Michael Hahn"
] | NeurIPS.cc/2024/Conference | 2405.17394 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eUg64OsGDE | @inproceedings{
amini-naieni2024countgd,
title={Count{GD}: Multi-Modal Open-World Counting},
author={Niki Amini-Naieni and Tengda Han and Andrew Zisserman},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eUg64OsGDE}
} | The goal of this paper is to improve the generality and accuracy of open-vocabulary object counting in images. To improve the generality, we repurpose an open-vocabulary detection foundation model (GroundingDINO) for the counting task, and also extend its capabilities by introducing modules to enable specifying the target object to count by visual exemplars. In turn, these new capabilities -- being able to specify the target object by multi-modalities (text and exemplars) -- lead to an improvement in counting accuracy. We make three contributions: First, we introduce the first open-world counting model, CountGD, where the prompt can be specified by a text description or visual exemplars or both; Second, we show that the performance of the model significantly improves the state of the art on multiple counting benchmarks -- when using text only, CountGD outperforms all previous text-only works, and when using both text and visual exemplars, we outperform all previous models; Third, we carry out a preliminary study into different interactions between the text and visual exemplar prompts, including the cases where they reinforce each other and where one restricts the other. The code and an app to test the model are available at https://www.robots.ox.ac.uk/vgg/research/countgd/. | CountGD: Multi-Modal Open-World Counting | [
"Niki Amini-Naieni",
"Tengda Han",
"Andrew Zisserman"
] | NeurIPS.cc/2024/Conference | 2407.04619 | [
""
] | https://huggingface.co/papers/2407.04619 | 1 | 0 | 1 | 3 | [] | [] | [
"nikigoli/countgd",
"XilingR/yopo"
] | [] | [] | [
"nikigoli/countgd",
"XilingR/yopo"
] | 1 | poster |
null | https://openreview.net/forum?id=eUcyIe1AzY | @inproceedings{
zachos2024generating,
title={Generating Origin-Destination Matrices in Neural Spatial Interaction Models},
author={Ioannis Zachos and Mark Girolami and Theodoros Damoulas},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eUcyIe1AzY}
} | Agent-based models (ABMs) are proliferating as decision-making tools across policy areas in transportation, economics, and epidemiology. In these models, a central object of interest is the discrete origin-destination matrix which captures spatial interactions and agent trip counts between locations. Existing approaches resort to continuous approximations of this matrix and subsequent ad-hoc discretisations in order to perform ABM simulation and calibration. This impedes conditioning on partially observed summary statistics, fails to explore the multimodal matrix distribution over a discrete combinatorial support, and incurs discretisation errors. To address these challenges, we introduce a computationally efficient framework that scales linearly with the number of origin-destination pairs, operates directly on the discrete combinatorial space, and learns the agents' trip intensity through a neural differential equation that embeds spatial interactions. Our approach outperforms the prior art in terms of reconstruction error and ground truth matrix coverage, at a fraction of the computational cost. We demonstrate these benefits in two large-scale spatial mobility ABMs in Washington, DC and Cambridge, UK. | Generating Origin-Destination Matrices in Neural Spatial Interaction Models | [
"Ioannis Zachos",
"Mark Girolami",
"Theodoros Damoulas"
] | NeurIPS.cc/2024/Conference | 2410.07352 | [
"https://github.com/YannisZa/GeNSIT"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eU87jJyEK5 | @inproceedings{
guo2024spear,
title={SpeAr: A Spectral Approach for Zero-Shot Node Classification},
author={Ting Guo and Da Wang and Jiye Liang and Kaihan Zhang and Jianchao Zeng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eU87jJyEK5}
} | Zero-shot node classification is a vital task in the field of graph data processing, aiming to identify nodes of classes unseen during the training process. Prediction bias is one of the primary challenges in zero-shot node classification, referring to the model's propensity to misclassify nodes of unseen classes as seen classes. However, most methods introduce external knowledge to mitigate the bias, inadequately leveraging the inherent cluster information within the unlabeled nodes. To address this issue, we employ spectral analysis coupled with learnable class prototypes to discover the implicit cluster structures within the graph, providing a more comprehensive understanding of classes. In this paper, we propose a spectral approach for zero-shot node classification (SpeAr). Specifically, we establish an approximate relationship between minimizing the spectral contrastive loss and performing spectral decomposition on the graph, thereby enabling effective node characterization through loss minimization. Subsequently, the class prototypes are iteratively refined based on the learned node representations, initialized with the semantic vectors. Finally, extensive experiments verify the effectiveness of the SpeAr, which can further alleviate the bias problem. | SpeAr: A Spectral Approach for Zero-Shot Node Classification | [
"Ting Guo",
"Da Wang",
"Jiye Liang",
"Kaihan Zhang",
"Jianchao Zeng"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=eTu6kvrkSq | @inproceedings{
innocenti2024only,
title={Only Strict Saddles in the Energy Landscape of Predictive Coding Networks?},
author={Francesco Innocenti and El Mehdi Achour and Ryan Singh and Christopher Buckley},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eTu6kvrkSq}
} | Predictive coding (PC) is an energy-based learning algorithm that performs iterative inference over network activities before updating weights. Recent work suggests that PC can converge in fewer learning steps than backpropagation thanks to its inference procedure. However, these advantages are not always observed, and the impact of PC inference on learning is not theoretically well understood. Here, we study the geometry of the PC energy landscape at the inference equilibrium of the network activities. For deep linear networks, we first show that the equilibrated energy is simply a rescaled mean squared error loss with a weight-dependent rescaling. We then prove that many highly degenerate (non-strict) saddles of the loss including the origin become much easier to escape (strict) in the equilibrated energy. Our theory is validated by experiments on both linear and non-linear networks. Based on these and other results, we conjecture that all the saddles of the equilibrated energy are strict. Overall, this work suggests that PC inference makes the loss landscape more benign and robust to vanishing gradients, while also highlighting the fundamental challenge of scaling PC to deeper models. | Only Strict Saddles in the Energy Landscape of Predictive Coding Networks? | [
"Francesco Innocenti",
"El Mehdi Achour",
"Ryan Singh",
"Christopher Buckley"
] | NeurIPS.cc/2024/Conference | 2408.11979 | [
"https://github.com/francesco-innocenti/pc-saddles"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eSes1Mic9d | @inproceedings{
ghandeharioun2024whos,
title={Who's asking? User personas and the mechanics of latent misalignment},
author={Asma Ghandeharioun and Ann Yuan and Marius Guerard and Emily Reif and Michael A. Lepori and Lucas Dixon},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eSes1Mic9d}
} | Studies show that safety-tuned models may nevertheless divulge harmful information. In this work, we show that whether they do so depends significantly on who they are talking to, which we refer to as *user persona*. In fact, we find manipulating user persona to be more effective for eliciting harmful content than certain more direct attempts to control model refusal. We study both natural language prompting and activation steering as intervention methods and show that activation steering is significantly more effective at bypassing safety filters.
We shed light on the mechanics of this phenomenon by showing that even when model generations are safe, harmful content can persist in hidden representations and can be extracted by decoding from earlier layers. We also show we can predict a persona’s effect on refusal given only the geometry of its steering vector. Finally, we show that certain user personas induce the model to form more charitable interpretations of otherwise dangerous queries. | Who's asking? User personas and the mechanics of latent misalignment | [
"Asma Ghandeharioun",
"Ann Yuan",
"Marius Guerard",
"Emily Reif",
"Michael A. Lepori",
"Lucas Dixon"
] | NeurIPS.cc/2024/Conference | 2406.12094 | [
""
] | https://huggingface.co/papers/2406.12094 | 4 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=eQ6VjBhevn | @inproceedings{
farina2024frustratingly,
title={Frustratingly Easy Test-Time Adaptation of Vision-Language Models},
author={Matteo Farina and Gianni Franchi and Giovanni Iacca and Massimiliano Mancini and Elisa Ricci},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eQ6VjBhevn}
} | Vision-Language Models seamlessly discriminate among arbitrary semantic categories, yet they still suffer from poor generalization when presented with challenging examples. For this reason, Episodic Test-Time Adaptation (TTA) strategies have recently emerged as powerful techniques to adapt VLMs in the presence of a single unlabeled image. The recent literature on TTA is dominated by the paradigm of prompt tuning by Marginal Entropy Minimization, which, relying on online backpropagation, inevitably slows down inference while increasing memory. In this work, we theoretically investigate the properties of this approach and unveil that a surprisingly strong TTA method lies dormant and hidden within it. We term this approach ZERO (TTA with “zero” temperature), whose design is both incredibly effective and frustratingly simple: augment N times, predict, retain the most confident predictions, and marginalize after setting the Softmax temperature to zero. Remarkably, ZERO requires a single batched forward pass through the vision encoder only and no backward passes. We thoroughly evaluate our approach following the experimental protocol established in the literature and show that ZERO largely surpasses or compares favorably w.r.t. the state-of-the-art while being almost 10× faster and 13× more memory friendly than standard Test-Time Prompt Tuning. Thanks to its simplicity and comparatively negligible computation, ZERO can serve as a strong baseline for future work in this field. Code will be available. | Frustratingly Easy Test-Time Adaptation of Vision-Language Models | [
"Matteo Farina",
"Gianni Franchi",
"Giovanni Iacca",
"Massimiliano Mancini",
"Elisa Ricci"
] | NeurIPS.cc/2024/Conference | 2405.18330 | [
"https://github.com/farinamatteo/zero"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
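The ZERO record above spells out its recipe explicitly: augment N times, predict, retain the most confident views, and marginalize at softmax temperature zero. A minimal sketch of that recipe follows (the `keep_frac` filtering rule is illustrative; the authors' repo linked in the record is the reference implementation).

```python
import torch
import torch.nn.functional as F

def zero_prediction(logits, keep_frac=0.1):
    # `logits`: (N, C) scores for N augmented views of one test image.
    probs = logits.softmax(dim=-1)
    conf = probs.max(dim=-1).values                 # per-view confidence
    k = max(1, int(keep_frac * logits.shape[0]))
    keep = conf.topk(k).indices                     # most confident views
    # Softmax at temperature zero -> one-hot argmax, so marginalizing the
    # retained views reduces to a majority vote over their argmaxes.
    votes = F.one_hot(logits[keep].argmax(dim=-1), logits.shape[-1])
    return votes.float().mean(dim=0).argmax().item()
```

Note how this needs only a single batched forward pass and no backpropagation, which is where the reported speed and memory gains over prompt tuning come from.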
null | https://openreview.net/forum?id=ePOBcWfNFC | @inproceedings{
hu2024disentangled,
title={Disentangled Unsupervised Skill Discovery for Efficient Hierarchical Reinforcement Learning},
author={Jiaheng Hu and Zizhao Wang and Peter Stone and Roberto Mart{\'\i}n-Mart{\'\i}n},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ePOBcWfNFC}
} | A hallmark of intelligent agents is the ability to learn reusable skills purely from unsupervised interaction with the environment. However, existing unsupervised skill discovery methods often learn entangled skills where one skill variable simultaneously influences many entities in the environment, making downstream skill chaining extremely challenging. We propose Disentangled Unsupervised Skill Discovery (DUSDi), a method for learning disentangled skills that can be efficiently reused to solve downstream tasks. DUSDi decomposes skills into disentangled components, where each skill component only affects one factor of the state space. Importantly, these skill components can be concurrently composed to generate low-level actions, and efficiently chained to tackle downstream tasks through hierarchical Reinforcement Learning. DUSDi defines a novel mutual-information-based objective to enforce disentanglement between the influences of different skill components, and utilizes value factorization to optimize this objective efficiently. Evaluated in a set of challenging environments, DUSDi successfully learns disentangled skills, and significantly outperforms previous skill discovery methods when it comes to applying the learned skills to solve downstream tasks. | Disentangled Unsupervised Skill Discovery for Efficient Hierarchical Reinforcement Learning | [
"Jiaheng Hu",
"Zizhao Wang",
"Peter Stone",
"Roberto Martín-Martín"
] | NeurIPS.cc/2024/Conference | 2410.11251 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eP9auEJqFg | @inproceedings{
rosati2024representation,
title={Representation Noising: A Defence Mechanism Against Harmful Finetuning},
author={Domenic Rosati and Jan Wehner and Kai Williams and Lukasz Bartoszcze and Robie Gonzales and carsten maple and Subhabrata Majumdar and Hassan Sajjad and Frank Rudzicz},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eP9auEJqFg}
} | Releasing open-source large language models (LLMs) presents a dual-use risk since bad actors can easily fine-tune these models for harmful purposes. Even without the open release of weights, weight stealing and fine-tuning APIs make closed models vulnerable to harmful fine-tuning attacks (HFAs). While safety measures like preventing jailbreaks and improving safety guardrails are important, such measures can easily be reversed through fine-tuning. In this work, we propose Representation Noising (\textsf{\small RepNoise}), a defence mechanism that operates even when attackers have access to the weights. \textsf{\small RepNoise} works by removing information about harmful representations such that it is difficult to recover them during fine-tuning. Importantly, our defence is also able to generalize across different subsets of harm that have not been seen during the defence process as long as they are drawn from the same distribution of the attack set. Our method does not degrade the general capability of LLMs and retains the ability to train the model on harmless tasks. We provide empirical evidence that the efficacy of our defence lies in its ``depth'': the degree to which information about harmful representations is removed across {\em all layers} of the LLM. We also find areas where \textsf{\small RepNoise} still remains ineffective and highlight how those limitations can inform future research. | Representation Noising: A Defence Mechanism Against Harmful Finetuning | [
"Domenic Rosati",
"Jan Wehner",
"Kai Williams",
"Lukasz Bartoszcze",
"Robie Gonzales",
"carsten maple",
"Subhabrata Majumdar",
"Hassan Sajjad",
"Frank Rudzicz"
] | NeurIPS.cc/2024/Conference | 2405.14577 | [
"https://github.com/domenicrosati/representation-noising"
] | https://huggingface.co/papers/2405.14577 | 1 | 1 | 0 | 10 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=eOx0SMRUv7 | @inproceedings{
so2024online,
title={Online Consistency of the Nearest Neighbor Rule},
author={Geelon So and Sanjoy Dasgupta},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eOx0SMRUv7}
} | In the realizable online setting, a learner is tasked with making predictions for a stream of instances, where the correct answer is revealed after each prediction. A learning rule is online consistent if its mistake rate eventually vanishes. The nearest neighbor rule is a fundamental prediction strategy, but it is only known to be consistent under strong statistical or geometric assumptions: the instances come i.i.d. or the label classes are well-separated. We prove online consistency for all measurable functions in doubling metric spaces under the mild assumption that instances are generated by a process that is uniformly absolutely continuous with respect to an underlying finite, upper doubling measure. | Online Consistency of the Nearest Neighbor Rule | [
"Geelon So",
"Sanjoy Dasgupta"
] | NeurIPS.cc/2024/Conference | 2410.23644 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
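The online protocol in the record above is easy to make concrete. This sketch runs the nearest neighbor rule over a stream of NumPy vectors and counts mistakes (counting the cold-start round as a mistake is a convention of this sketch, not of the paper); online consistency means the returned count grows sublinearly in the number of rounds.

```python
import numpy as np

def nearest_neighbor_mistakes(stream):
    # Online 1-NN: predict the label of the closest previously seen point,
    # then observe the true label and store the new (x, y) pair.
    xs, ys, mistakes = [], [], 0
    for x, y in stream:
        if xs:
            j = int(np.argmin([np.linalg.norm(x - xi) for xi in xs]))
            mistakes += int(ys[j] != y)
        else:
            mistakes += 1  # cold start: no neighbor yet
        xs.append(x)
        ys.append(y)
    return mistakes
```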
null | https://openreview.net/forum?id=eOonmxzzno | @inproceedings{
dong2024temporal,
title={Temporal Sentence Grounding with Relevance Feedback in Videos},
author={Jianfeng Dong and Xiaoman Peng and Daizong Liu and Xiaoye Qu and Xun Yang and Cuizhu Bao and Meng Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eOonmxzzno}
} | As a widely explored multi-modal task, Temporal Sentence Grounding in videos (TSG) endeavors to retrieve a specific video segment matched with a given query text from a video. The traditional paradigm for TSG generally assumes that relevant segments always exist within a given video. However, this assumption is restrictive and unrealistic in real-world applications where the existence of a query-related segment is uncertain, easily resulting in erroneous grounding. Motivated by the research gap and practical application, this paper introduces a new task, named Temporal Sentence Grounding with Relevance Feedback (TSG-RF) in videos, which accommodates the possibility that a video may or may not include a segment related to the query. This task entails localizing precise video segments that semantically align with the query text when such content is present, while delivering definitive feedback on the non-existence of related segments when absent. Moreover, we propose a novel Relation-aware Temporal Sentence Grounding (RaTSG) network for addressing this challenging task. This network first reformulates the TSG-RF task as a foreground-background detection problem by investigating whether the query-related semantics exist in both frame and video levels. Then, a multi-granularity relevance discriminator is exploited to produce precise video-query relevance feedback and a relation-aware segment grounding module is employed to selectively conduct the grounding process, dynamically adapting to the presence or absence of query-related segments in videos. To validate our RaTSG network, we reconstruct two popular TSG datasets, establishing a rigorous benchmark for TSG-RF. Experimental results demonstrate the effectiveness of our proposed RaTSG for the TSG-RF task. Our source code is available at https://github.com/HuiGuanLab/RaTSG. | Temporal Sentence Grounding with Relevance Feedback in Videos | [
"Jianfeng Dong",
"Xiaoman Peng",
"Daizong Liu",
"Xiaoye Qu",
"Xun Yang",
"Cuizhu Bao",
"Meng Wang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=eOAPWWOGs9 | @inproceedings{
lu2024autopsv,
title={Auto{PSV}: Automated Process-Supervised Verifier},
author={Jianqiao Lu and Zhiyang Dou and Hongru WANG and Zeyu Cao and Jianbo Dai and Yunlong Feng and Zhijiang Guo},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eOAPWWOGs9}
} | In this work, we propose a novel method named \textbf{Auto}mated \textbf{P}rocess-\textbf{S}upervised \textbf{V}erifier (\textbf{\textsc{AutoPSV}}) to enhance the reasoning capabilities of large language models (LLMs) by automatically annotating the reasoning steps.
\textsc{AutoPSV} begins by training a verification model on the correctness of final answers, enabling it to generate automatic process annotations.
This verification model assigns a confidence score to each reasoning step, indicating the probability of arriving at the correct final answer from that point onward.
We detect relative changes in the verification model's confidence scores across reasoning steps to automatically annotate the reasoning process, enabling error detection even in scenarios where ground truth answers are unavailable.
This alleviates the need for numerous manual annotations or the high computational costs associated with model-induced annotation approaches.
We experimentally validate that the step-level confidence changes learned by the verification model trained on the final answer correctness can effectively identify errors in the reasoning steps.
We demonstrate that the verification model, when trained on process annotations generated by \textsc{AutoPSV}, exhibits improved performance in selecting correct answers from multiple LLM-generated outputs.
Notably, we achieve substantial improvements across five datasets in mathematics and commonsense reasoning. The source code of \textsc{AutoPSV} is available at \url{https://github.com/rookie-joe/AutoPSV}. | AutoPSV: Automated Process-Supervised Verifier | [
"Jianqiao Lu",
"Zhiyang Dou",
"Hongru WANG",
"Zeyu Cao",
"Jianbo Dai",
"Yunlong Feng",
"Zhijiang Guo"
] | NeurIPS.cc/2024/Conference | 2405.16802 | [
"https://github.com/rookie-joe/autocv"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
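The confidence-change annotation at the heart of AutoPSV (per the record above) can be illustrated with a simple thresholding rule; the specific relative-drop threshold and labeling convention here are assumptions for illustration, not the paper's exact detection rule.

```python
def annotate_reasoning_steps(step_conf, max_drop=0.2):
    # `step_conf[i]`: verifier's estimated probability of reaching the
    # correct final answer from step i onward. A large relative drop
    # between consecutive steps flags the later step as a likely error.
    labels = [1]  # first step has no predecessor; assume it is fine
    for prev, cur in zip(step_conf, step_conf[1:]):
        rel_drop = (prev - cur) / max(prev, 1e-8)
        labels.append(0 if rel_drop > max_drop else 1)
    return labels  # 1 = positive process supervision, 0 = flagged step
```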
null | https://openreview.net/forum?id=eNvVjpx97O | @inproceedings{
li2024streamingdialogue,
title={StreamingDialogue: Prolonged Dialogue Learning via Long Context Compression with Minimal Losses},
author={Jia-Nan Li and Quan Tu and Cunli Mao and Zhengtao Yu and Ji-Rong Wen and Rui Yan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eNvVjpx97O}
} | Standard Large Language Models (LLMs) struggle with handling dialogues with long contexts due to efficiency and consistency issues. According to our observation, dialogue contexts are highly structured, and the special token of End-of-Utterance (EoU) in dialogues has the potential to aggregate information. We refer to the EoU tokens as ``conversational attention sinks'' (conv-attn sinks). Accordingly, we introduce StreamingDialogue, which compresses long dialogue history into conv-attn sinks with minimal losses, and thus reduces computational complexity to quadratic in the number of sinks (i.e., the number of utterances). Current LLMs already demonstrate the ability to handle long context windows, e.g., 200K tokens or more. To this end, by compressing utterances into EoUs, our method has the potential to handle more than 200K utterances, enabling prolonged dialogue learning. In order to minimize information losses from reconstruction after compression, we design two learning strategies, short-memory reconstruction (SMR) and long-memory reactivation (LMR). Our method outperforms strong baselines in dialogue tasks and achieves a 4 $\times$ speedup while reducing memory usage by 18 $\times$ compared to dense attention recomputation. | StreamingDialogue: Prolonged Dialogue Learning via Long Context Compression with Minimal Losses | [
"Jia-Nan Li",
"Quan Tu",
"Cunli Mao",
"Zhengtao Yu",
"Ji-Rong Wen",
"Rui Yan"
] | NeurIPS.cc/2024/Conference | 2403.08312 | [
"https://github.com/jinaleejnl/streamingdialogue"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
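As a rough approximation of the conv-attn-sink idea in the record above, one can build a causal attention mask that keeps a recent local window plus every earlier End-of-Utterance token. The window size and the exact masking rule here are assumptions of this sketch; the paper's SMR/LMR training strategies are not represented.

```python
import torch

def conv_attn_sink_mask(token_ids, eou_id, window=64):
    # Causal mask where each position may attend to (i) a recent local
    # window and (ii) every earlier EoU token (a "conv-attn sink"),
    # approximating compression of long dialogue history into the sinks.
    L = len(token_ids)
    is_sink = torch.tensor([t == eou_id for t in token_ids])
    allowed = torch.zeros(L, L, dtype=torch.bool)
    for i in range(L):
        allowed[i, max(0, i - window): i + 1] = True   # local context
        allowed[i, : i + 1] |= is_sink[: i + 1]        # earlier sinks
    return allowed  # True = attention permitted
```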
null | https://openreview.net/forum?id=eNeqGc9AgR | @inproceedings{
zhang2024flatten,
title={Flatten Anything: Unsupervised Neural Surface Parameterization},
author={Qijian Zhang and Junhui Hou and Wenping Wang and Ying He},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eNeqGc9AgR}
} | Surface parameterization plays an essential role in numerous computer graphics and geometry processing applications. Traditional parameterization approaches are designed for high-quality meshes laboriously created by specialized 3D modelers, thus unable to meet the processing demand for the current explosion of ordinary 3D data. Moreover, their working mechanisms are typically restricted to certain simple topologies, thus relying on cumbersome manual efforts (e.g., surface cutting, part segmentation) for pre-processing. In this paper, we introduce the Flatten Anything Model (FAM), an unsupervised neural architecture to achieve global free-boundary surface parameterization via learning point-wise mappings between 3D points on the target geometric surface and adaptively-deformed UV coordinates within the 2D parameter domain. To mimic the actual physical procedures, we ingeniously construct geometrically-interpretable sub-networks with specific functionalities of surface cutting, UV deforming, unwrapping, and wrapping, which are assembled into a bi-directional cycle mapping framework. Compared with previous methods, our FAM directly operates on discrete surface points without utilizing connectivity information, thus significantly reducing the strict requirements for mesh quality and even applicable to unstructured point cloud data. More importantly, our FAM is fully-automated without the need for pre-cutting and can deal with highly-complex topologies, since its learning process adaptively finds reasonable cutting seams and UV boundaries. Extensive experiments demonstrate the universality, superiority, and inspiring potential of our proposed neural surface parameterization paradigm. Our code is available at https://github.com/keeganhk/FlattenAnything. | Flatten Anything: Unsupervised Neural Surface Parameterization | [
"Qijian Zhang",
"Junhui Hou",
"Wenping Wang",
"Ying He"
] | NeurIPS.cc/2024/Conference | 2405.14633 | [
"https://github.com/keeganhk/flattenanything"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eNM94i7R3A | @inproceedings{
kunin2024get,
title={Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning},
author={Daniel Kunin and Allan Raventos and Cl{\'e}mentine Carla Juliette Domin{\'e} and Feng Chen and David Klindt and Andrew M Saxe and Surya Ganguli},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eNM94i7R3A}
} | While the impressive performance of modern neural networks is often attributed to their capacity to efficiently extract task-relevant features from data, the mechanisms underlying this *rich feature learning regime* remain elusive, with much of our theoretical understanding stemming from the opposing *lazy regime*. In this work, we derive exact solutions to a minimal model that transitions between lazy and rich learning, precisely elucidating how unbalanced *layer-specific* initialization variances and learning rates determine the degree of feature learning. Our analysis reveals that they conspire to influence the learning regime through a set of conserved quantities that constrain and modify the geometry of learning trajectories in parameter and function space. We extend our analysis to more complex linear models with multiple neurons, outputs, and layers and to shallow nonlinear networks with piecewise linear activation functions. In linear networks, rapid feature learning only occurs from balanced initializations, where all layers learn at similar speeds. While in nonlinear networks, unbalanced initializations that promote faster learning in earlier layers can accelerate rich learning. Through a series of experiments, we provide evidence that this unbalanced rich regime drives feature learning in deep finite-width networks, promotes interpretability of early layers in CNNs, reduces the sample complexity of learning hierarchical data, and decreases the time to grokking in modular arithmetic. Our theory motivates further exploration of unbalanced initializations to enhance efficient feature learning. | Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning | [
"Daniel Kunin",
"Allan Raventos",
"Clémentine Carla Juliette Dominé",
"Feng Chen",
"David Klindt",
"Andrew M Saxe",
"Surya Ganguli"
] | NeurIPS.cc/2024/Conference | 2406.06158 | [
"https://github.com/allanraventos/getrichquick"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=eNCYpTCGhr | @inproceedings{
kornowski2024firstorder,
title={First-Order Methods for Linearly Constrained Bilevel Optimization},
author={Guy Kornowski and Swati Padmanabhan and Kai Wang and Zhe Zhang and Suvrit Sra},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eNCYpTCGhr}
} | Algorithms for bilevel optimization often encounter Hessian computations, which are prohibitive in high dimensions. While recent works offer first-order methods for unconstrained bilevel problems, the constrained setting remains relatively underexplored. We present first-order methods for linearly constrained bilevel optimization with finite-time hypergradient stationarity guarantees. For linear equality constraints, we attain $\epsilon$-stationarity in $\widetilde{O}(\epsilon^{-2})$ gradient oracle calls, which is nearly optimal.
For linear inequality constraints, we attain $(\delta,\epsilon)$-Goldstein stationarity in $\widetilde{O}(d\delta^{-1}\epsilon^{-3})$ gradient oracle calls, where $d$ is the upper-level dimension.
Finally, for the linear inequality setting, we obtain a dimension-free rate of $\widetilde{O}(\delta^{-1}\epsilon^{-4})$ oracle complexity under the additional assumption of oracle access to the optimal dual variable. Along the way, we develop new nonsmooth nonconvex optimization methods with inexact oracles. Our numerical experiments verify these guarantees. | First-Order Methods for Linearly Constrained Bilevel Optimization | [
"Guy Kornowski",
"Swati Padmanabhan",
"Kai Wang",
"Zhe Zhang",
"Suvrit Sra"
] | NeurIPS.cc/2024/Conference | 2406.12771 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eM5d7ZmekA | @inproceedings{
zhang2024geolrm,
title={Geo{LRM}: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation},
author={Chubin Zhang and Hongliang Song and Yi Wei and Chen Yu and Jiwen Lu and Yansong Tang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eM5d7ZmekA}
} | In this work, we introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach that can predict high-quality assets with 512k Gaussians and 21 input images in only 11 GB of GPU memory. Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between the 3D structure and 2D images. This limits these methods to a low-resolution representation and makes it difficult to scale up to dense views for better quality. GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms to effectively integrate image features into 3D representations. We implement this solution through a two-stage pipeline: initially, a lightweight proposal network generates a sparse set of 3D anchor points from the posed image inputs; subsequently, a specialized reconstruction transformer refines the geometry and retrieves textural details. Extensive experimental results demonstrate that GeoLRM significantly outperforms existing models, especially for dense view inputs. We also demonstrate the practical applicability of our model with 3D generation tasks, showcasing its versatility and potential for broader adoption in real-world applications. The project page: https://linshan-bin.github.io/GeoLRM/. | GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation | [
"Chubin Zhang",
"Hongliang Song",
"Yi Wei",
"Chen Yu",
"Jiwen Lu",
"Yansong Tang"
] | NeurIPS.cc/2024/Conference | 2406.15333 | [
"https://github.com/alibaba-yuanjing-aigclab/geolrm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eKVugi5zr0 | @inproceedings{
huch2024rome,
title={Ro{ME}: A Robust Mixed-Effects Bandit Algorithm for Optimizing Mobile Health Interventions},
author={Easton Knight Huch and Jieru Shi and Madeline R Abbott and Jessica R Golbus and Alexander Moreno and Walter H. Dempsey},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eKVugi5zr0}
} | Mobile health leverages personalized and contextually tailored interventions optimized through bandit and reinforcement learning algorithms. In practice, however, challenges such as participant heterogeneity, nonstationarity, and nonlinear relationships hinder algorithm performance. We propose RoME, a **Ro**bust **M**ixed-**E**ffects contextual bandit algorithm that simultaneously addresses these challenges via (1) modeling the differential reward with user- and time-specific random effects, (2) network cohesion penalties, and (3) debiased machine learning for flexible estimation of baseline rewards. We establish a high-probability regret bound that depends solely on the dimension of the differential-reward model, enabling us to achieve robust regret bounds even when the baseline reward is highly complex. We demonstrate the superior performance of the RoME algorithm in a simulation and two off-policy evaluation studies. | RoME: A Robust Mixed-Effects Bandit Algorithm for Optimizing Mobile Health Interventions | [
"Easton Knight Huch",
"Jieru Shi",
"Madeline R Abbott",
"Jessica R Golbus",
"Alexander Moreno",
"Walter H. Dempsey"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=eKSRTlzRWG | @inproceedings{
peng2024structure,
title={Structure Consistent Gaussian Splatting with Matching Prior for Few-shot Novel View Synthesis},
author={Rui Peng and Wangze Xu and Luyang Tang and Liwei Liao and Jianbo Jiao and Ronggang Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eKSRTlzRWG}
} | Despite the substantial progress of novel view synthesis, existing methods, whether based on Neural Radiance Fields (NeRF) or, more recently, 3D Gaussian Splatting (3DGS), suffer significant degradation when the input becomes sparse. Numerous efforts have been introduced to alleviate this problem, but they still struggle to synthesize satisfactory results efficiently, especially in large scenes. In this paper, we propose SCGaussian, a Structure Consistent Gaussian Splatting method using matching priors to learn 3D consistent scene structure. Considering the high interdependence of Gaussian attributes, we optimize the scene structure in two respects: rendering geometry and, more importantly, the position of Gaussian primitives, which is hard to constrain directly in vanilla 3DGS due to its unstructured nature. To achieve this, we present a hybrid Gaussian representation. Besides the ordinary non-structured Gaussian primitives, our model also contains ray-based Gaussian primitives that are bound to matching rays and whose positions are optimized only along the corresponding ray. Thus, we can utilize the matching correspondence to directly enforce the positions of these Gaussian primitives to converge to the surface points where rays intersect. Extensive experiments on forward-facing, surrounding, and complex large scenes show the effectiveness of our approach with state-of-the-art performance and high efficiency. Code is available at https://github.com/prstrive/SCGaussian. | Structure Consistent Gaussian Splatting with Matching Prior for Few-shot Novel View Synthesis | [
"Rui Peng",
"Wangze Xu",
"Luyang Tang",
"Liwei Liao",
"Jianbo Jiao",
"Ronggang Wang"
] | NeurIPS.cc/2024/Conference | 2411.03637 | [
"https://github.com/prstrive/scgaussian"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eKHQbgvL3G | @inproceedings{
park2024trackime,
title={Track{IME}: Enhanced Video Point Tracking via Instance Motion Estimation},
author={Seong Hyeon Park and Huiwon Jang and Byungwoo Jeon and Sukmin Yun and Paul Hongsuck Seo and Jinwoo Shin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eKHQbgvL3G}
} | Tracking points in video frames is essential for understanding video content. However, the task is fundamentally hindered by the computational demands of brute-force correspondence matching across frames. As current models down-sample the frame resolutions to mitigate this challenge, they fall short in accurately representing point trajectories due to information truncation. Instead, we address the challenge by pruning the search space for point tracking and letting the model process only the important regions of the frames without down-sampling. Our first key idea is to identify the object instance and its trajectory over the frames, then prune the regions of the frame that do not contain the instance. Concretely, to estimate the instance’s trajectory, we track a group of points on the instance and aggregate their motion trajectories. Furthermore, to deal with occlusions in complex scenes, we propose to compensate for the occluded points while tracking. To this end, we introduce a unified framework that jointly performs point tracking and segmentation, providing synergistic effects between the two tasks. For example, the segmentation results enable a tracking model to avoid occluded points by referring to the instance mask, and conversely, the improved tracking results can help to produce more accurate segmentation masks. Our framework can be easily incorporated into various tracking models, and we demonstrate its efficacy for enhanced point tracking through extensive experiments. For example, on the recent TAP-Vid benchmark, our framework consistently improves all baselines, e.g., up to 13.5% improvement on the average Jaccard metric. | TrackIME: Enhanced Video Point Tracking via Instance Motion Estimation | [
"Seong Hyeon Park",
"Huiwon Jang",
"Byungwoo Jeon",
"Sukmin Yun",
"Paul Hongsuck Seo",
"Jinwoo Shin"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=eJG9uDqCY9 | @inproceedings{
zhang2024transcendence,
title={Transcendence: Generative Models Can Outperform The Experts That Train Them},
author={Edwin Zhang and Vincent Zhu and Naomi Saphra and Anat Kleiman and Benjamin L. Edelman and Milind Tambe and Sham M. Kakade and eran malach},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eJG9uDqCY9}
} | Generative models are trained with the simple objective of imitating the conditional probability distribution induced by the data they are trained on. Therefore, when trained on data generated by humans, we may not expect the artificial model to outperform the humans on their original objectives. In this work, we study the phenomenon of *transcendence*: when a generative model achieves capabilities that surpass the abilities of the experts generating its data. We demonstrate transcendence by training an autoregressive transformer to play chess from game transcripts, and show that the trained model can sometimes achieve better performance than all players in the dataset. We theoretically prove that transcendence is enabled by low-temperature sampling, and rigorously assess this experimentally. Finally, we discuss other sources of transcendence, laying the groundwork for future investigation of this phenomenon in a broader setting. | Transcendence: Generative Models Can Outperform The Experts That Train Them | [
"Edwin Zhang",
"Vincent Zhu",
"Naomi Saphra",
"Anat Kleiman",
"Benjamin L. Edelman",
"Milind Tambe",
"Sham M. Kakade",
"eran malach"
] | NeurIPS.cc/2024/Conference | 2406.11741 | [
""
] | https://huggingface.co/papers/2406.11741 | 0 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
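The low-temperature mechanism behind the transcendence claim above is easy to illustrate. The sketch below is a generic temperature softmax with toy numbers of our own choosing; it is not code or data from the paper:

```python
import numpy as np

def temperature_softmax(logits, T):
    """Softmax at temperature T; as T -> 0 the mass concentrates on the
    argmax, which is how low-temperature sampling can denoise a mixture
    of imperfect experts."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

# Each "expert" prefers the right move only 60% of the time; sampling the
# averaged distribution at low temperature recovers the majority choice.
mixture = np.mean([[0.6, 0.4], [0.6, 0.4], [0.6, 0.4]], axis=0)
for T in (1.0, 0.5, 0.1):
    print(T, temperature_softmax(np.log(mixture), T).round(3))
```

At T = 1.0 the sampler matches the experts (0.6 on the right move); at T = 0.1 it plays the right move with probability above 0.98, exceeding every individual expert.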
null | https://openreview.net/forum?id=eHzIwAhj06 | @inproceedings{
labonte2024the,
title={The Group Robustness is in the Details: Revisiting Finetuning under Spurious Correlations},
author={Tyler LaBonte and John Collins Hill and Xinchen zhang and Vidya Muthukumar and Abhishek Kumar},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eHzIwAhj06}
} | Modern machine learning models are prone to over-reliance on spurious correlations, which can often lead to poor performance on minority groups. In this paper, we identify surprising and nuanced behavior of finetuned models on worst-group accuracy via comprehensive experiments on four well-established benchmarks across vision and language tasks. We first show that the commonly used class-balancing techniques of mini-batch upsampling and loss upweighting can induce a decrease in worst-group accuracy (WGA) with training epochs, leading to performance no better than without class-balancing. While in some scenarios, removing data to create a class-balanced subset is more effective, we show this depends on group structure and propose a mixture method which can outperform both techniques. Next, we show that scaling pretrained models is generally beneficial for worst-group accuracy, but only in conjunction with appropriate class-balancing. Finally, we identify spectral imbalance in finetuning features as a potential source of group disparities --- minority group covariance matrices incur a larger spectral norm than majority groups once conditioned on the classes. Our results show more nuanced interactions of modern finetuned models with group robustness than was previously known. Our code is available at https://github.com/tmlabonte/revisiting-finetuning. | The Group Robustness is in the Details: Revisiting Finetuning under Spurious Correlations | [
"Tyler LaBonte",
"John Collins Hill",
"Xinchen zhang",
"Vidya Muthukumar",
"Abhishek Kumar"
] | NeurIPS.cc/2024/Conference | 2407.13957 | [
"https://github.com/tmlabonte/revisiting-finetuning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eGJnB3tUgv | @inproceedings{
zeng2024fairnessaware,
title={Fairness-Aware Meta-Learning via Nash Bargaining},
author={Yi Zeng and Xuelin Yang and Li Chen and Cristian Canton Ferrer and Ming Jin and Michael Jordan and Ruoxi Jia},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eGJnB3tUgv}
} | To address issues of group-level fairness in machine learning, it is natural to adjust model parameters based on specific fairness objectives over a sensitive-attributed validation set. Such an adjustment procedure can be cast within a meta-learning framework. However, naive integration of fairness goals via meta-learning can cause hypergradient conflicts for subgroups, resulting in unstable convergence and compromising model performance and fairness. To navigate this issue, we frame the resolution of hypergradient conflicts as a multi-player cooperative bargaining game. We introduce a two-stage meta-learning framework in which the first stage involves the use of a Nash Bargaining Solution (NBS) to resolve hypergradient conflicts and steer the model toward the Pareto front, and the second stage optimizes with respect to specific fairness goals.
Our method is supported by theoretical results, notably a proof of the NBS for gradient aggregation free from linear independence assumptions, a proof of Pareto improvement, and a proof of monotonic improvement in validation loss. We also show empirical effects across various fairness objectives in six key fairness datasets and two image classification tasks. | Fairness-Aware Meta-Learning via Nash Bargaining | [
"Yi Zeng",
"Xuelin Yang",
"Li Chen",
"Cristian Canton Ferrer",
"Ming Jin",
"Michael Jordan",
"Ruoxi Jia"
] | NeurIPS.cc/2024/Conference | 2406.07029 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eGIzeTmAtE | @inproceedings{
ma2024coljailbreak,
title={ColJailBreak: Collaborative Generation and Editing for Jailbreaking Text-to-Image Deep Generation},
author={Yizhuo Ma and Shanmin Pang and Qi Guo and Tianyu Wei and Qing Guo},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eGIzeTmAtE}
} | The commercial text-to-image deep generation models (e.g., DALL·E) can produce high-quality images based on input language descriptions. These models incorporate a black-box safety filter to prevent the generation of unsafe or unethical content, such as violent, criminal, or hateful imagery. Recent jailbreaking methods generate adversarial prompts capable of bypassing safety filters and producing unsafe content, exposing vulnerabilities in influential commercial models. However, once these adversarial prompts are identified, the safety filter can be updated to prevent the generation of unsafe images. In this work, we propose an effective, simple, and difficult-to-detect jailbreaking solution: generating safe content initially with normal text prompts and then editing the generations to embed unsafe content. The intuition behind this idea is that the deep generation model cannot reject safe generation with normal text prompts, while the editing models focus on modifying local regions of images and do not involve a safety strategy. However, implementing such a solution is non-trivial, and we need to overcome several challenges: how to automatically determine a normal prompt to replace the unsafe one, and how to effectively perform editable replacement and generate unsafe content naturally. To this end, we propose ColJailBreak, a collaborative generation and editing approach for jailbreaking text-to-image deep generation, which comprises three key components: adaptive normal safe substitution, inpainting-driven injection of unsafe content, and contrastive language-image-guided collaborative optimization. We validate our method on three datasets and compare it to two baseline methods. Our method can generate unsafe content through two commercial deep generation models, GPT-4 and DALL·E 2. | ColJailBreak: Collaborative Generation and Editing for Jailbreaking Text-to-Image Deep Generation | [
"Yizhuo Ma",
"Shanmin Pang",
"Qi Guo",
"Tianyu Wei",
"Qing Guo"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=eFrdRuyHR9 | @inproceedings{
folch2024transition,
title={Transition Constrained Bayesian Optimization via Markov Decision Processes},
author={Jose Pablo Folch and Calvin Tsay and Robert Matthew Lee and Behrang Shafei and Weronika Ormaniec and Andreas Krause and Mark van der Wilk and Ruth Misener and Mojmir Mutny},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eFrdRuyHR9}
} | Bayesian optimization is a methodology to optimize black-box functions. Traditionally, it focuses on the setting where you can arbitrarily query the search space. However, many real-life problems do not offer this flexibility; in particular, the search space of the next query may depend on previous ones. Example challenges arise in the physical sciences in the form of local movement constraints, required monotonicity in certain variables, and transitions influencing the accuracy of measurements. Altogether, such *transition constraints* necessitate a form of planning. This work extends classical Bayesian optimization via the framework of Markov Decision Processes. We iteratively solve a tractable linearization of our utility function using reinforcement learning to obtain a policy that plans ahead for the entire horizon. This is a parallel to the optimization of an *acquisition function in policy space*. The resulting policy is potentially history-dependent and non-Markovian. We showcase applications in chemical reactor optimization, informative path planning, machine calibration, and other synthetic examples. | Transition Constrained Bayesian Optimization via Markov Decision Processes | [
"Jose Pablo Folch",
"Calvin Tsay",
"Robert Matthew Lee",
"Behrang Shafei",
"Weronika Ormaniec",
"Andreas Krause",
"Mark van der Wilk",
"Ruth Misener",
"Mojmir Mutny"
] | NeurIPS.cc/2024/Conference | 2402.08406 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eFD9N5zdFC | @inproceedings{
ju2024accelerating,
title={Accelerating Nash Equilibrium Convergence in Monte Carlo Settings Through Counterfactual Value Based Fictitious Play},
author={Qi Ju and Falin Hei and Ting Feng and Dengbing Yi and Zhemei Fang and YunFeng Luo},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eFD9N5zdFC}
} | Counterfactual Regret Minimization (CFR) and its variants are widely recognized as effective algorithms for solving extensive-form imperfect information games. Recently, many improvements have been focused on enhancing the convergence speed of the CFR algorithm. However, most of these variants are not applicable under Monte Carlo (MC) conditions, making them unsuitable for training in large-scale games. We introduce a new MC-based algorithm for solving extensive-form imperfect information games, called MCCFVFP (Monte Carlo Counterfactual Value-Based Fictitious Play). MCCFVFP combines CFR’s counterfactual value calculations with fictitious play’s best response strategy, leveraging the strengths of fictitious play to gain significant advantages in games with a high proportion of dominated strategies. Experimental results show that MCCFVFP achieved convergence speeds approximately 20\%$\sim$50\% faster than the most advanced MCCFR variants in games like poker and other test games. | Accelerating Nash Equilibrium Convergence in Monte Carlo Settings Through Counterfactual Value Based Fictitious Play | [
"Qi Ju",
"Falin Hei",
"Ting Feng",
"Dengbing Yi",
"Zhemei Fang",
"YunFeng Luo"
] | NeurIPS.cc/2024/Conference | 2309.03084 | [
"https://github.com/zealoter/cfvfp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eDNslSwQIj | @inproceedings{
wu2024neural,
title={Neural Assets: 3D-Aware Multi-Object Scene Synthesis with Image Diffusion Models},
author={Ziyi Wu and Yulia Rubanova and Rishabh Kabra and Drew A. Hudson and Igor Gilitschenski and Yusuf Aytar and Sjoerd van Steenkiste and Kelsey R Allen and Thomas Kipf},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eDNslSwQIj}
} | We address the problem of multi-object 3D pose control in image diffusion models. Instead of conditioning on a sequence of text tokens, we propose to use a set of per-object representations, *Neural Assets*, to control the 3D pose of individual objects in a scene. Neural Assets are obtained by pooling visual representations of objects from a reference image, such as a frame in a video, and are trained to reconstruct the respective objects in a different image, e.g., a later frame in the video. Importantly, we encode object visuals from the reference image while conditioning on object poses from the target frame, which enables learning disentangled appearance and position features. Combining visual and 3D pose representations in a sequence-of-tokens format allows us to keep the text-to-image interface of existing models, with Neural Assets in place of text tokens. By fine-tuning a pre-trained text-to-image diffusion model with this information, our approach enables fine-grained 3D pose and placement control of individual objects in a scene. We further demonstrate that Neural Assets can be transferred and recomposed across different scenes. Our model achieves state-of-the-art multi-object editing results on both synthetic 3D scene datasets, as well as two real-world video datasets (Objectron, Waymo Open). | Neural Assets: 3D-Aware Multi-Object Scene Synthesis with Image Diffusion Models | [
"Ziyi Wu",
"Yulia Rubanova",
"Rishabh Kabra",
"Drew A. Hudson",
"Igor Gilitschenski",
"Yusuf Aytar",
"Sjoerd van Steenkiste",
"Kelsey R Allen",
"Thomas Kipf"
] | NeurIPS.cc/2024/Conference | 2406.09292 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=eC5qdC4ZTQ | @inproceedings{
liu2024unlock,
title={Unlock the Intermittent Control Ability of Model Free Reinforcement Learning},
author={Jiashun Liu and Jianye HAO and Xiaotian Hao and Yi Ma and YAN ZHENG and Yujing Hu and Tangjie Lv},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eC5qdC4ZTQ}
} | Intermittent control problems are common in the real world. The interactions between the decision maker and the executor can be discontinuous (intermittent) due to various types of interruptions, e.g., an unstable communication channel. Due to intermittent interaction, agents are unable to acquire the state sent by the executor and cannot transmit actions to the executor for a period of time steps, i.e., a bidirectional blockage, which may lead to inefficiencies of reinforcement learning policies and prevent the executors from completing the task. Such problems are not well studied in the RL community. In this paper, we model the intermittent control problem as an Intermittent Control Markov Decision Process, i.e., agents are expected to generate action sequences corresponding to the unavailable states and transmit them before interaction is disabled, to ensure the smooth and effective motion of executors. However, directly generating multiple future actions in the original action space suffers from unnatural motion and exploration difficulty. We propose **M**ulti-step **A**ction **R**epre**S**entation (**MARS**), which encodes a sequence of actions from the original action space into a compact and decodable latent space. Then, based on this latent action-sequence representation, mainstream RL methods can be easily optimized to learn a smooth and efficient motion policy. Extensive experiments on simulation tasks and real-world robotic grasping tasks show that MARS significantly improves the learning efficiency and final performance compared with existing baselines. | Unlock the Intermittent Control Ability of Model Free Reinforcement Learning | [
"Jiashun Liu",
"Jianye HAO",
"Xiaotian Hao",
"Yi Ma",
"YAN ZHENG",
"Yujing Hu",
"Tangjie Lv"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
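To make the encode-then-decode idea above concrete, here is a toy stand-in for a multi-step action representation: a tiny autoencoder that maps a k-step action sequence to one compact latent and back. All dimensions, the MLP architecture, and the class name are illustrative assumptions, not the MARS implementation:

```python
import torch
import torch.nn as nn

class ActionSeqAE(nn.Module):
    """Encode a k-step action sequence into a single latent and decode it
    back, so a policy can emit one latent that covers a blackout window."""

    def __init__(self, act_dim=6, k=8, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(act_dim * k, 64), nn.ReLU(),
                                 nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim * k))
        self.k, self.act_dim = k, act_dim

    def forward(self, actions):                 # actions: (batch, k, act_dim)
        z = self.enc(actions.flatten(1))        # compact latent code
        recon = self.dec(z).view(-1, self.k, self.act_dim)
        return z, recon

ae = ActionSeqAE()
z, recon = ae(torch.randn(4, 8, 6))
print(z.shape, recon.shape)   # torch.Size([4, 16]) torch.Size([4, 8, 6])
```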
null | https://openreview.net/forum?id=eAqcVZx30k | @inproceedings{
liu2024is,
title={Is the {MMI} Criterion Necessary for Interpretability? Degenerating Non-causal Features to Plain Noise for Self-Rationalization},
author={Wei Liu and Zhiying Deng and Zhongyu Niu and Jun Wang and Haozhao Wang and YuanKai Zhang and Ruixuan Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=eAqcVZx30k}
} | An important line of research in the field of explainability is to extract a small subset of crucial rationales from the full input. The most widely used criterion for rationale extraction is the maximum mutual information (MMI) criterion. However, in certain datasets, there are spurious features that are non-causally correlated with the label yet attain high mutual information, complicating the loss landscape of MMI. Although some penalty-based methods have been developed to penalize the spurious features (e.g., invariance penalty, intervention penalty, etc.) to help MMI work better, these are merely remedial measures.
In the optimization objectives of these methods, spurious features are still distinguished from plain noise, which hinders the discovery of causal rationales.
This paper aims to develop a new criterion that treats spurious features as plain noise, allowing the model to work on datasets rich in spurious features as if it were working on clean datasets, thereby making rationale extraction easier.
We theoretically observe that removing either plain noise or spurious features from the input does not alter the conditional distribution of the remaining components relative to the task label. However, significant changes in the conditional distribution occur only when causal features are eliminated.
Based on this discovery, the paper proposes a criterion for \textbf{M}aximizing the \textbf{R}emaining \textbf{D}iscrepancy (MRD). Experiments on six widely used datasets show that our MRD criterion improves rationale quality (measured by the overlap with human-annotated rationales) by up to $10.4\%$ as compared to several recent competitive MMI variants. Code: \url{https://github.com/jugechengzi/Rationalization-MRD}. | Is the MMI Criterion Necessary for Interpretability? Degenerating Non-causal Features to Plain Noise for Self-Rationalization | [
"Wei Liu",
"Zhiying Deng",
"Zhongyu Niu",
"Jun Wang",
"Haozhao Wang",
"YuanKai Zhang",
"Ruixuan Li"
] | NeurIPS.cc/2024/Conference | 2410.06003 | [
"https://github.com/jugechengzi/rationalization-mrd"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=e6WrwIvgzX | @inproceedings{
aggarwal2024automix,
title={AutoMix: Automatically Mixing Language Models},
author={Pranjal Aggarwal and Aman Madaan and Ankit Anand and Srividya Pranavi Potharaju and Swaroop Mishra and Pei Zhou and Aditya Gupta and Dheeraj Rajagopal and Karthik Kappaganthu and Yiming Yang and Shyam Upadhyay and Manaal Faruqui and Mausam .},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=e6WrwIvgzX}
} | Large language models (LLMs) are now available from cloud API providers in various sizes and configurations. While this diversity offers a broad spectrum of choices, effectively leveraging the options to optimize computational cost and performance remains challenging. In this work, we present AutoMix, an approach that strategically routes queries to larger LMs, based on the approximate correctness of outputs from a smaller LM. Central to AutoMix are two key technical contributions. First, it has a few-shot self-verification mechanism, which estimates the reliability of its own outputs without requiring extensive training. Second, given that self-verification can be noisy, it employs a POMDP based router that can effectively select an appropriately sized model, based on answer confidence. Experiments across five language models and five challenging datasets show that Automix consistently surpasses strong baselines, reducing computational cost by over 50\% for comparable performance. | AutoMix: Automatically Mixing Language Models | [
"Pranjal Aggarwal",
"Aman Madaan",
"Ankit Anand",
"Srividya Pranavi Potharaju",
"Swaroop Mishra",
"Pei Zhou",
"Aditya Gupta",
"Dheeraj Rajagopal",
"Karthik Kappaganthu",
"Yiming Yang",
"Shyam Upadhyay",
"Manaal Faruqui",
"Mausam ."
] | NeurIPS.cc/2024/Conference | 2310.12963 | [
"https://github.com/automix-llm/automix"
] | https://huggingface.co/papers/2310.12963 | 5 | 14 | 2 | 13 | [] | [] | [] | [] | [] | [] | 1 | poster |
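A minimal sketch of the cascade idea described above, assuming stand-in callables for the two models and the verifier; the fixed-threshold policy here replaces the paper's POMDP router, and every name is hypothetical:

```python
def automix_route(query, small_lm, large_lm, verify, threshold=0.7):
    """Two-model cascade: answer with the small model first, self-verify,
    and escalate to the large model only when confidence is low."""
    draft = small_lm(query)
    confidence = verify(query, draft)     # few-shot self-verification score
    if confidence >= threshold:
        return draft                      # cheap path
    return large_lm(query)                # escalate

# Toy stand-ins for the LM calls and the verifier:
small = lambda q: f"small-answer({q})"
large = lambda q: f"large-answer({q})"
ver = lambda q, a: 0.9 if len(q) < 20 else 0.3
print(automix_route("2+2?", small, large, ver))
print(automix_route("a long and hard question", small, large, ver))
```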
null | https://openreview.net/forum?id=e6KrSouGHJ | @inproceedings{
zhang2024attackresilient,
title={Attack-Resilient Image Watermarking Using Stable Diffusion},
author={Lijun Zhang and Xiao Liu and Antoni Viros i Martin and Cindy Xiong Bearfield and Yuriy Brun and Hui Guan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=e6KrSouGHJ}
} | Watermarking images is critical for tracking image provenance and proving ownership. With the advent of generative models, such as stable diffusion, that can create fake but realistic images, watermarking has become particularly important to make human-created images reliably identifiable. Unfortunately, the very same stable diffusion technology can remove watermarks injected using existing methods.
To address this problem, we present ZoDiac, which uses a pre-trained stable diffusion model to inject a watermark into the trainable latent space, resulting in watermarks that can be reliably detected in the latent vector even when attacked. We evaluate ZoDiac on three benchmarks, MS-COCO, DiffusionDB, and WikiArt, and find that ZoDiac is robust against state-of-the-art watermark attacks, with a watermark detection rate above 98% and a false positive rate below 6.4%, outperforming state-of-the-art watermarking methods. We hypothesize that the reciprocating denoising process in diffusion models may inherently enhance the robustness of the watermark when faced with strong attacks and validate the hypothesis. Our research demonstrates that stable diffusion is a promising approach to robust watermarking, able to withstand even stable-diffusion-based attack methods. ZoDiac is open-sourced and available at https://github.com/zhanglijun95/ZoDiac. | Attack-Resilient Image Watermarking Using Stable Diffusion | [
"Lijun Zhang",
"Xiao Liu",
"Antoni Viros i Martin",
"Cindy Xiong Bearfield",
"Yuriy Brun",
"Hui Guan"
] | NeurIPS.cc/2024/Conference | 2401.04247 | [
"https://github.com/zhanglijun95/ZoDiac"
] | https://huggingface.co/papers/2401.04247 | 0 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=e5icsXBD8Q | @inproceedings{
liu2024large,
title={Large Language Model Unlearning via Embedding-Corrupted Prompts},
author={Chris Yuhao Liu and Yaxuan Wang and Jeffrey Flanigan and Yang Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=e5icsXBD8Q}
} | Large language models (LLMs) have advanced to encompass extensive knowledge across diverse domains. Yet controlling what a large language model should not know is important for ensuring alignment and thus safe use. However, accurately and efficiently unlearning knowledge from an LLM remains challenging due to the potential collateral damage caused by the fuzzy boundary between retention and forgetting, and the large computational requirements for optimization across state-of-the-art models with hundreds of billions of parameters. In this work, we present \textbf{Embedding-COrrupted (ECO) Prompts}, a lightweight unlearning framework for large language models to address both the challenges of knowledge entanglement and unlearning efficiency. Instead of relying on the LLM itself to unlearn, we enforce an unlearned state during inference by employing a prompt classifier to identify and safeguard prompts to forget. We learn corruptions added to prompt embeddings via zeroth order optimization toward the unlearning objective offline and corrupt prompts flagged by the classifier during inference. We find that these embedding-corrupted prompts not only lead to desirable outputs that satisfy the unlearning objective but also closely approximate the output from a model that has never been trained on the data intended for forgetting. Through extensive experiments on unlearning, we demonstrate the superiority of our method in achieving promising unlearning at \textit{nearly zero side effects} in general domains and domains closely related to the unlearned ones. Additionally, we highlight the scalability of our method to 100 LLMs, ranging from 0.5B to 236B parameters, incurring no additional cost as the number of parameters increases. We have made our code publicly available at \url{https://github.com/chrisliu298/llm-unlearn-eco}. | Large Language Model Unlearning via Embedding-Corrupted Prompts | [
"Chris Yuhao Liu",
"Yaxuan Wang",
"Jeffrey Flanigan",
"Yang Liu"
] | NeurIPS.cc/2024/Conference | 2406.07933 | [
"https://github.com/chrisliu298/llm-unlearn-eco"
] | https://huggingface.co/papers/2406.07933 | 2 | 7 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
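To make the inference-time mechanism concrete, here is a toy sketch of the guard logic, assuming a hypothetical prompt classifier and a random stand-in for the corruption that the paper learns via zeroth-order optimization:

```python
import torch

def eco_guard(prompt, prompt_emb, classifier, corruption):
    """When the prompt classifier flags a prompt as belonging to the
    forget set, add a learned corruption to its token embeddings before
    the frozen LLM consumes them; safe prompts pass through untouched."""
    if classifier(prompt):                  # flagged: enforce unlearned state
        return prompt_emb + corruption      # corrupt only flagged prompts
    return prompt_emb

flag = lambda p: "harry potter" in p.lower()   # toy forget-set classifier
emb = torch.randn(12, 4096)                    # 12 tokens, hypothetical dim
delta = 0.1 * torch.randn(4096)                # stand-in for learned corruption
print(eco_guard("Who is Harry Potter?", emb, flag, delta).equal(emb))  # False
print(eco_guard("What is 2 + 2?", emb, flag, delta).equal(emb))        # True
```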
null | https://openreview.net/forum?id=e5Mv7iWfVW | @inproceedings{
chen2024what,
title={What Rotary Position Embedding Can Tell Us: Identifying Query and Key Weights Corresponding to Basic Syntactic or High-level Semantic Information},
author={Yiting Chen and Junchi Yan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=e5Mv7iWfVW}
} | Transformer-based large language models (LLMs) have successfully handled various tasks. As one fundamental module in Transformers, position encoding encodes the positional information of tokens in a sequence. Specifically, rotary position embedding (RoPE), one of the most widely used techniques, encodes the positional information by dividing a query or key vector with $d$ elements into $d/2$ pairs and rotating the 2D vector corresponding to each pair of elements. Therefore, the direction of each pair and the position-related rotation jointly determine the attention score. In this paper, we show that the direction of each 2D pair is largely affected by the angle between the corresponding weight vector pair. We theoretically show that non-orthogonal weight vector pairs lead to strong attention on tokens at a certain relative position and are less sensitive to the input, which may correspond to basic syntactic information. Meanwhile, orthogonal weight vector pairs are more flexible regarding the relative position, which may correspond to high-level semantic information. Empirical evidence supports the hypothesis that shallow layers of LLMs focus more on local syntax and deep layers focus more on high-level semantics. Furthermore, we show that LLM fine-tuning mainly changes the pairs of weight vectors that are nearly orthogonal, i.e., the weights corresponding to high-level semantics, which enables reducing the number of trainable parameters during fine-tuning without sacrificing performance. We propose a method, namely Angle-based Weight Selection (AWS), to reduce the fine-tuning overhead and verify the effectiveness of the proposed method on the widely used Alpaca-fine-tuned Llama-2. | What Rotary Position Embedding Can Tell Us: Identifying Query and Key Weights Corresponding to Basic Syntactic or High-level Semantic Information | [
"Yiting Chen",
"Junchi Yan"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
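Since the abstract above hinges on how RoPE rotates element pairs, a minimal NumPy sketch of that rotation may help. It uses the common rotate-half pairing; all dimensions are toy assumptions, not the paper's code:

```python
import numpy as np

def rope_rotate(x, positions, base=10000.0):
    """Rotate the d/2 element pairs of each row of x (shape: seq x d) by
    position-dependent angles, pairing element i with element i + d/2."""
    seq, d = x.shape
    half = d // 2
    freqs = base ** (-np.arange(half) / half)        # one frequency per pair
    angles = positions[:, None] * freqs[None, :]     # (seq, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]                # the two halves of each pair
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

# The attention logit of a rotated query/key depends only on relative position:
rng = np.random.default_rng(0)
q, k = rng.normal(size=(1, 8)), rng.normal(size=(1, 8))
pos = np.arange(6, dtype=float)
q_rot = rope_rotate(np.repeat(q, 6, axis=0), pos)
k_rot = rope_rotate(np.repeat(k, 6, axis=0), pos)
print(q_rot[3] @ k_rot[1], q_rot[4] @ k_rot[2])      # equal: both offsets are 2
```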
null | https://openreview.net/forum?id=e57B7BfA2B | @inproceedings{
wang2024exploring,
title={Exploring {DCN}-like architecture for fast image generation with arbitrary resolution},
author={Shuai Wang and Zexian Li and Tianhui Song and Xubin Li and Tiezheng Ge and Bo Zheng and Limin Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=e57B7BfA2B}
} | Arbitrary-resolution image generation remains a challenging task in AIGC, as it requires handling varying resolutions and aspect ratios while maintaining high visual quality. Existing transformer-based diffusion methods suffer from quadratic computation cost and limited resolution extrapolation capabilities, making them less effective for this task. In this paper, we propose FlowDCN, a purely convolution-based generative model with linear time and memory complexity that can efficiently generate high-quality images at arbitrary resolutions. Equipped with a newly designed learnable group-wise deformable convolution block, our FlowDCN yields higher flexibility and the capability to handle different resolutions with a single model.
FlowDCN achieves the state-of-the-art 4.30 sFID on the $256\times256$ ImageNet benchmark and comparable resolution extrapolation results, surpassing transformer-based counterparts in terms of convergence speed (requiring only $\frac{1}{5}$ of the images), visual quality, parameters ($8\%$ reduction) and FLOPs ($20\%$ reduction). We believe FlowDCN offers a promising solution to scalable and flexible image synthesis. | Exploring DCN-like architecture for fast image generation with arbitrary resolution | [
"Shuai Wang",
"Zexian Li",
"Tianhui Song",
"Xubin Li",
"Tiezheng Ge",
"Bo Zheng",
"Limin Wang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
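The deformable-convolution primitive that DCN-like models such as FlowDCN build on is available in torchvision. The snippet below only demonstrates that primitive with illustrative shapes; it is not the paper's learnable group-wise block:

```python
import torch
from torchvision.ops import deform_conv2d

# A single 3x3 deformable convolution: the offset tensor lets every output
# location sample the input at learned, spatially varying positions.
x = torch.randn(1, 8, 16, 16)
weight = torch.randn(8, 8, 3, 3)                 # (out_ch, in_ch, kH, kW)
offset = torch.zeros(1, 2 * 3 * 3, 16, 16,       # (dy, dx) per kernel tap
                     requires_grad=True)         # offsets are learnable
y = deform_conv2d(x, offset, weight, padding=1)
print(y.shape)                                   # torch.Size([1, 8, 16, 16])
```

With all offsets at zero this reduces to an ordinary 3x3 convolution; training moves the sampling locations.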
null | https://openreview.net/forum?id=e49QqJxCwq | @inproceedings{
zuo2024plip,
title={{PLIP}: Language-Image Pre-training for Person Representation Learning},
author={Jialong Zuo and Jiahao Hong and Feng Zhang and Changqian Yu and Hanyu Zhou and Changxin Gao and Nong Sang and Jingdong Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=e49QqJxCwq}
} | Language-image pre-training is an effective technique for learning powerful representations in general domains. However, when applied directly to person representation learning, these general pre-training methods yield unsatisfactory performance. The reason is that they neglect critical person-related characteristics, i.e., fine-grained attributes and identities. To address this issue, we propose a novel language-image pre-training framework for person representation learning, termed PLIP. Specifically, we elaborately design three pretext tasks: 1) Text-guided Image Colorization, which aims to establish the correspondence between person-related image regions and fine-grained color-part textual phrases; 2) Image-guided Attributes Prediction, which aims to mine fine-grained attribute information of the person body in the image; and 3) Identity-based Vision-Language Contrast, which aims to correlate the cross-modal representations at the identity level rather than the instance level. Moreover, to implement our pre-training framework, we construct a large-scale person dataset with image-text pairs named SYNTH-PEDES by automatically generating textual annotations. We pre-train PLIP on SYNTH-PEDES and evaluate our models across a range of downstream person-centric tasks. PLIP not only significantly improves existing methods on all these tasks, but also shows great ability in the zero-shot and domain generalization settings. The code, dataset and weights will be made publicly available. | PLIP: Language-Image Pre-training for Person Representation Learning | [
"Jialong Zuo",
"Jiahao Hong",
"Feng Zhang",
"Changqian Yu",
"Hanyu Zhou",
"Changxin Gao",
"Nong Sang",
"Jingdong Wang"
] | NeurIPS.cc/2024/Conference | 2305.08386 | [
"https://github.com/zplusdragon/plip"
] | https://huggingface.co/papers/2305.08386 | 0 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=e397soEZh8 | @inproceedings{
kogkalidis2024learning,
title={Learning Structure-Aware Representations of Dependent Types},
author={Konstantinos Kogkalidis and Orestis Melkonian and Jean-Philippe Bernardy},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=e397soEZh8}
} | Agda is a dependently-typed programming language and a proof assistant, pivotal in proof formalization and programming language theory.
This paper extends the Agda ecosystem into machine learning territory, and, vice versa, makes Agda-related resources available to machine learning practitioners.
We introduce and release a novel dataset of Agda program-proofs that is elaborate and extensive enough to support various machine learning applications -- the first of its kind.
Leveraging the dataset's ultra-high resolution, which details proof states at the sub-type level, we propose a novel neural architecture targeted at faithfully representing dependently-typed programs on the basis of structural rather than nominal principles.
We instantiate and evaluate our architecture in a premise selection setup, where it achieves promising initial results, surpassing strong baselines. | Learning Structure-Aware Representations of Dependent Types | [
"Konstantinos Kogkalidis",
"Orestis Melkonian",
"Jean-Philippe Bernardy"
] | NeurIPS.cc/2024/Conference | 2402.02104 | [
"https://github.com/konstantinosKokos/neural-agda"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=e2R4WNHHGQ | @inproceedings{
yazdani-jahromi2024fair,
title={Fair Bilevel Neural Network (FairBi{NN}): On Balancing fairness and accuracy via Stackelberg Equilibrium},
author={Mehdi Yazdani-Jahromi and Ali Khodabandeh Yalabadi and Amirarsalan Rajabi and Aida Tayebi and Ivan Garibay and Ozlem Garibay},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=e2R4WNHHGQ}
} | The persistent challenge of bias in machine learning models necessitates robust solutions to ensure parity and equal treatment across diverse groups, particularly in classification tasks. Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness. To address this, we propose a novel methodology grounded in bilevel optimization principles. Our deep learning-based approach concurrently optimizes for both accuracy and fairness objectives, and under certain assumptions, achieving proven Pareto optimal solutions while mitigating bias in the trained model. Theoretical analysis indicates that the upper bound on the loss incurred by this method is less than or equal to the loss of the Lagrangian approach, which involves adding a regularization term to the loss function. We demonstrate the efficacy of our model primarily on tabular datasets such as UCI Adult and Heritage Health. When benchmarked against state-of-the-art fairness methods, our model exhibits superior performance, advancing fairness-aware machine learning solutions and bridging the accuracy-fairness gap. The implementation of FairBiNN is available on https://github.com/yazdanimehdi/FairBiNN. | Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium | [
"Mehdi Yazdani-Jahromi",
"Ali Khodabandeh Yalabadi",
"Amirarsalan Rajabi",
"Aida Tayebi",
"Ivan Garibay",
"Ozlem Garibay"
] | NeurIPS.cc/2024/Conference | 2410.16432 | [
"https://github.com/yazdanimehdi/fairbinn"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=e2INndPINB | @inproceedings{
kim2024rethinking,
title={Rethinking Reconstruction-based Graph-Level Anomaly Detection: Limitations and a Simple Remedy},
author={Sunwoo Kim and Soo Yong Lee and Fanchen Bu and Shinhwan Kang and Kyungho Kim and Jaemin Yoo and Kijung Shin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=e2INndPINB}
} | Graph autoencoders (Graph-AEs) learn representations of given graphs by aiming to accurately reconstruct them. A notable application of Graph-AEs is graph-level anomaly detection (GLAD), whose objective is to identify graphs with anomalous topological structures and/or node features compared to the majority of the graph population. Graph-AEs for GLAD regard a graph with a high mean reconstruction error (i.e. mean of errors from all node pairs and/or nodes) as anomalies. Namely, the methods rest on the assumption that they would better reconstruct graphs with similar characteristics to the majority. We, however, report non-trivial counter-examples, a phenomenon we call reconstruction flip, and highlight the limitations of the existing Graph-AE-based GLAD methods. Specifically, we empirically and theoretically investigate when this assumption holds and when it fails. Through our analyses, we further argue that, while the reconstruction errors for a given graph are effective features for GLAD, leveraging the multifaceted summaries of the reconstruction errors, beyond just mean, can further strengthen the features. Thus, we propose a novel and simple GLAD method, named MUSE. The key innovation of MUSE involves taking multifaceted summaries of reconstruction errors as graph features for GLAD. This surprisingly simple method obtains SOTA performance in GLAD, performing best overall among 14 methods across 10 datasets. | Rethinking Reconstruction-based Graph-Level Anomaly Detection: Limitations and a Simple Remedy | [
"Sunwoo Kim",
"Soo Yong Lee",
"Fanchen Bu",
"Shinhwan Kang",
"Kyungho Kim",
"Jaemin Yoo",
"Kijung Shin"
] | NeurIPS.cc/2024/Conference | 2410.20366 | [
"https://github.com/kswoo97/GLAD_MUSE"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
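The core idea of multifaceted error summaries is easy to sketch; the particular statistics below are our own illustrative choices, not necessarily the set used by MUSE:

```python
import numpy as np

def error_summaries(recon_errors):
    """Summarize a graph's per-edge/per-node reconstruction errors with
    several statistics rather than the mean alone, yielding a richer
    feature vector for graph-level anomaly detection."""
    e = np.asarray(recon_errors, dtype=float)
    return {
        "mean": e.mean(),
        "std": e.std(),
        "max": e.max(),
        "median": np.median(e),
        "q90": np.quantile(e, 0.9),
    }

errors = np.abs(np.random.default_rng(0).normal(size=200))
print(error_summaries(errors))
```

Two graphs with the same mean error can still differ sharply in the tail statistics, which is what the mean-only criterion misses.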
null | https://openreview.net/forum?id=e0SQ6wsHjv | @inproceedings{
zhao2024dynamic,
title={Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation},
author={Wangbo Zhao and Jiasheng Tang and Yizeng Han and Yibing Song and Kai Wang and Gao Huang and Fan Wang and Yang You},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=e0SQ6wsHjv}
} | Existing parameter-efficient fine-tuning (PEFT) methods have achieved significant success on vision transformers (ViTs) adaptation by improving parameter efficiency. However, the exploration of enhancing inference efficiency during adaptation remains underexplored. This limits the broader application of pre-trained ViT models, especially when the model is computationally extensive. In this paper, we propose Dynamic Tuning (DyT), a novel approach to improve both parameter and inference efficiency for ViT adaptation. Specifically, besides using the lightweight adapter modules, we propose a token dispatcher to distinguish informative tokens from less important ones, allowing the latter to dynamically skip the original block, thereby reducing the redundant computation during inference. Additionally, we explore multiple design variants to find the best practice of DyT. Finally, inspired by the mixture-of-experts (MoE) mechanism, we introduce an enhanced adapter to further boost the adaptation performance. We validate DyT across various tasks, including image/video recognition and semantic segmentation. For instance, DyT achieves superior performance compared to existing PEFT methods while evoking only 71% of their FLOPs on the VTAB-1K benchmark. | Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation | [
"Wangbo Zhao",
"Jiasheng Tang",
"Yizeng Han",
"Yibing Song",
"Kai Wang",
"Gao Huang",
"Fan Wang",
"Yang You"
] | NeurIPS.cc/2024/Conference | 2403.11808 | [
"https://github.com/nus-hpc-ai-lab/dynamic-tuning"
] | https://huggingface.co/papers/2403.11808 | 1 | 0 | 1 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=dz6ex9Ee0Q | @inproceedings{
hou2024robust,
title={Robust Graph Neural Networks via Unbiased Aggregation},
author={Zhichao Hou and Ruiqi Feng and Tyler Derr and Xiaorui Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dz6ex9Ee0Q}
} | The adversarial robustness of Graph Neural Networks (GNNs) has been questioned due to the false sense of security uncovered by strong adaptive attacks despite the existence of numerous defenses. In this work, we delve into the robustness analysis of representative robust GNNs and provide a unified robust estimation point of view to understand their robustness and limitations. Our novel analysis of estimation bias motivates the design of a robust and unbiased graph signal estimator. We then develop an efficient Quasi-Newton Iterative Reweighted Least Squares algorithm to solve the estimation problem, which is unfolded as robust unbiased aggregation layers in GNNs with theoretical guarantees. Our comprehensive experiments confirm the strong robustness of our proposed model under various scenarios, and the ablation study provides a deep understanding of its advantages. | Robust Graph Neural Networks via Unbiased Aggregation | [
"Zhichao Hou",
"Ruiqi Feng",
"Tyler Derr",
"Xiaorui Liu"
] | NeurIPS.cc/2024/Conference | 2311.14934 | [
"https://github.com/chris-hzc/rung"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=dyZ8GJZjtX | @inproceedings{
wu2024multihead,
title={Multi-Head Mixture-of-Experts},
author={Xun Wu and Shaohan Huang and Wenhui Wang and Shuming Ma and Li Dong and Furu Wei},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dyZ8GJZjtX}
} | Sparse Mixtures of Experts (SMoE) scales model capacity without significant increases in computational costs. However, it exhibits the low expert activation issue, i.e., only a small subset of experts are activated for optimization, leading to suboptimal performance and limiting its effectiveness in learning a larger number of experts in complex tasks. In this paper, we propose Multi-Head Mixture-of-Experts (MH-MoE). MH-MoE split each input token into multiple sub-tokens, then these sub-tokens are assigned to and processed by a diverse set of experts in parallel, and seamlessly reintegrated into the original token form. The above operations enables MH-MoE to significantly enhance expert activation while collectively attend to information from various representation spaces within different experts to deepen context understanding. Besides, it's worth noting that our MH-MoE is straightforward to implement and decouples from other SMoE frameworks, making it easy to integrate with these frameworks for enhanced performance. Extensive experimental results across different parameter scales (300M to 7B) and three pre-training tasks—English-focused language modeling, multi-lingual language modeling and masked multi-modality modeling—along with multiple downstream validation tasks, demonstrate the effectiveness of MH-MoE. | Multi-Head Mixture-of-Experts | [
"Xun Wu",
"Shaohan Huang",
"Wenhui Wang",
"Shuming Ma",
"Li Dong",
"Furu Wei"
] | NeurIPS.cc/2024/Conference | 2404.15045 | [
"https://github.com/yushuiwx/mh-moe"
] | https://huggingface.co/papers/2404.15045 | 2 | 59 | 2 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
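A compact sketch of the split-route-merge pattern described above; the hidden sizes, top-1 routing, and merge projection are assumptions for illustration, not the released MH-MoE code:

```python
import torch
import torch.nn as nn

class MultiHeadMoELayer(nn.Module):
    """Split each token into h sub-tokens, route each sub-token to its
    top-1 expert, then merge the processed sub-tokens back into a token."""

    def __init__(self, d_model=64, n_heads=4, n_experts=8):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d_sub = n_heads, d_model // n_heads
        self.router = nn.Linear(self.d_sub, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(self.d_sub, self.d_sub), nn.GELU(),
                          nn.Linear(self.d_sub, self.d_sub))
            for _ in range(n_experts))
        self.merge = nn.Linear(d_model, d_model)   # re-integrate sub-tokens

    def forward(self, x):                           # x: (batch, seq, d_model)
        b, s, d = x.shape
        sub = x.reshape(b * s * self.h, self.d_sub)  # flatten into sub-tokens
        gate = self.router(sub).softmax(-1)          # routing scores
        top = gate.argmax(-1)                        # top-1 expert per sub-token
        out = torch.zeros_like(sub)
        for e, expert in enumerate(self.experts):
            mask = top == e
            if mask.any():
                out[mask] = expert(sub[mask]) * gate[mask, e:e + 1]
        return self.merge(out.reshape(b, s, d))

layer = MultiHeadMoELayer()
print(layer(torch.randn(2, 5, 64)).shape)   # torch.Size([2, 5, 64])
```

Because routing happens at the sub-token level, a single token can touch several experts at once, which is the intuition behind the improved expert activation.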
null | https://openreview.net/forum?id=dxyNVEBQMp | @inproceedings{
kang2024introducing,
title={Introducing Spectral Attention for Long-Range Dependency in Time Series Forecasting},
author={Bong Gyun Kang and Dongjun Lee and HyunGi Kim and Dohyun Chung and Sungroh Yoon},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dxyNVEBQMp}
} | Sequence modeling faces challenges in capturing long-range dependencies across diverse tasks. Recent linear and transformer-based forecasters have shown superior performance in time series forecasting. However, they are constrained by their inherent inability to effectively address long-range dependencies in time series data, primarily due to using fixed-size inputs for prediction. Furthermore, they typically sacrifice essential temporal correlation among consecutive training samples by shuffling them into mini-batches. To overcome these limitations, we introduce a fast and effective Spectral Attention mechanism, which preserves temporal correlations among samples and facilitates the handling of long-range information while maintaining the base model structure. Spectral Attention preserves long-period trends through a low-pass filter and facilitates gradient to flow between samples. Spectral Attention can be seamlessly integrated into most sequence models, allowing models with fixed-sized look-back windows to capture long-range dependencies over thousands of steps. Through extensive experiments on 11 real-world time series datasets using 7 recent forecasting models, we consistently demonstrate the efficacy of our Spectral Attention mechanism, achieving state-of-the-art results. | Introducing Spectral Attention for Long-Range Dependency in Time Series Forecasting | [
"Bong Gyun Kang",
"Dongjun Lee",
"HyunGi Kim",
"Dohyun Chung",
"Sungroh Yoon"
] | NeurIPS.cc/2024/Conference | 2410.20772 | [
"https://github.com/djlee1208/bsa_2024"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
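As a rough illustration of low-pass filtering across temporally ordered samples, here is an exponential-moving-average sketch; the actual Spectral Attention mechanism differs, and the smoothing factors, function name, and shapes are arbitrary assumptions:

```python
import torch

def low_pass_memory(features, alphas=(0.9, 0.99, 0.999)):
    """Keep running exponential averages of per-sample features at several
    time scales, exposing long-period trends to a model whose look-back
    window is otherwise fixed; gradients flow through the running states."""
    banks = [torch.zeros_like(features[0]) for _ in alphas]
    out = []
    for x in features:                  # iterate samples in temporal order
        states = []
        for i, a in enumerate(alphas):
            banks[i] = a * banks[i] + (1 - a) * x   # EMA low-pass filter
            states.append(banks[i])
        out.append(torch.stack(states))
    return torch.stack(out)             # (n_samples, n_filters, d)

x = torch.randn(100, 8)                 # 100 consecutive samples, 8 features
print(low_pass_memory(x).shape)         # torch.Size([100, 3, 8])
```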
null | https://openreview.net/forum?id=dxxj4S06YL | @inproceedings{
balkanski2024fair,
title={Fair Secretaries with Unfair Predictions},
author={Eric Balkanski and Will Ma and Andreas Maggiori},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dxxj4S06YL}
} | Algorithms with predictions is a recent framework for decision-making under uncertainty that leverages the power of machine-learned predictions without making any assumption about their quality. The goal in this framework is for algorithms to achieve an improved performance when the predictions are accurate while maintaining acceptable guarantees when the predictions are erroneous. A serious concern with algorithms that use predictions is that these predictions can be biased and, as a result, cause the algorithm to make decisions that are deemed unfair. We show that this concern manifests itself in the classical secretary problem in the learning-augmented setting---the state-of-the-art algorithm can have zero probability of accepting the best candidate, which we deem unfair, despite promising to accept a candidate whose expected value is at least $\max\{\Omega(1), 1 - O(\varepsilon)\}$ times the optimal value, where $\varepsilon$ is the prediction error.
We show how to preserve this promise while also guaranteeing to accept the best candidate with probability $\Omega(1)$. Our algorithm and analysis are based on a new ``pegging'' idea that diverges from existing works and simplifies/unifies some of their results. Finally, we extend to the $k$-secretary problem and complement our theoretical analysis with experiments. | Fair Secretaries with Unfair Predictions | [
"Eric Balkanski",
"Will Ma",
"Andreas Maggiori"
] | NeurIPS.cc/2024/Conference | 2411.09854 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
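For orientation, the sketch below shows the classical observe-then-accept secretary baseline with one prediction-aware tweak (also accept a candidate who reaches the predicted best value). It is a toy illustration of how a prediction can enter the decision rule, not the paper's "pegging" algorithm or its guarantees.

```python
import random

def secretary_with_prediction(values, predicted_best, observe_frac=1 / 2.718):
    """Illustrative sketch: the classical observe-then-accept secretary rule,
    with one prediction-aware tweak -- also accept a candidate whose value
    reaches the predicted best. NOT the paper's 'pegging' algorithm, just a
    toy baseline showing how a prediction can enter the decision."""
    n = len(values)
    cutoff = max(1, int(n * observe_frac))        # observation phase (~n/e)
    benchmark = max(values[:cutoff])
    for v in values[cutoff:]:                     # acceptance phase
        if v > benchmark or v >= predicted_best:
            return v
    return values[-1]                             # forced to take the last one

random.seed(0)
vals = [random.random() for _ in range(100)]
print(secretary_with_prediction(vals, predicted_best=0.95))
```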
null | https://openreview.net/forum?id=dxwIaCVkWU | @inproceedings{
sennesh2024divideandconquer,
title={Divide-and-Conquer Predictive Coding: a structured Bayesian inference algorithm},
author={Eli Zachary Sennesh and Hao Wu and Tommaso Salvatori},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dxwIaCVkWU}
} | Unexpected stimuli induce "error" or "surprise" signals in the brain. The theory of predictive coding promises to explain these observations in terms of Bayesian inference by suggesting that the cortex implements variational inference in a probabilistic graphical model. However, when applied to machine learning tasks, this family of algorithms has yet to perform on par with other variational approaches in high-dimensional, structured inference problems. To address this, we introduce a novel predictive coding algorithm for structured generative models, which we call divide-and-conquer predictive coding (DCPC); it differs from other formulations of predictive coding, as it respects the correlation structure of the generative model and provably performs maximum-likelihood updates of model parameters, all without sacrificing biological plausibility. Empirically, DCPC achieves better numerical performance than competing algorithms and provides accurate inference in a number of problems not previously addressed with predictive coding. We provide an open implementation of DCPC in Pyro on GitHub. | Divide-and-Conquer Predictive Coding: a structured Bayesian inference algorithm | [
"Eli Zachary Sennesh",
"Hao Wu",
"Tommaso Salvatori"
] | NeurIPS.cc/2024/Conference | 2408.05834 | [
"https://github.com/esennesh/ppc_experiments"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
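As a concrete anchor for the predictive-coding framing in the abstract above, here is a toy inference loop in a linear-Gaussian model, where the latent estimate descends an energy built from prediction errors. The single global gradient update is an assumption of the sketch; DCPC itself uses coordinate-wise, divide-and-conquer proposals that respect the model's correlation structure.

```python
import torch

# Illustrative predictive-coding inference in a linear-Gaussian model
# x ~ N(W z, I): the latent is updated by gradients of prediction error.
torch.manual_seed(0)
W = torch.randn(20, 5)                  # generative weights (fixed here)
x = W @ torch.randn(5)                  # an observation from the model
z = torch.zeros(5, requires_grad=True)  # latent estimate to be inferred

opt = torch.optim.SGD([z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    err = x - W @ z                     # prediction error ("surprise" signal)
    energy = 0.5 * (err ** 2).sum() + 0.5 * (z ** 2).sum()  # + Gaussian prior
    energy.backward()
    opt.step()
print(energy.item())                    # energy should have decreased
```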
null | https://openreview.net/forum?id=dwYekpbmYG | @inproceedings{
huang2024free,
title={Free Lunch in Pathology Foundation Model: Task-specific Model Adaptation with Concept-Guided Feature Enhancement},
author={Yanyan Huang and Weiqin Zhao and Yihang Chen and Yu Fu and Lequan Yu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dwYekpbmYG}
} | Whole slide image (WSI) analysis is gaining prominence within the medical imaging field. Recent advances in pathology foundation models have shown the potential to extract powerful feature representations from WSIs for downstream tasks. However, these foundation models are usually designed for general-purpose pathology image analysis and may not be optimal for specific downstream tasks or cancer types. In this work, we present Concept Anchor-guided Task-specific Feature Enhancement (CATE), an adaptable paradigm that can boost the expressivity and discriminativeness of pathology foundation models for specific downstream tasks. Based on a set of task-specific concepts derived from the pathology vision-language model with expert-designed prompts, we introduce two interconnected modules to dynamically calibrate the generic image features extracted by foundation models for certain tasks or cancer types. Specifically, we design a Concept-guided Information Bottleneck module to enhance task-relevant characteristics by maximizing the mutual information between image features and concept anchors while suppressing superfluous information. Moreover, a Concept-Feature Interference module is proposed to utilize the similarity between calibrated features and concept anchors to further generate discriminative task-specific features. The extensive experiments on public WSI datasets demonstrate that CATE significantly enhances the performance and generalizability of MIL models. Additionally, heatmap and UMAP visualization results also reveal the effectiveness and interpretability of CATE. | Free Lunch in Pathology Foundation Model: Task-specific Model Adaptation with Concept-Guided Feature Enhancement | [
"Yanyan Huang",
"Weiqin Zhao",
"Yihang Chen",
"Yu Fu",
"Lequan Yu"
] | NeurIPS.cc/2024/Conference | 2411.09894 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
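As a rough illustration of concept-anchored calibration in the row above: score each patch feature against text-derived concept anchors and damp patches that match no concept confidently. The temperature and the weighting rule below are assumptions, not CATE's actual CIB/CFI modules.

```python
import torch
import torch.nn.functional as F

def concept_calibrate(feats, anchors, tau=0.07):
    """Illustrative sketch: reweight patch features by their similarity to
    task-specific concept anchors (e.g., text embeddings from a pathology
    vision-language model). Temperature and weighting rule are assumptions."""
    f = F.normalize(feats, dim=-1)        # (num_patches, dim)
    a = F.normalize(anchors, dim=-1)      # (num_concepts, dim)
    sim = f @ a.T / tau                   # scaled cosine similarities
    w = sim.softmax(-1).max(-1).values    # confidence of best-matching concept
    return feats * w.unsqueeze(-1)        # damp concept-irrelevant patches

print(concept_calibrate(torch.randn(100, 512), torch.randn(8, 512)).shape)
```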
null | https://openreview.net/forum?id=dtvJF1Vy2i | @inproceedings{
lauren{\c{c}}on2024what,
title={What matters when building vision-language models?},
author={Hugo Lauren{\c{c}}on and Leo Tronchon and Matthieu Cord and Victor Sanh},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dtvJF1Vy2i}
} | The growing interest in vision-language models (VLMs) has been driven by improvements in large language models and vision transformers. Despite the abundance of literature on this subject, we observe that critical decisions regarding the design of VLMs are often not justified. We argue that these unsupported decisions impede progress in the field by making it difficult to identify which choices improve model performance. To address this issue, we conduct extensive experiments around pre-trained models, architecture choice, data, and training methods. Our consolidation of findings includes the development of Idefics2, an efficient foundational VLM of 8 billion parameters. Idefics2 achieves state-of-the-art performance within its size category across various multimodal benchmarks, and is often on par with models four times its size. We release the model (base, instructed, and chat) along with the datasets created for its training. | What matters when building vision-language models? | [
"Hugo Laurençon",
"Leo Tronchon",
"Matthieu Cord",
"Victor Sanh"
] | NeurIPS.cc/2024/Conference | 2405.02246 | [
""
] | https://huggingface.co/papers/2405.02246 | 3 | 99 | 3 | 4 | [
"HuggingFaceM4/idefics2-8b",
"HuggingFaceM4/Idefics3-8B-Llama3",
"HuggingFaceM4/idefics2-8b-chatty",
"HuggingFaceM4/idefics2-8b-base",
"turing-motors/Heron-Idefics2-8B-v0.1",
"Reverb/Idefics2-8b-docVQA-finetuned",
"Trelis/idefics2-8b-chatty-bf16",
"huz-relay/idefics2-8b-ocr",
"peterpeter8585/ai2"
] | [
"HuggingFaceM4/the_cauldron",
"turing-motors/Cauldron-JA"
] | [
"HuggingFaceM4/idefics2_playground",
"HuggingFaceM4/idefics-8b",
"HuggingFaceM4/idefics3",
"thobuiq/GPT-4o",
"TIGER-Lab/MEGA-Bench",
"EPFL-VILAB/ViPer",
"eltorio/IDEFICS3_ROCO",
"dwb2023/omniscience",
"m-ric/rate_coolness",
"AdrienB134/rag_colpali_idefics3",
"Saee/vQA-exploration",
"awacke1/idefics_and_chatty",
"arad1367/Marketing_Vision_HuggingFaceM4_idefics3",
"dwb2023/model_explorer2",
"acecalisto3/IDEfix",
"pettah/PETTAHAI-Chatgpt4o-Demo",
"AchilleDev/perpetron",
"d-delaurier/Judge-vLLM",
"Rooni/OpenGPT-4o",
"Cesarcr/GPT-4o",
"emoud/IDEFICS3_ROCO",
"vaugheu/Idefics2_8B_Chatty",
"fardinkai/GPT-4o",
"hexgrad/IDEFICS3_ROCO_ZeroGPU",
"sherrybabe1978/OpenGPT-4o",
"dwb2023/model_explorer4",
"mcouaillac/IDEFICS3_ROCO_ZeroGPU",
"HuggingFaceH4/idefics2-8b-playground",
"lillab-demos/cogen",
"lillab-demos/respect",
"marc-mao/idefics2_playground",
"IncinerateZ/chatbot",
"cocktailpeanut/idefics-8b",
"Zaherrr/KG_transform",
"acecalisto3/IDE-play",
"dawood/idefics2_playground",
"jkorstad/idefics3",
"pettah/pettahaiGPT40",
"Rahulhuggingface/AAnh",
"fatima3597/AI-Podcast-Creator",
"taronsarkisyan/GPT-4o",
"ignitariumcloud/idefics2",
"jlecocq/radiology-test",
"vijaykumar85601/idefics2_playground",
"Stable-Human/idefics2_playground",
"cmaire/IDEFICS3_ROCO_ZeroGPU",
"arptakash/GPT-4o",
"LuxOAI/LUXX",
"cmaire/IDEFICS3_ROCO",
"Tamqeen/Chatbot-Llama",
"awacke1/idefics2_playground-demo",
"NekonekoID/GPT-4o",
"ggilabert/idefics2_playground",
"minhdang/OpenGPT-4o",
"vakilrathod67/Owngpt",
"LuxOAI/OpenGPT-4o",
"figh8back/fynd-idefics2-bb",
"ThinkAI-Morocco/KYA_idefics2_yalla",
"amanavinash/GPT-4o",
"sapan3012/OpenGPT-4o",
"jcheng5/multimodal",
"Zafer01/OpenGPT4",
"MasterDee/OpenGPH-4o",
"xi0v/Omni4All",
"Mandeep20/GPT-4o",
"mebinjo/OpenGPT-4o",
"sumitmeharwade/visionmodel",
"bala0o8o0/hexoticlabs-OpenGPT-4o",
"tnzly/TAI.o",
"AnViFedotov/OpenGPT-4o",
"iiced/OpenGPT-4o",
"Losthack777/mohamedsalem",
"Losthack777/OpenGPT-4o",
"Jayanath1987/JBL-OpenGPT-4o",
"Satyam-Singh/OpenAi_GPT_4-o",
"jihadzakki/idefics2_deploy",
"Jayanath1987/OpenGPT-4o",
"Anon0777/chat-app-model-hf",
"anjanprasad112/OpenGPT",
"oscarwang2/OPENCHAT",
"ka1kuk/fastapi-demo",
"Kalbe-x-Bangkit/medVQA-tester",
"Kalbe-x-Bangkit/Virtual_Question_Answering_Kalbe",
"dineth554/novafulldemov2",
"Kalbe-x-Bangkit/IDEFICS2-8B-MedicalVQA",
"Mareks1993/testing123",
"KalbeDigitalLab/IDEFICS2-8B-MedicalVQA",
"Tech-Meld/Hajax_MultiModal",
"rodrigomasini/dbkjff",
"jayyd/idefics2_playground",
"YanMASTER/OpenGPT-4o",
"Abhinay45/OpenGPT-4o",
"HuggingFaceH4/idefics2-8b-vdpoed-playground",
"Almaatla/OpenGPT-4o",
"sonar2377/OpenGPT-4o",
"saicharan1234/idefics2_playground",
"dorosara/OpenGPT-4o",
"Vinfinity/OpenGPT-4o",
"raghu8096/OpenGPT-4o",
"peterpeter8585/GPT4"
] | [
"HuggingFaceM4/idefics2-8b",
"HuggingFaceM4/Idefics3-8B-Llama3",
"HuggingFaceM4/idefics2-8b-chatty",
"HuggingFaceM4/idefics2-8b-base",
"turing-motors/Heron-Idefics2-8B-v0.1",
"Reverb/Idefics2-8b-docVQA-finetuned",
"Trelis/idefics2-8b-chatty-bf16",
"huz-relay/idefics2-8b-ocr",
"peterpeter8585/ai2"
] | [
"HuggingFaceM4/the_cauldron",
"turing-motors/Cauldron-JA"
] | [
"HuggingFaceM4/idefics2_playground",
"HuggingFaceM4/idefics-8b",
"HuggingFaceM4/idefics3",
"thobuiq/GPT-4o",
"TIGER-Lab/MEGA-Bench",
"EPFL-VILAB/ViPer",
"eltorio/IDEFICS3_ROCO",
"dwb2023/omniscience",
"m-ric/rate_coolness",
"AdrienB134/rag_colpali_idefics3",
"Saee/vQA-exploration",
"awacke1/idefics_and_chatty",
"arad1367/Marketing_Vision_HuggingFaceM4_idefics3",
"dwb2023/model_explorer2",
"acecalisto3/IDEfix",
"pettah/PETTAHAI-Chatgpt4o-Demo",
"AchilleDev/perpetron",
"d-delaurier/Judge-vLLM",
"Rooni/OpenGPT-4o",
"Cesarcr/GPT-4o",
"emoud/IDEFICS3_ROCO",
"vaugheu/Idefics2_8B_Chatty",
"fardinkai/GPT-4o",
"hexgrad/IDEFICS3_ROCO_ZeroGPU",
"sherrybabe1978/OpenGPT-4o",
"dwb2023/model_explorer4",
"mcouaillac/IDEFICS3_ROCO_ZeroGPU",
"HuggingFaceH4/idefics2-8b-playground",
"lillab-demos/cogen",
"lillab-demos/respect",
"marc-mao/idefics2_playground",
"IncinerateZ/chatbot",
"cocktailpeanut/idefics-8b",
"Zaherrr/KG_transform",
"acecalisto3/IDE-play",
"dawood/idefics2_playground",
"jkorstad/idefics3",
"pettah/pettahaiGPT40",
"Rahulhuggingface/AAnh",
"fatima3597/AI-Podcast-Creator",
"taronsarkisyan/GPT-4o",
"ignitariumcloud/idefics2",
"jlecocq/radiology-test",
"vijaykumar85601/idefics2_playground",
"Stable-Human/idefics2_playground",
"cmaire/IDEFICS3_ROCO_ZeroGPU",
"arptakash/GPT-4o",
"LuxOAI/LUXX",
"cmaire/IDEFICS3_ROCO",
"Tamqeen/Chatbot-Llama",
"awacke1/idefics2_playground-demo",
"NekonekoID/GPT-4o",
"ggilabert/idefics2_playground",
"minhdang/OpenGPT-4o",
"vakilrathod67/Owngpt",
"LuxOAI/OpenGPT-4o",
"figh8back/fynd-idefics2-bb",
"ThinkAI-Morocco/KYA_idefics2_yalla",
"amanavinash/GPT-4o",
"sapan3012/OpenGPT-4o",
"jcheng5/multimodal",
"Zafer01/OpenGPT4",
"MasterDee/OpenGPH-4o",
"xi0v/Omni4All",
"Mandeep20/GPT-4o",
"mebinjo/OpenGPT-4o",
"sumitmeharwade/visionmodel",
"bala0o8o0/hexoticlabs-OpenGPT-4o",
"tnzly/TAI.o",
"AnViFedotov/OpenGPT-4o",
"iiced/OpenGPT-4o",
"Losthack777/mohamedsalem",
"Losthack777/OpenGPT-4o",
"Jayanath1987/JBL-OpenGPT-4o",
"Satyam-Singh/OpenAi_GPT_4-o",
"jihadzakki/idefics2_deploy",
"Jayanath1987/OpenGPT-4o",
"Anon0777/chat-app-model-hf",
"anjanprasad112/OpenGPT",
"oscarwang2/OPENCHAT",
"ka1kuk/fastapi-demo",
"Kalbe-x-Bangkit/medVQA-tester",
"Kalbe-x-Bangkit/Virtual_Question_Answering_Kalbe",
"dineth554/novafulldemov2",
"Kalbe-x-Bangkit/IDEFICS2-8B-MedicalVQA",
"Mareks1993/testing123",
"KalbeDigitalLab/IDEFICS2-8B-MedicalVQA",
"Tech-Meld/Hajax_MultiModal",
"rodrigomasini/dbkjff",
"jayyd/idefics2_playground",
"YanMASTER/OpenGPT-4o",
"Abhinay45/OpenGPT-4o",
"HuggingFaceH4/idefics2-8b-vdpoed-playground",
"Almaatla/OpenGPT-4o",
"sonar2377/OpenGPT-4o",
"saicharan1234/idefics2_playground",
"dorosara/OpenGPT-4o",
"Vinfinity/OpenGPT-4o",
"raghu8096/OpenGPT-4o",
"peterpeter8585/GPT4"
] | 1 | poster |
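The released Idefics2 checkpoints listed in the row above load through the standard transformers vision-to-sequence interface. A minimal usage sketch follows, assuming a recent transformers release (roughly 4.40 or later) and using a blank PIL image as a stand-in input.

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b")

image = Image.new("RGB", (224, 224), "white")   # stand-in for a real image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What is in this image?"},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```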
null | https://openreview.net/forum?id=dtPIUXdJHY | @inproceedings{
zhang2024generalization,
title={Generalization Analysis for Label-Specific Representation Learning},
author={Yifan Zhang and Min-Ling Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dtPIUXdJHY}
} | Label-specific representation learning (LSRL), i.e., constructing the representation with specific discriminative properties for each class label, is an effective strategy to improve the performance of multi-label learning. However, the generalization analysis of LSRL is still in its infancy. The existing theoretical bounds for multi-label learning, which preserve the coupling among different components, are invalid for LSRL. In an attempt to overcome this challenge and make up for the gap in the generalization theory of LSRL, we develop a novel vector-contraction inequality and derive the generalization bound for the general function class of LSRL with a weaker dependency on the number of labels than the state of the art. In addition, we derive generalization bounds for typical LSRL methods, and these theoretical results reveal the impact of different label-specific representations on generalization analysis. The mild bounds without strong assumptions explain the good generalization ability of LSRL. | Generalization Analysis for Label-Specific Representation Learning | [
"Yifan Zhang",
"Min-Ling Zhang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=dsMSWUBN8f | @inproceedings{
redman2024not,
title={Not so griddy: Internal representations of {RNN}s path integrating more than one agent},
author={William T Redman and Francisco Acosta and Santiago Acosta-Mendoza and Nina Miolane},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dsMSWUBN8f}
} | Success in collaborative and competitive environments, where agents must work with or against each other, requires individuals to encode the position and trajectory of themselves and others. Decades of neurophysiological experiments have shed light on how brain regions [e.g., medial entorhinal cortex (MEC), hippocampus] encode the self's position and trajectory. However, it has only recently been discovered that MEC and hippocampus are modulated by the positions and trajectories of others. To understand how encoding spatial information of multiple agents shapes neural representations, we train a recurrent neural network (RNN) model that captures properties of MEC to path integrate trajectories of two agents simultaneously navigating the same environment. We find significant differences between these RNNs and those trained to path integrate only a single agent. At the individual unit level, RNNs trained to path integrate more than one agent develop weaker grid responses, stronger border responses, and tuning for the relative position of the two agents. At the population level, they develop more distributed and robust representations, with changes in network dynamics and manifold topology. Our results provide testable predictions and open new directions with which to study the neural computations supporting spatial navigation. | Not so griddy: Internal representations of RNNs path integrating more than one agent | [
"William T Redman",
"Francisco Acosta",
"Santiago Acosta-Mendoza",
"Nina Miolane"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
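The two-agent path-integration task described above reduces to a simple supervised setup: feed an RNN both agents' velocities and train it to report both integrated positions. The sketch below makes that concrete; the vanilla RNN cell, the sizes, and the mean-squared loss are assumptions of the demo, not the paper's MEC-inspired architecture.

```python
import torch
import torch.nn as nn

# Illustrative two-agent path integration: the network sees both agents'
# velocities and must report both positions at every step.
rnn = nn.RNN(input_size=4, hidden_size=128, batch_first=True)  # 2 agents x (vx, vy)
readout = nn.Linear(128, 4)                                    # 2 agents x (x, y)

vel = torch.randn(8, 50, 4) * 0.1          # random velocity trajectories
pos = vel.cumsum(dim=1)                    # ground-truth integrated positions
h, _ = rnn(vel)
loss = ((readout(h) - pos) ** 2).mean()    # train to path integrate
loss.backward()
print(loss.item())
```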
null | https://openreview.net/forum?id=ds6xMV3yVV | @inproceedings{
betti2024natureinspired,
title={Nature-Inspired Local Propagation},
author={Alessandro Betti and Marco Gori},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ds6xMV3yVV}
} | The spectacular results achieved in machine learning, including the recent advances in generative AI, rely on large data collections. By contrast, intelligent processes in nature arise without the need for such collections, simply by on-line processing of environmental information. In particular, natural learning processes rely on mechanisms where data representation and learning are intertwined in such a way as to respect spatiotemporal locality. This paper shows that such a feature arises from a pre-algorithmic view of learning that is inspired by related studies in Theoretical Physics. We show that the algorithmic interpretation of the derived “laws of learning”, which take the structure of Hamiltonian equations, reduces to Backpropagation when the speed of propagation goes to infinity. This opens the door to machine learning studies based on fully on-line information processing, in which Backpropagation is replaced by the proposed spatiotemporally local algorithm. | Nature-Inspired Local Propagation | [
"Alessandro Betti",
"Marco Gori"
] | NeurIPS.cc/2024/Conference | 2402.05959 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
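For reference, the Hamiltonian structure mentioned in the abstract above is the standard pair of equations below; identifying q with activations and p with propagated errors is our gloss, and the paper's specific "laws of learning" add boundary conditions and a finite propagation speed not shown here.

```latex
% Standard Hamiltonian form, with Backpropagation recovered in the limit
% where the speed of propagation goes to infinity:
\dot{q} = \frac{\partial H}{\partial p}, \qquad
\dot{p} = -\frac{\partial H}{\partial q}
```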
null | https://openreview.net/forum?id=drpJ7KOr3F | @inproceedings{
yu2024llms,
title={{LLM}s Can Evolve Continually on Modality for $\mathbb{X}$-Modal Reasoning},
author={Jiazuo Yu and Haomiao Xiong and Lu Zhang and Haiwen Diao and Yunzhi Zhuge and Lanqing HONG and Dong Wang and Huchuan Lu and You He and Long Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=drpJ7KOr3F}
} | Multimodal Large Language Models (MLLMs) have gained significant attention due to their impressive capabilities in multimodal understanding. However, existing methods rely heavily on extensive modal-specific pretraining and joint-modal tuning, leading to significant computational burdens when expanding to new modalities. In this paper, we propose \textbf{PathWeave}, a flexible and scalable framework with modal-\textbf{path} s\textbf{w}itching and \textbf{e}xp\textbf{a}nsion abilities that enables MLLMs to continually \textbf{ev}olve on modalities for $\mathbb{X}$-modal reasoning. We leverage the concept of Continual Learning and develop an incremental training strategy atop pre-trained MLLMs, enabling their expansion to new modalities using uni-modal data, without executing joint-modal pretraining. In detail, a novel Adapter-in-Adapter (AnA) framework is introduced, in which uni-modal and cross-modal adapters are seamlessly integrated to facilitate efficient modality alignment and collaboration. Additionally, an MoE-based gating module is applied between two types of adapters to further enhance the multimodal interaction. To investigate the proposed method, we establish a challenging benchmark called \textbf{C}ontinual \textbf{L}earning of \textbf{M}odality (MCL), which consists of high-quality QA data from five distinct modalities: image, video, \textcolor{black}{audio, depth} and point cloud. Extensive experiments demonstrate the effectiveness of the proposed AnA framework on learning plasticity and memory stability during continual learning. Furthermore, PathWeave performs comparably to state-of-the-art MLLMs while concurrently reducing parameter training burdens by 98.73\%. Our code is available at \url{https://github.com/JiazuoYu/PathWeave}. | LLMs Can Evolve Continually on Modality for 𝕏-Modal Reasoning | [
"Jiazuo Yu",
"Haomiao Xiong",
"Lu Zhang",
"Haiwen Diao",
"Yunzhi Zhuge",
"Lanqing HONG",
"Dong Wang",
"Huchuan Lu",
"You He",
"Long Chen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
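A rough sketch of gating between uni-modal and cross-modal adapters, as the AnA description above suggests: two low-rank adapters whose outputs are mixed per token by a small learned gate. The dimensions, GELU bottlenecks, and two-expert soft gate are assumptions, not PathWeave's exact module.

```python
import torch
import torch.nn as nn

class AdapterInAdapter(nn.Module):
    """Illustrative sketch: mix a uni-modal and a cross-modal low-rank adapter
    with a per-token soft gate, added residually to the hidden states."""

    def __init__(self, dim=768, rank=16):
        super().__init__()
        self.uni = nn.Sequential(nn.Linear(dim, rank), nn.GELU(), nn.Linear(rank, dim))
        self.cross = nn.Sequential(nn.Linear(dim, rank), nn.GELU(), nn.Linear(rank, dim))
        self.gate = nn.Linear(dim, 2)             # soft gate over the two experts

    def forward(self, h):                         # h: (batch, tokens, dim)
        g = self.gate(h).softmax(-1)              # per-token expert weights
        return h + g[..., :1] * self.uni(h) + g[..., 1:] * self.cross(h)

print(AdapterInAdapter()(torch.randn(2, 10, 768)).shape)
```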
null | https://openreview.net/forum?id=dqdffX3BS5 | @inproceedings{
li2024an,
title={An Efficient Memory Module for Graph Few-Shot Class-Incremental Learning},
author={Dong Li and Aijia Zhang and Junqi Gao and Biqing Qi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dqdffX3BS5}
} | Graph incremental learning has gained widespread attention for its ability to mitigate catastrophic forgetting for graph neural networks (GNN). Conventional methods typically require numerous labels for node classification. However, obtaining abundant labels is often challenging in practice, which makes graph few-shot incremental learning necessary. Current approaches rely on a large number of samples from meta-learning to construct memories, and on heavy fine-tuning of the GNN parameters, which lead to significant memory consumption and loss of past knowledge, respectively. To tackle these issues, we introduce Mecoin to efficiently construct and preserve memory. For efficient storage and update of class prototypes, Mecoin uses a Structured Memory Unit (SMU) to cache prototypes of the seen classes and updates new class prototypes through interaction between nodes and the cached prototypes via the Memory Construction module (MeCo). Besides, to avoid extensive parameter fine-tuning and forgetting, we introduce a Memory Representation Adaptive Module called MRaM to separate the learning of prototypes from that of class representations, and use a Graph Knowledge Interchange Module (GKIM) to inject past knowledge into the GNN. We analyze the effectiveness of our paradigm from the perspective of generalization error, and discuss the impact of different distillation methods on model performance through experiments and VC-dimension analysis. By comparison with other related methods, we validate that Mecoin achieves higher accuracy and a lower forgetting rate. | An Efficient Memory Module for Graph Few-Shot Class-Incremental Learning | [
"Dong Li",
"Aijia Zhang",
"Junqi Gao",
"Biqing Qi"
] | NeurIPS.cc/2024/Conference | 2411.06659 | [
"https://github.com/arvin0313/mecoin-gfscil"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
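A minimal sketch of the prototype-caching idea behind the SMU described above: one stored prototype per seen class, refreshed by interpolation with incoming node embeddings and queried by nearest prototype. The EMA update stands in for Mecoin's MeCo interaction module and is an assumption of the sketch.

```python
import torch

class PrototypeMemory:
    """Illustrative sketch: cache a mean embedding per seen class and refresh
    it by interpolation with new node embeddings; classify by nearest
    prototype. The EMA rule is an assumption, not Mecoin's MeCo module."""

    def __init__(self, dim, momentum=0.9):
        self.protos, self.m, self.dim = {}, momentum, dim

    def update(self, label, embedding):
        if label not in self.protos:
            self.protos[label] = embedding.clone()
        else:
            self.protos[label] = self.m * self.protos[label] + (1 - self.m) * embedding

    def classify(self, embedding):
        labels = list(self.protos)
        stack = torch.stack([self.protos[c] for c in labels])
        return labels[torch.cdist(embedding[None], stack).argmin().item()]

mem = PrototypeMemory(dim=64)
for c in range(3):
    for _ in range(5):
        mem.update(c, torch.randn(64) + 3 * c)   # well-separated toy classes
print(mem.classify(torch.randn(64) + 6))         # should report class 2
```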
null | https://openreview.net/forum?id=dqT9MC5NQl | @inproceedings{
ashman2024approximately,
title={Approximately Equivariant Neural Processes},
author={Matthew Ashman and Cristiana Diaconu and Adrian Weller and Wessel P Bruinsma and Richard E. Turner},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dqT9MC5NQl}
} | Equivariant deep learning architectures exploit symmetries in learning problems to improve the sample efficiency of neural-network-based models and their ability to generalise. However, when modelling real-world data, learning problems are often not *exactly* equivariant, but only approximately. For example, when estimating the global temperature field from weather station observations, local topographical features like mountains break translation equivariance. In these scenarios, it is desirable to construct architectures that can flexibly depart from exact equivariance in a data-driven way. Current approaches to achieving this cannot usually be applied out-of-the-box to any architecture and symmetry group. In this paper, we develop a general approach to achieving this using existing equivariant architectures. Our approach is agnostic to both the choice of symmetry group and model architecture, making it widely applicable. We consider the use of approximately equivariant architectures in neural processes (NPs), a popular family of meta-learning models. We demonstrate the effectiveness of our approach on a number of synthetic and real-world regression experiments, showing that approximately equivariant NP models can outperform both their non-equivariant and strictly equivariant counterparts. | Approximately Equivariant Neural Processes | [
"Matthew Ashman",
"Cristiana Diaconu",
"Adrian Weller",
"Wessel P Bruinsma",
"Richard E. Turner"
] | NeurIPS.cc/2024/Conference | 2406.13488 | [
"https://github.com/cambridge-mlg/tetnp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=dpvqBkEp1f | @inproceedings{
ouderaa2024noethers,
title={Noether's Razor: Learning Conserved Quantities},
author={Tycho F. A. van der Ouderaa and Mark van der Wilk and Pim De Haan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dpvqBkEp1f}
} | Symmetries have proven useful in machine learning models, improving generalisation and overall performance. At the same time, recent advancements in learning dynamical systems rely on modelling the underlying Hamiltonian to guarantee the conservation of energy.
These approaches can be connected via a seminal result in mathematical physics: Noether's theorem, which states that symmetries in a dynamical system correspond to conserved quantities.
This work uses Noether's theorem to parameterise symmetries as learnable conserved quantities. We then allow conserved quantities and associated symmetries to be learned directly from training data through approximate Bayesian model selection, jointly with the regular training procedure. As the training objective, we derive a variational lower bound to the marginal likelihood. The objective automatically embodies an Occam's Razor effect that avoids collapse of conservation laws to the trivial constant, without the need to manually add and tune additional regularisers. We demonstrate a proof-of-principle on n-harmonic oscillators and n-body systems. We find that our method correctly identifies the conserved quantities and the U(n) and SE(n) symmetry groups, improving overall performance and predictive accuracy on test data. | Noether's Razor: Learning Conserved Quantities | [
"Tycho F. A. van der Ouderaa",
"Mark van der Wilk",
"Pim De Haan"
] | NeurIPS.cc/2024/Conference | 2410.08087 | [
"https://github.com/tychovdo/noethers-razor"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
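The link the abstract above relies on can be stated compactly: a quantity G(q, p) generates a symmetry of the Hamiltonian H, and is conserved along trajectories, exactly when its Poisson bracket with H vanishes. This is the textbook statement, not the paper's variational objective:

```latex
\frac{\mathrm{d}G}{\mathrm{d}t} = \{G, H\}
  = \sum_i \left( \frac{\partial G}{\partial q_i}\frac{\partial H}{\partial p_i}
  - \frac{\partial G}{\partial p_i}\frac{\partial H}{\partial q_i} \right) = 0
```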
null | https://openreview.net/forum?id=doaJTihgIZ | @inproceedings{
schmid2024dynamics,
title={Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron},
author={Christian Schmid and James M Murray},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=doaJTihgIZ}
} | The ability of a brain or a neural network to efficiently learn depends crucially on both the task structure and the learning rule.
Previous works have analyzed the dynamical equations describing learning in the relatively simplified context of the perceptron under assumptions of a student-teacher framework or a linearized output.
While these assumptions have facilitated theoretical understanding, they have precluded a detailed understanding of the roles of the nonlinearity and input-data distribution in determining the learning dynamics, limiting the applicability of the theories to real biological or artificial neural networks.
Here, we use a stochastic-process approach to derive flow equations describing learning, applying this framework to the case of a nonlinear perceptron performing binary classification.
We characterize the effects of the learning rule (supervised or reinforcement learning, SL/RL) and input-data distribution on the perceptron's learning curve and the forgetting curve as subsequent tasks are learned.
In particular, we find that the input-data noise differently affects the learning speed under SL vs. RL, as well as determines how quickly learning of a task is overwritten by subsequent learning. Additionally, we verify our approach with real data using the MNIST dataset.
This approach points a way toward analyzing learning dynamics for more-complex circuit architectures. | Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron | [
"Christian Schmid",
"James M Murray"
] | NeurIPS.cc/2024/Conference | 2409.03749 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
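The SL-versus-RL comparison in the abstract above can be made concrete with the two classical update rules for a sigmoidal perceptron, sketched below; the sigmoid choice, the noisy teacher, and the REINFORCE-style reward rule are assumptions of the demo, not the paper's derived flow equations.

```python
import numpy as np

# Illustrative SL vs RL updates for a nonlinear perceptron doing binary
# classification on a noisy teacher task.
rng = np.random.default_rng(0)
w_sl = np.zeros(10)
w_rl = np.zeros(10)
eta = 0.1
phi = lambda a: 1.0 / (1.0 + np.exp(-a))            # sigmoid output

for _ in range(2000):
    x = rng.normal(size=10)
    y = float(x[0] + 0.3 * rng.normal() > 0)        # noisy teacher label
    # Supervised: cross-entropy gradient moves w along (y - yhat) x
    w_sl += eta * (y - phi(w_sl @ x)) * x
    # Reinforcement: sample an action, update with the binary reward signal
    p = phi(w_rl @ x)
    a = float(rng.random() < p)
    r = 1.0 if a == y else -1.0
    w_rl += eta * r * (a - p) * x                   # REINFORCE-style rule

print(w_sl[0], w_rl[0])                             # both should grow positive
```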
null | https://openreview.net/forum?id=dmhi2ydnXZ | @inproceedings{
xu2024scalable,
title={Scalable {DBSCAN} with Random Projections},
author={HaoChuan Xu and Ninh Pham},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dmhi2ydnXZ}
} | We present sDBSCAN, a scalable density-based clustering algorithm in high dimensions with cosine distance. sDBSCAN leverages recent advancements in random projections given a significantly large number of random vectors to quickly identify core points and their neighborhoods, the primary hurdle of density-based clustering. Theoretically, sDBSCAN preserves DBSCAN’s clustering structure under mild conditions with high probability. To facilitate sDBSCAN, we present sOPTICS, a scalable visual tool to guide the parameter setting of sDBSCAN. We also extend sDBSCAN and sOPTICS to L2, L1, χ2, and Jensen-Shannon distances via random kernel features. Empirically, sDBSCAN is significantly faster and provides higher accuracy than competitive DBSCAN variants on real-world million-point data sets. On these data sets, sDBSCAN and sOPTICS run in a few minutes, while the scikit-learn counterparts and other clustering competitors demand several hours or cannot run on our hardware due to memory constraints. Our code is available at https://github.com/NinhPham/sDbscan. | Scalable DBSCAN with Random Projections | [
"HaoChuan Xu",
"Ninh Pham"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
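A sketch of the random-projection idea from the abstract above, with cosine distance: each point examines only the points that share its best-aligned random directions when counting eps-neighbors for the core-point test. The parameter names and candidate rule are assumptions, not sDBSCAN's exact construction or guarantees.

```python
import numpy as np

def candidate_core_points(X, eps_cos=0.9, min_pts=5, n_vectors=256, top=10):
    """Illustrative sketch: restrict neighbor search to points sharing at
    least one of a point's top random directions, instead of all n points."""
    rng = np.random.default_rng(0)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    R = rng.normal(size=(n_vectors, X.shape[1]))
    R /= np.linalg.norm(R, axis=1, keepdims=True)
    sims = Xn @ R.T                                   # (n, n_vectors)
    closest = np.argsort(-sims, axis=1)[:, :top]      # top random directions
    cores = []
    for i in range(len(X)):
        shared = np.isin(closest, closest[i]).any(axis=1)
        cand = np.nonzero(shared)[0]                  # candidate neighbors
        hits = (Xn[cand] @ Xn[i] >= eps_cos).sum() - 1  # exclude self
        if hits >= min_pts:
            cores.append(i)
    return cores

X = np.vstack([np.random.default_rng(1).normal(loc=m, size=(50, 8)) for m in (0, 5)])
print(len(candidate_core_points(X)))   # roughly the tight cluster at loc=5
```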
null | https://openreview.net/forum?id=dlCTmEyq6y | @inproceedings{
azar2024semisupervised,
title={Semi-Supervised Sparse Gaussian Classification: Provable Benefits of Unlabeled Data},
author={Eyar Azar and Boaz Nadler},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dlCTmEyq6y}
} | The premise of semi-supervised learning (SSL) is that combining labeled and unlabeled data yields significantly more accurate models.
Despite empirical successes, the theoretical understanding of SSL is still far from complete.
In this work, we study SSL for high dimensional sparse Gaussian classification.
To construct an accurate classifier, a key task is feature selection: detecting the few variables that separate the two classes.
For this SSL setting, we analyze information theoretic lower bounds for accurate feature selection as well as computational lower bounds,
assuming the low-degree likelihood hardness conjecture.
Our key contribution is the identification of a regime in the problem parameters (dimension, sparsity, number of labeled and unlabeled samples) where SSL is guaranteed to be advantageous for classification.
Specifically, there is a regime where it is possible to construct in polynomial time an accurate SSL classifier.
However, any computationally efficient supervised or unsupervised learning schemes, that separately use only the labeled or unlabeled data would fail.
Our work highlights the provable benefits of combining labeled and unlabeled data for classification and feature selection in high dimensions.
We present simulations that complement our theoretical analysis. | Semi-Supervised Sparse Gaussian Classification: Provable Benefits of Unlabeled Data | [
"Eyar Azar",
"Boaz Nadler"
] | NeurIPS.cc/2024/Conference | 2409.03335 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
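A toy numerical illustration of why unlabeled data helps in the sparse Gaussian model above: on signal coordinates the marginal variance is inflated to 1 + μ_j², so plentiful unlabeled samples can locate the support, after which a few labeled samples orient the classifier. The thresholds and sample sizes are assumptions for the demo, not the analyzed regime.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 1000, 10
mu = np.zeros(d)
mu[:k] = 1.0                                        # sparse class mean
y_l = rng.integers(0, 2, size=50) * 2 - 1           # few labeled samples
X_l = y_l[:, None] * mu + rng.normal(size=(50, d))
y_u = rng.integers(0, 2, size=4000) * 2 - 1         # many unlabeled samples
X_u = y_u[:, None] * mu + rng.normal(size=(4000, d))

var_score = X_u.var(axis=0)                         # ~1 + mu_j^2 on signal coords
support = np.argsort(-var_score)[:k]                # screen with unlabeled data
w = np.zeros(d)
w[support] = (y_l[:, None] * X_l).mean(0)[support]  # orient with labeled data
print(sorted(support))                              # should recover 0..9
```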
null | https://openreview.net/forum?id=dkpmfIydrF | @inproceedings{
zhang2024defensive,
title={Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models},
author={Yimeng Zhang and Xin Chen and Jinghan Jia and Yihua Zhang and Chongyu Fan and Jiancheng Liu and Mingyi Hong and Ke Ding and Sijia Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dkpmfIydrF}
} | Diffusion models (DMs) have achieved remarkable success in text-to-image generation, but they also pose safety risks, such as the potential generation of harmful content and copyright violations. The techniques of machine unlearning, also known as concept erasing, have been developed to address these risks. However, these techniques remain vulnerable to adversarial prompt attacks, which can prompt DMs post-unlearning to regenerate undesired images containing concepts (such as nudity) meant to be erased. This work aims to enhance the robustness of concept erasing by integrating the principle of adversarial training (AT) into machine unlearning, resulting in the robust unlearning framework referred to as AdvUnlearn. However, achieving this effectively and efficiently is highly nontrivial. First, we find that a straightforward implementation of AT compromises DMs’ image generation quality post-unlearning. To address this, we develop a utility-retaining regularization on an additional retain set, optimizing the trade-off between concept erasure robustness and model utility in AdvUnlearn. Moreover, we identify the text encoder as a more suitable module for robustification compared to UNet, ensuring unlearning effectiveness. And the acquired text encoder can serve as a plug-and-play robust unlearner for various DM types. Empirically, we perform extensive experiments to demonstrate the robustness advantage of AdvUnlearn across various DM unlearning scenarios, including the erasure of nudity, objects, and style concepts. In addition to robustness, AdvUnlearn also achieves a balanced tradeoff with model utility. To our knowledge, this is the first work to systematically explore robust DM unlearning through AT, setting it apart from existing methods that overlook robustness in concept erasing. Codes are available at https://github.com/OPTML-Group/AdvUnlearn.
Warning: This paper contains model outputs that may be offensive in nature. | Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models | [
"Yimeng Zhang",
"Xin Chen",
"Jinghan Jia",
"Yihua Zhang",
"Chongyu Fan",
"Jiancheng Liu",
"Mingyi Hong",
"Ke Ding",
"Sijia Liu"
] | NeurIPS.cc/2024/Conference | 2405.15234 | [
"https://github.com/optml-group/advunlearn"
] | https://huggingface.co/papers/2405.15234 | 2 | 0 | 0 | 9 | [
"OPTML-Group/AdvUnlearn"
] | [] | [
"Intel/UnlearnDiffAtk-Benchmark",
"OPTML-Group/UnlearnDiffAtk-Unlearned-DM-Benchmark",
"xinchen9/SD_Defense",
"Kaixuanliu/SD_Defense",
"Intel/AdvUnlearn",
"Kaixuanliu/SD_Defense_gaudi"
] | [
"OPTML-Group/AdvUnlearn"
] | [] | [
"Intel/UnlearnDiffAtk-Benchmark",
"OPTML-Group/UnlearnDiffAtk-Unlearned-DM-Benchmark",
"xinchen9/SD_Defense",
"Kaixuanliu/SD_Defense",
"Intel/AdvUnlearn",
"Kaixuanliu/SD_Defense_gaudi"
] | 1 | poster |
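A schematic of the adversarial-training-for-unlearning loop described above, on a toy embedding classifier rather than a diffusion model: the inner step perturbs the "prompt" to re-elicit the erased concept, and the outer step unlearns against that worst case while a retain term preserves utility. All module names, sizes, and the single-step sign attack are assumptions for illustration, not AdvUnlearn's pipeline.

```python
import torch

torch.manual_seed(0)
enc = torch.nn.Linear(32, 32)                     # stand-in "text encoder"
head = torch.nn.Linear(32, 2)                     # 0 = safe, 1 = erased concept
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
ce = torch.nn.CrossEntropyLoss()

forget = torch.randn(64, 32)
retain = torch.randn(64, 32)
for _ in range(100):
    # Inner step: perturb the prompt embedding toward re-eliciting the concept
    delta = torch.zeros_like(forget, requires_grad=True)
    adv_loss = ce(head(enc(forget + delta)), torch.ones(64, dtype=torch.long))
    g, = torch.autograd.grad(adv_loss, delta)
    delta = -0.1 * g.sign()                       # one-step adversarial prompt
    # Outer step: unlearn against the worst case, keep utility on retain set
    opt.zero_grad()
    loss = ce(head(enc(forget + delta)), torch.zeros(64, dtype=torch.long)) \
         + ce(head(enc(retain)), torch.zeros(64, dtype=torch.long))
    loss.backward()
    opt.step()
print(loss.item())
```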
null | https://openreview.net/forum?id=dkkgKzMni7 | @inproceedings{
kiani2024hardness,
title={Hardness of Learning Neural Networks under the Manifold Hypothesis},
author={Bobak Kiani and Jason Wang and Melanie Weber},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dkkgKzMni7}
} | The manifold hypothesis presumes that high-dimensional data lies on or near a low-dimensional manifold.
While the utility of encoding geometric structure has been demonstrated empirically, rigorous analysis of its impact on the learnability of neural networks is largely missing. Several recent results have established hardness results for learning feedforward and equivariant neural networks under i.i.d. Gaussian or uniform Boolean data distributions. In this paper, we investigate the hardness of learning under the manifold hypothesis. We ask which minimal assumptions on the curvature and regularity of the manifold, if any, render the learning problem efficiently learnable. We prove that learning is hard under input manifolds of bounded curvature by extending proofs of hardness in the SQ and cryptographic settings for Boolean data inputs to the geometric setting. On the other hand, we show that additional assumptions on the volume of the data manifold alleviate these fundamental limitations and guarantee learnability via a simple interpolation argument. Notable instances of this regime are manifolds which can be reliably reconstructed via manifold learning.
Looking forward, we comment on and empirically explore intermediate regimes of manifolds, which have heterogeneous features commonly found in real-world data. | Hardness of Learning Neural Networks under the Manifold Hypothesis | [
"Bobak Kiani",
"Jason Wang",
"Melanie Weber"
] | NeurIPS.cc/2024/Conference | 2406.01461 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=diYnEYUbIU | @inproceedings{
dinh2024geometric,
title={Geometric Exploitation for Indoor Panoramic Semantic Segmentation},
author={Duc Cao Dinh and Seok Joon Kim and Kyusung Cho},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=diYnEYUbIU}
} | PAnoramic Semantic Segmentation (PASS) is an important task in computer vision, as it enables semantic understanding of a 360° environment. Currently, most existing works have focused on addressing the distortion issues in 2D panoramic images without considering the spatial properties of indoor scenes. This restricts PASS methods from perceiving the contextual attributes needed to resolve the ambiguity of monocular images. In this paper, we propose a novel approach for indoor panoramic semantic segmentation. Unlike previous works, we consider the panoramic image as a composition of segment groups: over-sampled segments, representing planar structures such as floors and ceilings, and under-sampled segments, representing other scene elements. To optimize each group, we first enhance over-sampled segments by jointly optimizing with a dense depth estimation task. Then, we introduce a transformer-based context module that aggregates different geometric representations of the scene; combined with a simple high-resolution branch, it serves as a robust hybrid decoder for estimating under-sampled segments, effectively preserving the resolution of predicted masks while leveraging various indoor geometric properties. Experimental results on both real-world (Stanford2D3DS, Matterport3D) and synthetic (Structured3D) datasets demonstrate the robustness of our framework, setting a new state of the art in almost all evaluations. The code and updated results are available at: https://github.com/caodinhduc/vertical_relative_distance. | Geometric Exploitation for Indoor Panoramic Semantic Segmentation | [
"Duc Cao Dinh",
"Seok Joon Kim",
"Kyusung Cho"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=dheDf5EpBT | @inproceedings{
huang2024unified,
title={Unified Gradient-Based Machine Unlearning with Remain Geometry Enhancement},
author={Zhehao Huang and Xinwen Cheng and JingHao Zheng and Haoran Wang and Zhengbao He and Tao Li and Xiaolin Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=dheDf5EpBT}
} | Machine unlearning (MU) has emerged to enhance the privacy and trustworthiness of deep neural networks. Approximate MU is a practical method for large-scale models. Our investigation into approximate MU starts with identifying the steepest descent direction, minimizing the output Kullback-Leibler divergence to exact MU inside a neighborhood of the parameters. This probed direction decomposes into three components: weighted forgetting gradient ascent, fine-tuning retaining gradient descent, and a weight saliency matrix. This decomposition, derived from the Euclidean metric, encompasses most existing gradient-based MU methods. Nevertheless, adhering to Euclidean space may result in sub-optimal iterative trajectories due to the overlooked geometric structure of the output probability space. We suggest embedding the unlearning update into a manifold rendered by the remaining geometry, incorporating the second-order Hessian of the remaining data. This helps prevent effective unlearning from interfering with the retained performance. However, computing the second-order Hessian for large-scale models is intractable. To efficiently leverage the benefits of Hessian modulation, we propose a fast-slow parameter update strategy to implicitly approximate the up-to-date salient unlearning direction.
Free from specific modal constraints, our approach is adaptable across computer vision unlearning tasks, including classification and generation. Extensive experiments validate our efficacy and efficiency. Notably, our method successfully performs class-forgetting on ImageNet using DiT and forgets a class on CIFAR-10 using DDPM in just 50 steps, compared to thousands of steps required by previous methods. Code is available at [Unified-Unlearning-w-Remain-Geometry](https://github.com/K1nght/Unified-Unlearning-w-Remain-Geometry). | Unified Gradient-Based Machine Unlearning with Remain Geometry Enhancement | [
"Zhehao Huang",
"Xinwen Cheng",
"JingHao Zheng",
"Haoran Wang",
"Zhengbao He",
"Tao Li",
"Xiaolin Huang"
] | NeurIPS.cc/2024/Conference | 2409.19732 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
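A minimal sketch of a fast-slow parameter update of the kind the abstract above mentions, assuming plain gradient ascent on a forget set as the unlearning objective and a fixed interpolation rate in place of the paper's Hessian-informed, saliency-modulated rule.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(16, 4)
fast = [p.detach().clone() for p in model.parameters()]   # fast copy
beta = 0.1                                                # slow-follows-fast rate

forget_x = torch.randn(32, 16)
forget_y = torch.zeros(32, dtype=torch.long)
for _ in range(50):
    loss = -F.cross_entropy(model(forget_x), forget_y)    # ascend on forget set
    grads = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():
        for p, f, g in zip(model.parameters(), fast, grads):
            f -= 1e-2 * g                                 # fast weights: raw step
            p.copy_((1 - beta) * p + beta * f)            # slow weights trail behind
print(F.cross_entropy(model(forget_x), forget_y).item())  # should have grown
```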