| Column | dtype | Range / values |
| --- | --- | --- |
| bibtex_url | null | – |
| proceedings | stringlengths | 42 to 42 |
| bibtext | stringlengths | 197 to 848 |
| abstract | stringlengths | 303 to 3.45k |
| title | stringlengths | 10 to 159 |
| authors | sequencelengths | 1 to 34 |
| id | stringclasses | 44 values |
| arxiv_id | stringlengths | 0 to 10 |
| GitHub | sequencelengths | 1 to 1 |
| paper_page | stringclasses | 899 values |
| n_linked_authors | int64 | -1 to 13 |
| upvotes | int64 | -1 to 109 |
| num_comments | int64 | -1 to 13 |
| n_authors | int64 | -1 to 92 |
| Models | sequencelengths | 0 to 100 |
| Datasets | sequencelengths | 0 to 19 |
| Spaces | sequencelengths | 0 to 100 |
| old_Models | sequencelengths | 0 to 100 |
| old_Datasets | sequencelengths | 0 to 19 |
| old_Spaces | sequencelengths | 0 to 100 |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
| type | stringclasses | 2 values |
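The rows below follow this schema. As a minimal sketch, assuming the table is published as a Hugging Face dataset (the repository id used here is a hypothetical placeholder), the records can be loaded and filtered with the `datasets` library:

```python
# Minimal sketch: load the rows and keep papers that link a GitHub repository
# and whose Hugging Face paper page existed before the conference.
# NOTE: "org/neurips-2024-papers" is a hypothetical placeholder repository id.
from datasets import load_dataset

ds = load_dataset("org/neurips-2024-papers", split="train")

# Each record exposes the columns listed above, e.g. `title`, `abstract`,
# `arxiv_id`, `GitHub`, `upvotes`, and `type` (poster/oral).
print(ds[0]["title"])

with_code = ds.filter(
    lambda row: row["paper_page_exists_pre_conf"] == 1
    and any(url.strip() for url in row["GitHub"])
)
print(len(with_code), "papers with a GitHub link and a pre-conference paper page")
```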
null
https://openreview.net/forum?id=WY3xgXIZUR
@inproceedings{ wang2024leveraging, title={Leveraging Visual Tokens for Extended Text Contexts in Multi-Modal Learning}, author={Alex Jinpeng Wang and Linjie Li and Yiqi Lin and Min Li and Lijuan Wang and Mike Zheng Shou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WY3xgXIZUR} }
Training models with longer in-context lengths is a significant challenge for multimodal machine learning due to substantial GPU memory and computational costs. This exploratory study does not present state-of-the-art models; rather, it introduces an innovative method designed to increase in-context text length in multi-modality large language models (MLLMs) efficiently. We present \ModelFullName (\ModelName), which processes long in-context text using visual tokens. This technique significantly reduces GPU memory usage and floating-point operations (FLOPs). For instance, our method expands the pre-training in-context length from 256 to 2048 tokens with fewer FLOPs for a 56-billion-parameter MoE model. Experimental results demonstrate that \ModelName enhances OCR capabilities and delivers superior performance on common downstream benchmarks for in-context few-shot evaluation. Additionally, \ModelName proves effective for long context inference, achieving results comparable to full text input while maintaining computational efficiency.
Leveraging Visual Tokens for Extended Text Contexts in Multi-Modal Learning
[ "Alex Jinpeng Wang", "Linjie Li", "Yiqi Lin", "Min Li", "Lijuan Wang", "Mike Zheng Shou" ]
NeurIPS.cc/2024/Conference
2406.02547
[ "https://github.com/showlab/VisInContext" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WXqukapoa7
@inproceedings{ li2024disentangling, title={Disentangling Linear Quadratic Control with Untrusted {ML} Predictions}, author={Tongxin Li and Hao Liu and Yisong Yue}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WXqukapoa7} }
Uncertain perturbations in dynamical systems often arise from diverse sources, represented by latent components. The predictions for these components, typically generated by "black-box" machine learning tools, are prone to inaccuracies. To tackle this challenge, we introduce DISC, a novel policy that learns a confidence parameter online to harness the potential of accurate predictions while also mitigating the impact of erroneous forecasts. When predictions are precise, DISC leverages this information to achieve near-optimal performance. Conversely, in the case of significant prediction errors, it still has a worst-case competitive ratio guarantee. We provide competitive ratio bounds for DISC under both linear mixing of latent variables and a broader class of mixing functions. Our results highlight a first-of-its-kind "best-of-both-worlds" integration of machine-learned predictions, thus leading to a near-optimal consistency and robustness tradeoff, which provably improves what can be obtained without learning the confidence parameter. We validate the applicability of DISC across a spectrum of practical scenarios.
Disentangling Linear Quadratic Control with Untrusted ML Predictions
[ "Tongxin Li", "Hao Liu", "Yisong Yue" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WTLvXdzhmP
@inproceedings{ zhou2024statistical, title={Statistical Estimation in the Spiked Tensor Model via the Quantum Approximate Optimization Algorithm}, author={Leo Zhou and Joao Basso and Song Mei}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WTLvXdzhmP} }
The quantum approximate optimization algorithm (QAOA) is a general-purpose algorithm for combinatorial optimization that has been a promising avenue for near-term quantum advantage. In this paper, we analyze the performance of the QAOA on the spiked tensor model, a statistical estimation problem that exhibits a large computational-statistical gap classically. We prove that the weak recovery threshold of $1$-step QAOA matches that of $1$-step tensor power iteration. Additional heuristic calculations suggest that the weak recovery threshold of $p$-step QAOA matches that of $p$-step tensor power iteration when $p$ is a fixed constant. This further implies that multi-step QAOA with tensor unfolding could achieve, but not surpass, the asymptotic classical computation threshold $\Theta(n^{(q-2)/4})$ for spiked $q$-tensors. Meanwhile, we characterize the asymptotic overlap distribution for $p$-step QAOA, discovering an intriguing sine-Gaussian law verified through simulations. For some $p$ and $q$, the QAOA has an effective recovery threshold that is a constant factor better than tensor power iteration. Of independent interest, our proof techniques employ the Fourier transform to handle difficult combinatorial sums, a novel approach differing from prior QAOA analyses on spin-glass models without planted structure.
Statistical Estimation in the Spiked Tensor Model via the Quantum Approximate Optimization Algorithm
[ "Leo Zhou", "Joao Basso", "Song Mei" ]
NeurIPS.cc/2024/Conference
2402.19456
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=WSu1PPi2UP
@inproceedings{ tennenholtz2024embeddingaligned, title={Embedding-Aligned Language Models}, author={Guy Tennenholtz and Yinlam Chow and ChihWei Hsu and Lior Shani and Yi Liang and Craig Boutilier}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WSu1PPi2UP} }
We propose a novel approach for training large language models (LLMs) to adhere to objectives defined within a latent embedding space. Our method leverages reinforcement learning (RL), treating a pre-trained LLM as an environment. Our embedding-aligned guided language (EAGLE) agent is trained to iteratively steer the LLM's generation towards optimal regions of the latent embedding space, w.r.t. some predefined criterion. We demonstrate the effectiveness of the EAGLE agent using the MovieLens 25M and Amazon Review datasets to surface content gaps that satisfy latent user demand. We also demonstrate the benefit of using an optimal design of a state-dependent action set to improve EAGLE's efficiency. Our work paves the way for controlled and grounded text generation using LLMs, ensuring consistency with domain-specific knowledge and data representations.
Embedding-Aligned Language Models
[ "Guy Tennenholtz", "Yinlam Chow", "ChihWei Hsu", "Lior Shani", "Yi Liang", "Craig Boutilier" ]
NeurIPS.cc/2024/Conference
2406.00024
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WSsht66fbC
@inproceedings{ chirra2024safety, title={Safety through feedback in Constrained {RL}}, author={Shashank Reddy Chirra and Pradeep Varakantham and Praveen Paruchuri}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WSsht66fbC} }
In safety-critical RL settings, the inclusion of an additional cost function is often favoured over the arduous task of modifying the reward function to ensure the agent's safe behaviour. However, designing or evaluating such a cost function can be prohibitively expensive. For instance, in the domain of self-driving, designing a cost function that encompasses all unsafe behaviours (e.g., aggressive lane changes, risky overtakes) is inherently complex; it must also consider all the actors present in the scene, making it expensive to evaluate. In such scenarios, the cost function can be learned from feedback collected offline in between training rounds. This feedback can be system generated or elicited from a human observing the training process. Previous approaches have not been able to scale to complex environments and are constrained to receiving feedback at the state level, which can be expensive to collect. To this end, we introduce an approach that scales to more complex domains and extends beyond state-level feedback, thus reducing the burden on the evaluator. Inferring the cost function in such settings poses challenges, particularly in assigning credit to individual states based on trajectory-level feedback. To address this, we propose a surrogate objective that transforms the problem into a state-level supervised classification task with noisy labels, which can be solved efficiently. Additionally, it is often infeasible to collect feedback for every trajectory generated by the agent; hence, two fundamental questions arise: (1) Which trajectories should be presented to the human? and (2) How many trajectories are necessary for effective learning? To address these questions, we introduce a \textit{novelty-based sampling} mechanism that selectively involves the evaluator only when the agent encounters a \textit{novel} trajectory, and discontinues querying once the trajectories are no longer \textit{novel}. We showcase the efficiency of our method through experimentation on several benchmark Safety Gymnasium environments and realistic self-driving scenarios. Our method demonstrates near-optimal performance, comparable to when the cost function is known, by relying solely on trajectory-level feedback across multiple domains. This highlights both the effectiveness and scalability of our approach. The code to replicate these results can be found at \href{https://github.com/shshnkreddy/RLSF}{https://github.com/shshnkreddy/RLSF}.
Safety through feedback in Constrained RL
[ "Shashank Reddy Chirra", "Pradeep Varakantham", "Praveen Paruchuri" ]
NeurIPS.cc/2024/Conference
2406.19626
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WRd9LCbvxN
@inproceedings{ fang2024general, title={General Articulated Objects Manipulation in Real Images via Part-Aware Diffusion Process}, author={Zhou FANG and Yong-Lu Li and Lixin Yang and Cewu Lu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WRd9LCbvxN} }
Articulated object manipulation in real images is a fundamental step in computer and robotic vision tasks. Recently, several image editing methods based on diffusion models have been proposed to manipulate articulated objects according to text prompts. However, these methods often generate weird artifacts or even fail in real images. To this end, we introduce the Part-Aware Diffusion Model to approach the manipulation of articulated objects in real images. First, we develop Abstract 3D Models to represent and manipulate articulated objects efficiently and arbitrarily. Then we propose dynamic feature maps to transfer the appearance of objects from input images to edited ones, meanwhile generating novel views or novel-appearing parts reasonably. Extensive experiments are provided to illustrate the advanced manipulation capabilities of our method in comparison with state-of-the-art editing works. Additionally, we verify our method on 3D articulated object understanding for embodied robot scenarios, and the promising results show that our method provides strong support for this task. The project page is https://mvig-rhos.com/pa_diffusion.
General Articulated Objects Manipulation in Real Images via Part-Aware Diffusion Process
[ "Zhou FANG", "Yong-Lu Li", "Lixin Yang", "Cewu Lu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WRCFuoiz1h
@inproceedings{ kuroki2024queryefficient, title={Query-Efficient Correlation Clustering with Noisy Oracle}, author={Yuko Kuroki and Atsushi Miyauchi and Francesco Bonchi and Wei Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WRCFuoiz1h} }
We study a general clustering setting in which we have $n$ elements to be clustered, and we aim to perform as few queries as possible to an oracle that returns a noisy sample of the weighted similarity between two elements. Our setting encompasses many application domains in which the similarity function is costly to compute and inherently noisy. We introduce two novel formulations of online learning problems rooted in the paradigm of Pure Exploration in Combinatorial Multi-Armed Bandits (PE-CMAB): fixed confidence and fixed budget settings. For both settings, we design algorithms that combine a sampling strategy with a classic approximation algorithm for correlation clustering and study their theoretical guarantees. Our results are the first examples of polynomial-time algorithms that work for the case of PE-CMAB in which the underlying offline optimization problem is NP-hard.
Query-Efficient Correlation Clustering with Noisy Oracle
[ "Yuko Kuroki", "Atsushi Miyauchi", "Francesco Bonchi", "Wei Chen" ]
NeurIPS.cc/2024/Conference
2402.01400
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WPxa6OcIdg
@inproceedings{ chan2024estimating, title={Estimating Epistemic and Aleatoric Uncertainty with a Single Model}, author={Matthew Albert Chan and Maria J. Molina and Christopher Metzler}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WPxa6OcIdg} }
Estimating and disentangling epistemic uncertainty, uncertainty that is reducible with more training data, and aleatoric uncertainty, uncertainty that is inherent to the task at hand, is critically important when applying machine learning to high-stakes applications such as medical imaging and weather forecasting. Conditional diffusion models' breakthrough ability to accurately and efficiently sample from the posterior distribution of a dataset now makes uncertainty estimation conceptually straightforward: One need only train and sample from a large ensemble of diffusion models. Unfortunately, training such an ensemble becomes computationally intractable as the complexity of the model architecture grows. In this work we introduce a new approach to ensembling, hyper-diffusion models (HyperDM), which allows one to accurately estimate both epistemic and aleatoric uncertainty with a single model. Unlike existing single-model uncertainty methods like Monte-Carlo dropout and Bayesian neural networks, HyperDM offers prediction accuracy on par with, and in some cases superior to, multi-model ensembles. Furthermore, our proposed approach scales to modern network architectures such as Attention U-Net and yields more accurate uncertainty estimates compared to existing methods. We validate our method on two distinct real-world tasks: x-ray computed tomography reconstruction and weather temperature forecasting.
Estimating Epistemic and Aleatoric Uncertainty with a Single Model
[ "Matthew Albert Chan", "Maria J. Molina", "Christopher Metzler" ]
NeurIPS.cc/2024/Conference
2402.03478
[ "https://github.com/matthewachan/hyperdm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WPPC7FHtaM
@inproceedings{ du2024ipo, title={{IPO}: Interpretable Prompt Optimization for Vision-Language Models}, author={Yingjun Du and Wenfang Sun and Cees G. M. Snoek}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WPPC7FHtaM} }
Pre-trained vision-language models like CLIP have remarkably adapted to various downstream tasks. Nonetheless, their performance heavily depends on the specificity of the input text prompts, which requires skillful prompt template engineering. Instead, current approaches to prompt optimization learn the prompts through gradient descent, where the prompts are treated as adjustable parameters. However, these methods tend to lead to overfitting of the base classes seen during training and produce prompts that are no longer understandable by humans. This paper introduces a simple but interpretable prompt optimizer (IPO) that utilizes large language models (LLMs) to generate textual prompts dynamically. We introduce a Prompt Optimization Prompt that not only guides LLMs in creating effective prompts but also stores past prompts with their performance metrics, providing rich in-context information. Additionally, we incorporate a large multimodal model (LMM) to condition on visual content by generating image descriptions, which enhance the interaction between textual and visual modalities. This allows for the creation of dataset-specific prompts that improve generalization performance, while maintaining human comprehension. Extensive testing across 11 datasets reveals that IPO not only improves the accuracy of existing gradient-descent-based prompt learning methods but also considerably enhances the interpretability of the generated prompts. By leveraging the strengths of LLMs, our approach ensures that the prompts remain human-understandable, thereby facilitating better transparency and oversight for vision-language models.
IPO: Interpretable Prompt Optimization for Vision-Language Models
[ "Yingjun Du", "Wenfang Sun", "Cees G. M. Snoek" ]
NeurIPS.cc/2024/Conference
2410.15397
[ "https://github.com/lmsdss/IPO" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WOBhJs9gqU
@inproceedings{ zhang2024dualframe, title={Dual-frame Fluid Motion Estimation with Test-time Optimization and Zero-divergence Loss}, author={Yifei Zhang and Huan-ang Gao and Zhou Jiang and Hao Zhao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WOBhJs9gqU} }
3D particle tracking velocimetry (PTV) is a key technique for analyzing turbulent flow, one of the most challenging computational problems of our century. At the core of 3D PTV is the dual-frame fluid motion estimation algorithm, which tracks particles across two consecutive frames. Recently, deep learning-based methods have achieved impressive accuracy in dual-frame fluid motion estimation; however, they heavily depend on large volumes of labeled data. In this paper, we introduce a new method that is **completely self-supervised and notably outperforms its fully-supervised counterparts while requiring only 1\% of the training samples (without labels) used by previous methods.** Our method features a novel zero-divergence loss that is specific to the domain of turbulent flow. Inspired by the success of splat operation in high-dimensional filtering and random fields, we propose a splat-based implementation for this loss which is both efficient and effective. The self-supervised nature of our method naturally supports test-time optimization, leading to the development of a tailored Dynamic Velocimetry Enhancer (DVE) module. We demonstrate that strong cross-domain robustness is achieved through test-time optimization on unseen leave-one-out synthetic domains and real physical/biological domains. Code, data and models are available at [https://github.com/Forrest-110/FluidMotionNet](https://github.com/Forrest-110/FluidMotionNet).
Dual-frame Fluid Motion Estimation with Test-time Optimization and Zero-divergence Loss
[ "Yifei Zhang", "Huan-ang Gao", "Zhou Jiang", "Hao Zhao" ]
NeurIPS.cc/2024/Conference
2410.11934
[ "https://github.com/Forrest-110/FluidMotionNet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WKTNdU155n
@inproceedings{ park2024llamo, title={{LL}aMo: Large Language Model-based Molecular Graph Assistant}, author={Jinyoung Park and Minseong Bae and Dohwan Ko and Hyunwoo J. Kim}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WKTNdU155n} }
Large Language Models (LLMs) have demonstrated remarkable generalization and instruction-following capabilities with instruction tuning. The advancements in LLMs and instruction tuning have led to the development of Large Vision-Language Models (LVLMs). However, the competency of the LLMs and instruction tuning has been less explored in the molecular domain. Thus, we propose LLaMo: Large Language Model-based Molecular graph assistant, which is an end-to-end trained large molecular graph-language model. To bridge the discrepancy between the language and graph modalities, we present the multi-level graph projector that transforms graph representations into graph tokens by abstracting the output representations of each GNN layer and motif representations with the cross-attention mechanism. We also introduce machine-generated molecular graph instruction data to instruction-tune the large molecular graph-language model for general-purpose molecule and language understanding. Our extensive experiments demonstrate that LLaMo shows the best performance on diverse tasks, such as molecular description generation, property prediction, and IUPAC name prediction. The code of LLaMo is available at https://github.com/mlvlab/LLaMo.
LLaMo: Large Language Model-based Molecular Graph Assistant
[ "Jinyoung Park", "Minseong Bae", "Dohwan Ko", "Hyunwoo J. Kim" ]
NeurIPS.cc/2024/Conference
2411.00871
[ "https://github.com/mlvlab/llamo" ]
https://huggingface.co/papers/2411.00871
1
20
1
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=WK2KxPAMQv
@inproceedings{ shin2024exploiting, title={Exploiting Representation Curvature for Boundary Detection in Time Series}, author={Yooju Shin and Jaehyun Park and Susik Yoon and Hwanjun Song and Byung Suk Lee and Jae-Gil Lee}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WK2KxPAMQv} }
*Boundaries* are the timestamps at which a class in a time series changes. Recently, representation-based boundary detection has gained popularity, but its emphasis on consecutive distance difference backfires, especially when the changes are gradual. In this paper, we propose a boundary detection method, **RECURVE**, based on a novel change metric, the ***curvature*** of a representation trajectory, to accommodate both gradual and abrupt changes. Here, a sequence of representations in the representation space is interpreted as a trajectory, and a curvature at each timestamp can be computed. Using the theory of random walk, we formally show that the mean curvature is lower near boundaries than at other points. Extensive experiments using diverse real-world time-series datasets confirm the superiority of RECURVE over state-of-the-art methods.
Exploiting Representation Curvature for Boundary Detection in Time Series
[ "Yooju Shin", "Jaehyun Park", "Susik Yoon", "Hwanjun Song", "Byung Suk Lee", "Jae-Gil Lee" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WJAiaslhin
@inproceedings{ r{\"u}gamer2024a, title={A Functional Extension of Semi-Structured Networks}, author={David R{\"u}gamer and Bernard X.W. Liew and Zainab Altai and Almond St{\"o}cker}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WJAiaslhin} }
Semi-structured networks (SSNs) merge the structures familiar from additive models with deep neural networks, allowing the modeling of interpretable partial feature effects while capturing higher-order non-linearities at the same time. A significant challenge in this integration is maintaining the interpretability of the additive model component. Inspired by large-scale biomechanics datasets, this paper explores extending SSNs to functional data. Existing methods in functional data analysis are promising but often not expressive enough to account for all interactions and non-linearities and do not scale well to large datasets. Although the SSN approach presents a compelling potential solution, its adaptation to functional data remains complex. In this work, we propose a functional SSN method that retains the advantageous properties of classical functional regression approaches while also improving scalability. Our numerical experiments demonstrate that this approach accurately recovers underlying signals, enhances predictive performance, and performs favorably compared to competing methods.
A Functional Extension of Semi-Structured Networks
[ "David Rügamer", "Bernard X.W. Liew", "Zainab Altai", "Almond Stöcker" ]
NeurIPS.cc/2024/Conference
2410.05430
[ "https://github.com/neural-structured-additive-learning/funnel" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WJ04ZX8txM
@inproceedings{ jiang2024do, title={Do {LLM}s dream of elephants (when told not to)? Latent concept association and associative memory in transformers}, author={Yibo Jiang and Goutham Rajendran and Pradeep Kumar Ravikumar and Bryon Aragam}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WJ04ZX8txM} }
Large Language Models (LLMs) have the capacity to store and recall facts. Through experimentation with open-source models, we observe that this ability to retrieve facts can be easily manipulated by changing contexts, even without altering their factual meanings. These findings highlight that LLMs might behave like an associative memory model where certain tokens in the contexts serve as clues to retrieving facts. We mathematically explore this property by studying how transformers, the building blocks of LLMs, can complete such memory tasks. We study a simple latent concept association problem with a one-layer transformer and we show theoretically and empirically that the transformer gathers information using self-attention and uses the value matrix for associative memory.
Do LLMs dream of elephants (when told not to)? Latent concept association and associative memory in transformers
[ "Yibo Jiang", "Goutham Rajendran", "Pradeep Kumar Ravikumar", "Bryon Aragam" ]
NeurIPS.cc/2024/Conference
2406.18400
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WILLwyVmP8
@inproceedings{ debot2024interpretable, title={Interpretable Concept-Based Memory Reasoning}, author={David Debot and Pietro Barbiero and Francesco Giannini and Gabriele Ciravegna and Michelangelo Diligenti and Giuseppe Marra}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WILLwyVmP8} }
The lack of transparency in the decision-making processes of deep learning systems presents a significant challenge in modern artificial intelligence (AI), as it impairs users’ ability to rely on and verify these systems. To address this challenge, Concept Bottleneck Models (CBMs) have made significant progress by incorporating human-interpretable concepts into deep learning architectures. This approach allows predictions to be traced back to specific concept patterns that users can understand and potentially intervene on. However, existing CBMs’ task predictors are not fully interpretable, preventing a thorough analysis and any form of formal verification of their decision-making process prior to deployment, thereby raising significant reliability concerns. To bridge this gap, we introduce Concept-based Memory Reasoner (CMR), a novel CBM designed to provide a human-understandable and provably-verifiable task prediction process. Our approach is to model each task prediction as a neural selection mechanism over a memory of learnable logic rules, followed by a symbolic evaluation of the selected rule. The presence of an explicit memory and the symbolic evaluation allow domain experts to inspect and formally verify the validity of certain global properties of interest for the task prediction process. Experimental results demonstrate that CMR achieves better accuracy-interpretability trade-offs than state-of-the-art CBMs, discovers logic rules consistent with ground truths, allows for rule interventions, and allows pre-deployment verification.
Interpretable Concept-Based Memory Reasoning
[ "David Debot", "Pietro Barbiero", "Francesco Giannini", "Gabriele Ciravegna", "Michelangelo Diligenti", "Giuseppe Marra" ]
NeurIPS.cc/2024/Conference
2407.15527
[ "https://github.com/daviddebot/CMR" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WI2VpcBdnd
@inproceedings{ chen2024provable, title={Provable and Efficient Dataset Distillation for Kernel Ridge Regression}, author={Yilan Chen and Wei Huang and Tsui-Wei Weng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WI2VpcBdnd} }
Deep learning models are now trained on increasingly larger datasets, making it crucial to reduce computational costs and improve data quality. Dataset distillation aims to distill a large dataset into a small synthesized dataset such that models trained on it can achieve similar performance to those trained on the original dataset. While there have been many empirical efforts to improve dataset distillation algorithms, a thorough theoretical analysis and provable, efficient algorithms are still lacking. In this paper, by focusing on dataset distillation for kernel ridge regression (KRR), we show that one data point per class is already necessary and sufficient to recover the original model's performance in many settings. For linear ridge regression and KRR with surjective feature mappings, we provide necessary and sufficient conditions for the distilled dataset to recover the original model's parameters. For KRR with injective feature mappings of deep neural networks, we show that while one data point per class is not sufficient in general, $k+1$ data points can be sufficient for deep linear neural networks, where $k$ is the number of classes. Our theoretical results enable directly constructing analytical solutions for distilled datasets, resulting in a provable and efficient dataset distillation algorithm for KRR. We verify our theory experimentally and show that our algorithm outperforms previous work such as KIP while being significantly more efficient, e.g. 15840$\times$ faster on CIFAR-100.
Provable and Efficient Dataset Distillation for Kernel Ridge Regression
[ "Yilan Chen", "Wei Huang", "Tsui-Wei Weng" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WH5blx5tZ1
@inproceedings{ gardner2024large, title={Large Scale Transfer Learning for Tabular Data via Language Modeling}, author={Joshua P Gardner and Juan Carlos Perdomo and Ludwig Schmidt}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WH5blx5tZ1} }
Tabular data – structured, heterogeneous, spreadsheet-style data with rows and columns – is widely used in practice across many domains. However, while recent foundation models have reduced the need for developing task-specific datasets and predictors in domains such as language modeling and computer vision, this transfer learning paradigm has not had similar impact in the tabular domain. In this work, we seek to narrow this gap and present TABULA-8B, a language model for tabular prediction. We define a process for extracting a large, high-quality training dataset from the TabLib corpus, proposing methods for tabular data filtering and quality control. Using the resulting dataset, which comprises over 2.1B rows from 4.2M unique tables, we fine-tune a Llama 3-8B large language model (LLM) for tabular data prediction (classification and binned regression) using a novel packing and attention scheme for tabular prediction. Through evaluation across a test suite of 329 datasets, we find that TABULA-8B has zero-shot accuracy on unseen tables that is over 15 percentage points (pp) higher than random guessing, a feat that is not possible with existing state-of-the-art tabular prediction models (e.g. XGBoost, TabPFN). In the few-shot setting (1-32 shots), without any fine-tuning on the target datasets, TABULA-8B is 5-15 pp more accurate than XGBoost and TabPFN models that are explicitly trained on equal, or even up to 16× more data. We release our model, code, and data along with the publication of this paper.
Large Scale Transfer Learning for Tabular Data via Language Modeling
[ "Joshua P Gardner", "Juan Carlos Perdomo", "Ludwig Schmidt" ]
NeurIPS.cc/2024/Conference
[ "https://github.com/mlfoundations/tabliblib" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WFbZusv14E
@inproceedings{ wang2024alpine, title={{ALPINE}: Unveiling The Planning Capability of Autoregressive Learning in Language Models}, author={Siwei Wang and Yifei Shen and Shi Feng and Haoran Sun and Shang-Hua Teng and Wei Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WFbZusv14E} }
Planning is a crucial element of both human intelligence and contemporary large language models (LLMs). In this paper, we initiate a theoretical investigation into the emergence of planning capabilities in Transformer-based LLMs via their next-word prediction mechanisms. We model planning as a network path-finding task, where the objective is to generate a valid path from a specified source node to a designated target node. Our mathematical characterization shows that Transformer architectures can execute path-finding by embedding the adjacency and reachability matrices within their weights. Furthermore, our theoretical analysis of gradient-based learning dynamics reveals that LLMs can learn both the adjacency matrix and a limited form of the reachability matrix. These theoretical insights are then validated through experiments, which demonstrate that Transformer architectures indeed learn the adjacency matrix and an incomplete reachability matrix, consistent with our theoretical predictions. When applying our methodology to the real-world planning benchmark Blocksworld, our observations remain consistent. Additionally, our analyses uncover a fundamental limitation of current Transformer architectures in path-finding: these architectures cannot identify reachability relationships through transitivity, which leads to failures in generating paths when concatenation is required. These findings provide new insights into how the internal mechanisms of autoregressive learning facilitate intelligent planning and deepen our understanding of how future LLMs might achieve more advanced and general planning-and-reasoning capabilities across diverse applications.
ALPINE: Unveiling The Planning Capability of Autoregressive Learning in Language Models
[ "Siwei Wang", "Yifei Shen", "Shi Feng", "Haoran Sun", "Shang-Hua Teng", "Wei Chen" ]
NeurIPS.cc/2024/Conference
2405.09220
[ "" ]
https://huggingface.co/papers/2405.09220
0
24
1
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=WEs4WMzndY
@inproceedings{ perera2024annealed, title={Annealed Multiple Choice Learning: Overcoming limitations of Winner-takes-all with annealing}, author={David Perera and Victor Letzelter and Theo Mariotte and Adrien Cortes and Mickael Chen and Slim Essid and Ga{\"e}l Richard}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WEs4WMzndY} }
We introduce Annealed Multiple Choice Learning (aMCL) which combines simulated annealing with MCL. MCL is a learning framework handling ambiguous tasks by predicting a small set of plausible hypotheses. These hypotheses are trained using the Winner-takes-all (WTA) scheme, which promotes the diversity of the predictions. However, this scheme may converge toward an arbitrarily suboptimal local minimum, due to the greedy nature of WTA. We overcome this limitation using annealing, which enhances the exploration of the hypothesis space during training. We leverage insights from statistical physics and information theory to provide a detailed description of the model training trajectory. Additionally, we validate our algorithm by extensive experiments on synthetic datasets, on the standard UCI benchmark, and on speech separation.
Annealed Multiple Choice Learning: Overcoming limitations of Winner-takes-all with annealing
[ "David Perera", "Victor Letzelter", "Theo Mariotte", "Adrien Cortes", "Mickael Chen", "Slim Essid", "Gaël Richard" ]
NeurIPS.cc/2024/Conference
2407.15580
[ "https://github.com/victorletzelter/annealed_mcl" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WEoOreP0n5
@inproceedings{ arnob2024efficient, title={Efficient Reinforcement Learning by Discovering Neural Pathways}, author={Samin Yeasar Arnob and Riyasat Ohib and Sergey M. Plis and Amy Zhang and Alessandro Sordoni and Doina Precup}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WEoOreP0n5} }
Reinforcement learning (RL) algorithms have been very successful at tackling complex control problems, such as AlphaGo or fusion control. However, current research mainly emphasizes solution quality, often achieved by using large models trained on large amounts of data, and does not account for the financial, environmental, and societal costs associated with developing and deploying such models. Modern neural networks are often overparameterized and a significant number of parameters can be pruned without meaningful loss in performance, resulting in more efficient use of the model's capacity (cf. the lottery ticket hypothesis). We present a methodology for identifying sub-networks within a larger network in reinforcement learning (RL). We call such sub-networks neural pathways. We show empirically that even very small learned sub-networks, using less than 5% of the large network's parameters, can provide very good quality solutions. We also demonstrate the training of multiple pathways within the same networks in a multitask setup, where each pathway is encouraged to tackle a separate task. We empirically evaluate our approach on several continuous control tasks, in both online and offline training settings.
Efficient Reinforcement Learning by Discovering Neural Pathways
[ "Samin Yeasar Arnob", "Riyasat Ohib", "Sergey M. Plis", "Amy Zhang", "Alessandro Sordoni", "Doina Precup" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WEf2LT8NtY
@inproceedings{ tang2024adversarially, title={Adversarially Robust Decision Transformer}, author={Xiaohang Tang and Afonso Marques and Parameswaran Kamalaruban and Ilija Bogunovic}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WEf2LT8NtY} }
Decision Transformer (DT), as one of the representative Reinforcement Learning via Supervised Learning (RvS) methods, has achieved strong performance in offline learning tasks by leveraging the powerful Transformer architecture for sequential decision-making. However, in adversarial environments, these methods can be non-robust, since the return is dependent on the strategies of both the decision-maker and adversary. Training a probabilistic model conditioned on observed return to predict action can fail to generalize, as the trajectories that achieve a return in the dataset might have done so due to a suboptimal behavior adversary. To address this, we propose a worst-case-aware RvS algorithm, the Adversarially Robust Decision Transformer (ARDT), which learns and conditions the policy on in-sample minimax returns-to-go. ARDT aligns the target return with the worst-case return learned through minimax expectile regression, thereby enhancing robustness against powerful test-time adversaries. In experiments conducted on sequential games with full data coverage, ARDT can generate a maximin (Nash Equilibrium) strategy, the solution with the largest adversarial robustness. In large-scale sequential games and continuous adversarial RL environments with partial data coverage, ARDT demonstrates significantly superior robustness to powerful test-time adversaries and attains higher worst-case returns compared to contemporary DT methods.
Adversarially Robust Decision Transformer
[ "Xiaohang Tang", "Afonso Marques", "Parameswaran Kamalaruban", "Ilija Bogunovic" ]
NeurIPS.cc/2024/Conference
2407.18414
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WDX45LNZXE
@inproceedings{ li2024onelayer, title={One-Layer Transformer Provably Learns One-Nearest Neighbor In Context}, author={Zihao Li and Yuan Cao and Cheng Gao and Yihan He and Han Liu and Jason Matthew Klusowski and Jianqing Fan and Mengdi Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WDX45LNZXE} }
Transformers have achieved great success in recent years. Interestingly, transformers have shown particularly strong in-context learning capability -- even without fine-tuning, they are still able to solve unseen tasks well purely based on task-specific prompts. In this paper, we study the capability of one-layer transformers in learning the one-nearest neighbor prediction rule. Under a theoretical framework where the prompt contains a sequence of labeled training data and unlabeled test data, we show that, although the loss function is nonconvex, when trained with gradient descent, a single softmax attention layer can successfully learn to behave like a one-nearest neighbor classifier. Our result gives a concrete example on how transformers can be trained to implement nonparametric machine learning algorithms, and sheds light on the role of softmax attention in transformer models.
One-Layer Transformer Provably Learns One-Nearest Neighbor In Context
[ "Zihao Li", "Yuan Cao", "Cheng Gao", "Yihan He", "Han Liu", "Jason Matthew Klusowski", "Jianqing Fan", "Mengdi Wang" ]
NeurIPS.cc/2024/Conference
2411.10830
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WCnJmb7cv1
@inproceedings{ myers2024learning, title={Learning to Assist Humans without Inferring Rewards}, author={Vivek Myers and Evan Ellis and Sergey Levine and Benjamin Eysenbach and Anca Dragan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WCnJmb7cv1} }
Assistive agents should make humans' lives easier. Classically, such assistance is studied through the lens of inverse reinforcement learning, where an assistive agent (e.g., a chatbot, a robot) infers a human's intention and then selects actions to help the human reach that goal. This approach requires inferring intentions, which can be difficult in high-dimensional settings. We build upon prior work that studies assistance through the lens of empowerment: an assistive agent aims to maximize the influence of the human's actions such that they exert greater control over the environmental outcomes and can solve tasks in fewer steps. We lift the major limitation of prior work in this area—scalability to high-dimensional settings—with contrastive successor representations. We formally prove that these representations estimate a similar notion of empowerment to that studied by prior work and provide a ready-made mechanism for optimizing it. Empirically, our proposed method outperforms prior methods on synthetic benchmarks, and scales to Overcooked, a cooperative game setting. Theoretically, our work connects ideas from information theory, neuroscience, and reinforcement learning, and charts a path for representations to play a critical role in solving assistive problems. Our code is available at https://anonymous.4open.science/r/esr-7E94.
Learning to Assist Humans without Inferring Rewards
[ "Vivek Myers", "Evan Ellis", "Sergey Levine", "Benjamin Eysenbach", "Anca Dragan" ]
NeurIPS.cc/2024/Conference
2411.02623
[ "https://github.com/vivekmyers/empowerment_successor_representations" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WCc440cUhX
@inproceedings{ nguyen2024understanding, title={Understanding Transformers via N-Gram Statistics}, author={Timothy Nguyen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WCc440cUhX} }
Transformer-based large language models (LLMs) display extreme proficiency with language, yet a precise understanding of how they work remains elusive. One way of demystifying transformer predictions would be to describe how they depend on their context in terms of simple template functions. This paper takes a first step in this direction by considering families of functions (i.e. rules) formed out of simple N-gram based statistics of the training data. By studying how well these rulesets approximate transformer predictions, we obtain a variety of novel discoveries: a simple method to detect overfitting during training without using a holdout set, a quantitative measure of how transformers progress from learning simple to more complex statistical rules over the course of training, a model-variance criterion governing when transformer predictions tend to be described by N-gram rules, and insights into how well transformers can be approximated by N-gram rulesets in the limit where these rulesets become increasingly complex. In this latter direction, we find that for 79% and 68% of LLM next-token distributions on TinyStories and Wikipedia, respectively, their top-1 predictions agree with those provided by our N-gram rulesets.
Understanding Transformers via N-Gram Statistics
[ "Timothy Nguyen" ]
NeurIPS.cc/2024/Conference
2407.12034
[ "https://github.com/google-deepmind/transformer_ngrams" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WBLPlszJI5
@inproceedings{ allouah2024finetuning, title={Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients}, author={Youssef Allouah and Abdellah El Mrini and Rachid Guerraoui and Nirupam Gupta and Rafael Pinot}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WBLPlszJI5} }
Federated learning (FL) is an appealing paradigm that allows a group of machines (a.k.a. clients) to learn collectively while keeping their data local. However, due to the heterogeneity between the clients’ data distributions, the model obtained through the use of FL algorithms may perform poorly on some clients’ data. Personalization addresses this issue by enabling each client to have a different model tailored to their own data while simultaneously benefiting from the other clients’ data. We consider an FL setting where some clients can be adversarial, and we derive conditions under which full collaboration fails. Specifically, we analyze the generalization performance of an interpolated personalized FL framework in the presence of adversarial clients, and we precisely characterize situations when full collaboration performs strictly worse than fine-tuned personalization. Our analysis determines how much we should scale down the level of collaboration, according to data heterogeneity and the tolerable fraction of adversarial clients. We support our findings with empirical results on mean estimation and binary classification problems, considering synthetic and benchmark image classification datasets.
Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients
[ "Youssef Allouah", "Abdellah El Mrini", "Rachid Guerraoui", "Nirupam Gupta", "Rafael Pinot" ]
NeurIPS.cc/2024/Conference
2409.20329
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WAiqLGfqX6
@inproceedings{ qiu2024derivativeenhanced, title={Derivative-enhanced Deep Operator Network}, author={Yuan Qiu and Nolan Bridges and Peng Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WAiqLGfqX6} }
The deep operator networks (DeepONet), a class of neural operators that learn mappings between function spaces, have recently been developed as surrogate models for parametric partial differential equations (PDEs). In this work we propose a derivative-enhanced deep operator network (DE-DeepONet), which leverages derivative information to enhance the solution prediction accuracy and provides a more accurate approximation of solution-to-parameter derivatives, especially when training data are limited. DE-DeepONet explicitly incorporates linear dimension reduction of high dimensional parameter input into DeepONet to reduce training cost and adds derivative loss in the loss function to reduce the number of required parameter-solution pairs. We further demonstrate that the use of derivative loss can be extended to enhance other neural operators, such as the Fourier neural operator (FNO). Numerical experiments validate the effectiveness of our approach.
Derivative-enhanced Deep Operator Network
[ "Yuan Qiu", "Nolan Bridges", "Peng Chen" ]
NeurIPS.cc/2024/Conference
2402.19242
[ "https://github.com/qy849/de-deeponet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WAT3qu737X
@inproceedings{ cortes2024cardinalityaware, title={Cardinality-Aware Set Prediction and Top-\$k\$ Classification}, author={Corinna Cortes and Anqi Mao and Christopher Mohri and Mehryar Mohri and Yutao Zhong}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=WAT3qu737X} }
We present a detailed study of cardinality-aware top-$k$ classification, a novel approach that aims to learn an accurate top-$k$ set predictor while maintaining a low cardinality. We introduce a new target loss function tailored to this setting that accounts for both the classification error and the cardinality of the set predicted. To optimize this loss function, we propose two families of surrogate losses: cost-sensitive comp-sum losses and cost-sensitive constrained losses. Minimizing these loss functions leads to new cardinality-aware algorithms that we describe in detail in the case of both top-$k$ and threshold-based classifiers. We establish $H$-consistency bounds for our cardinality-aware surrogate loss functions, thereby providing a strong theoretical foundation for our algorithms. We report the results of extensive experiments on CIFAR-10, CIFAR-100, ImageNet, and SVHN datasets demonstrating the effectiveness and benefits of our cardinality-aware algorithms.
Cardinality-Aware Set Prediction and Top-k Classification
[ "Corinna Cortes", "Anqi Mao", "Christopher Mohri", "Mehryar Mohri", "Yutao Zhong" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=W8rFsaKr4m
@inproceedings{ xiao2024mambatree, title={MambaTree: Tree Topology is All You Need in State Space Model}, author={Yicheng Xiao and Lin Song and Shaoli Huang and Jiangshan Wang and Siyu Song and Yixiao Ge and Xiu Li and Ying Shan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=W8rFsaKr4m} }
State space models, employing recursively propagated features, demonstrate strong representation capabilities comparable to Transformer models and superior efficiency. However, constrained by the inherent geometry of sequences, they still fall short in modeling long-range dependencies. To address this issue, we propose the MambaTree network, which first dynamically generates a tree topology based on spatial relationships and input features. Then, feature propagation is performed based on this graph, thereby breaking the original sequence constraints to achieve stronger representation capabilities. Additionally, we introduce a linear complexity dynamic programming algorithm to enhance long-range interactions without increasing computational cost. MambaTree is a versatile multimodal framework that can be applied to both visual and textual tasks. Extensive experiments demonstrate that our method significantly outperforms existing structured state space models on image classification, object detection and segmentation. Besides, by fine-tuning large language models, our approach achieves consistent improvements in multiple textual tasks at minor training cost.
MambaTree: Tree Topology is All You Need in State Space Model
[ "Yicheng Xiao", "Lin Song", "Shaoli Huang", "Jiangshan Wang", "Siyu Song", "Yixiao Ge", "Xiu Li", "Ying Shan" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=W89fKKP2AO
@inproceedings{ zhou2024universal, title={Universal Neural Functionals}, author={Allan Zhou and Chelsea Finn and James Harrison}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=W89fKKP2AO} }
A challenging problem in many modern machine learning tasks is to process weight-space features, i.e., to transform or extract information from the weights and gradients of a neural network. Recent works have developed promising weight-space models that are equivariant to the permutation symmetries of simple feedforward networks. However, they are not applicable to general architectures, since the permutation symmetries of a weight space can be complicated by recurrence or residual connections. This work proposes an algorithm that automatically constructs permutation equivariant models, which we refer to as universal neural functionals (UNFs), for any weight space. Among other applications, we demonstrate how UNFs can be substituted into existing learned optimizer designs, and find promising improvements over prior methods when optimizing small image classifiers and language models. Our results suggest that learned optimizers can benefit from considering the (symmetry) structure of the weight space they optimize.
Universal Neural Functionals
[ "Allan Zhou", "Chelsea Finn", "James Harrison" ]
NeurIPS.cc/2024/Conference
2402.05232
[ "https://github.com/allanyangzhou/universal_neural_functional" ]
https://huggingface.co/papers/2402.05232
0
0
0
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=W5U3XB1C11
@inproceedings{ suresh2024relational, title={Relational Verification Leaps Forward with {RABB}it}, author={Tarun Suresh and Debangshu Banerjee and Gagandeep Singh}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=W5U3XB1C11} }
We propose RABBit, a Branch-and-Bound-based verifier for verifying relational properties defined over Deep Neural Networks, such as robustness against universal adversarial perturbations (UAP). Existing SOTA complete $L_{\infty}$-robustness verifiers cannot reason about dependencies between multiple executions and, as a result, are imprecise for relational verification. In contrast, existing SOTA relational verifiers only apply a single bounding step and do not utilize any branching strategies to refine the obtained bounds, thus producing imprecise results. We develop the first scalable Branch-and-Bound-based relational verifier, RABBit, which efficiently combines branching over multiple executions with cross-executional bound refinement to utilize relational constraints, gaining substantial precision over SOTA baselines on a wide range of datasets and networks. Our code is at https://github.com/uiuc-focal-lab/RABBit.
Relational Verification Leaps Forward with RABBit
[ "Tarun Suresh", "Debangshu Banerjee", "Gagandeep Singh" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=W4pIBQ7bAI
@inproceedings{ li2024mediq, title={MediQ: Question-Asking {LLM}s and a Benchmark for Reliable Interactive Clinical Reasoning}, author={Shuyue Stella Li and Vidhisha Balachandran and Shangbin Feng and Jonathan S. Ilgen and Emma Pierson and Pang Wei Koh and Yulia Tsvetkov}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=W4pIBQ7bAI} }
Users typically engage with LLMs interactively, yet most existing benchmarks evaluate them in a static, single-turn format, posing reliability concerns in interactive scenarios. We identify a key obstacle towards reliability: LLMs are trained to answer any question, even with incomplete context or insufficient knowledge. In this paper, we propose to change the static paradigm to an interactive one, develop systems that proactively ask questions to gather more information and respond reliably, and introduce a benchmark—MEDIQ—to evaluate question-asking ability in LLMs. MEDIQ simulates clinical interactions consisting of a Patient System and an adaptive Expert System; with potentially incomplete initial information, the Expert refrains from making diagnostic decisions when unconfident, and instead elicits missing details via follow-up questions. We provide a pipeline to convert single-turn medical benchmarks into an interactive format. Our results show that directly prompting state-of-the-art LLMs to ask questions degrades performance, indicating that adapting LLMs to proactive information-seeking settings is nontrivial. We experiment with abstention strategies to better estimate model confidence and decide when to ask questions, improving diagnostic accuracy by 22.3%; however, performance still lags compared to an (unrealistic in practice) upper bound with complete information upfront. Further analyses show improved interactive performance with filtering irrelevant contexts and reformatting conversations. Overall, we introduce a novel problem towards LLM reliability, an interactive MEDIQ benchmark, and a novel question-asking system, and highlight directions to extend LLMs’ information-seeking abilities in critical domains.
MediQ: Question-Asking LLMs and a Benchmark for Reliable Interactive Clinical Reasoning
[ "Shuyue Stella Li", "Vidhisha Balachandran", "Shangbin Feng", "Jonathan S. Ilgen", "Emma Pierson", "Pang Wei Koh", "Yulia Tsvetkov" ]
NeurIPS.cc/2024/Conference
2406.00922
[ "https://github.com/stellali7/mediq" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=W433RI0VU4
@inproceedings{ liu2024milpstudio, title={{MILP}-StuDio: {MILP} Instance Generation via Block Structure Decomposition}, author={Haoyang Liu and Jie Wang and Wanbo Zhang and Zijie Geng and Yufei Kuang and Xijun Li and Bin Li and Yongdong Zhang and Feng Wu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=W433RI0VU4} }
Mixed-integer linear programming (MILP) is one of the most popular mathematical formulations with numerous applications. In practice, improving the performance of MILP solvers often requires a large amount of high-quality data, which can be challenging to collect. Researchers thus turn to generation techniques to generate additional MILP instances. However, existing approaches do not take into account specific block structures—which are closely related to the problem formulations—in the constraint coefficient matrices (CCMs) of MILPs. Consequently, they are prone to generating computationally trivial or infeasible instances due to the disruption of block structures and thus problem formulations. To address this challenge, we propose a novel MILP generation framework, called Block Structure Decomposition (MILP-StuDio), to generate high-quality instances by preserving the block structures. Specifically, MILP-StuDio begins by identifying the blocks in CCMs and decomposing the instances into block units, which serve as the building blocks of MILP instances. We then design three operators to construct new instances by removing, substituting, and appending block units in the original instances, enabling us to generate instances with flexible sizes. An appealing feature of MILP-StuDio is its strong ability to preserve the feasibility and computational hardness of the generated instances. Experiments on commonly used benchmarks demonstrate that using instances generated by MILP-StuDio reduces the solving time of learning-based solvers by over 10%.
MILP-StuDio: MILP Instance Generation via Block Structure Decomposition
[ "Haoyang Liu", "Jie Wang", "Wanbo Zhang", "Zijie Geng", "Yufei Kuang", "Xijun Li", "Bin Li", "Yongdong Zhang", "Feng Wu" ]
NeurIPS.cc/2024/Conference
2410.22806
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=W3Dx1TGW3f
@inproceedings{ thoma2024contextual, title={Contextual Bilevel Reinforcement Learning for Incentive Alignment}, author={Vinzenz Thoma and Barna P{\'a}sztor and Andreas Krause and Giorgia Ramponi and Yifan Hu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=W3Dx1TGW3f} }
The optimal policy in various real-world strategic decision-making problems depends both on the environmental configuration and exogenous events. For these settings, we introduce Contextual Bilevel Reinforcement Learning (CB-RL), a stochastic bilevel decision-making model, where the lower level consists of solving a contextual Markov Decision Process (CMDP). CB-RL can be viewed as a Stackelberg Game where the leader and a random context beyond the leader’s control together decide the setup of many MDPs that potentially multiple followers best respond to. This framework extends beyond traditional bilevel optimization and finds relevance in diverse fields such as RLHF, tax design, reward shaping, contract theory and mechanism design. We propose a stochastic Hyper Policy Gradient Descent (HPGD) algorithm to solve CB-RL, and demonstrate its convergence. Notably, HPGD uses stochastic hypergradient estimates, based on observations of the followers’ trajectories. Therefore, it allows followers to use any training procedure and the leader to be agnostic of the specific algorithm, which aligns with various real-world scenarios. We further consider the setting when the leader can influence the training of followers and propose an accelerated algorithm. We empirically demonstrate the performance of our algorithm for reward shaping and tax design.
Contextual Bilevel Reinforcement Learning for Incentive Alignment
[ "Vinzenz Thoma", "Barna Pásztor", "Andreas Krause", "Giorgia Ramponi", "Yifan Hu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=W2qGSMl2Uu
@inproceedings{ wang2024contextgs, title={Context{GS} : Compact 3D Gaussian Splatting with Anchor Level Context Model}, author={Yufei Wang and Zhihao Li and Lanqing Guo and Wenhan Yang and Alex Kot and Bihan Wen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=W2qGSMl2Uu} }
Recently, 3D Gaussian Splatting (3DGS) has become a promising framework for novel view synthesis, offering fast rendering speeds and high fidelity. However, the large number of Gaussians and their associated attributes require effective compression techniques. Existing methods primarily compress neural Gaussians individually and independently, i.e., coding all the neural Gaussians at the same time, with little design for their interactions and spatial dependence. Inspired by the effectiveness of the context model in image compression, we propose the first autoregressive model at the anchor level for 3DGS compression in this work. We divide anchors into different levels and the anchors that are not coded yet can be predicted based on the already coded ones in all the coarser levels, leading to more accurate modeling and higher coding efficiency. To further improve the efficiency of entropy coding, e.g., to code the coarsest level with no already coded anchors, we propose to introduce a low-dimensional quantized feature as the hyperprior for each anchor, which can be effectively compressed. Our work pioneers the context model in the anchor level for 3DGS representation, yielding an impressive size reduction of over 100 times compared to vanilla 3DGS and 15 times compared to the most recent state-of-the-art work Scaffold-GS, while achieving comparable or even higher rendering quality.
ContextGS : Compact 3D Gaussian Splatting with Anchor Level Context Model
[ "Yufei Wang", "Zhihao Li", "Lanqing Guo", "Wenhan Yang", "Alex Kot", "Bihan Wen" ]
NeurIPS.cc/2024/Conference
[ "https://github.com/wyf0912/contextgs" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=W0wq9njGHi
@inproceedings{ li2024kaleidoscope, title={Kaleidoscope: Learnable Masks for Heterogeneous Multi-agent Reinforcement Learning}, author={Xinran Li and Ling Pan and Jun Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=W0wq9njGHi} }
In multi-agent reinforcement learning (MARL), parameter sharing is commonly employed to enhance sample efficiency. However, the popular approach of full parameter sharing often leads to homogeneous policies among agents, potentially limiting the performance benefits that could be derived from policy diversity. To address this critical limitation, we introduce \emph{Kaleidoscope}, a novel adaptive partial parameter sharing scheme that fosters policy heterogeneity while still maintaining high sample efficiency. Specifically, Kaleidoscope maintains one set of common parameters alongside multiple sets of distinct, learnable masks for different agents, dictating the sharing of parameters. It promotes diversity among policy networks by encouraging discrepancy among these masks, without sacrificing the efficiencies of parameter sharing. This design allows Kaleidoscope to dynamically balance high sample efficiency with a broad policy representational capacity, effectively bridging the gap between full parameter sharing and non-parameter sharing across various environments. We further extend Kaleidoscope to critic ensembles in the context of actor-critic algorithms, which could help improve value estimations. Our empirical evaluations across extensive environments, including multi-agent particle environment, multi-agent MuJoCo and StarCraft multi-agent challenge v2, demonstrate the superior performance of Kaleidoscope compared with existing parameter sharing approaches, showcasing its potential for performance enhancement in MARL. The code is publicly available at \url{https://github.com/LXXXXR/Kaleidoscope}.
Kaleidoscope: Learnable Masks for Heterogeneous Multi-agent Reinforcement Learning
[ "Xinran Li", "Ling Pan", "Jun Zhang" ]
NeurIPS.cc/2024/Conference
2410.08540
[ "https://github.com/lxxxxr/kaleidoscope" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
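The partial parameter sharing idea summarized in the Kaleidoscope abstract above (one shared parameter set plus per-agent learnable masks, with a term that encourages the masks to differ) can be illustrated with a minimal PyTorch sketch. This is not the paper's implementation; the soft sigmoid gating, the single linear layer, and the `diversity_penalty` discrepancy term are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaskedSharedLinear(nn.Module):
    """Toy illustration of partial parameter sharing via per-agent learnable masks.

    One weight matrix is shared by all agents; each agent applies its own
    sigmoid-gated mask, so policies can diverge without separate networks.
    This is only a sketch of the general idea, not the Kaleidoscope module itself.
    """

    def __init__(self, n_agents: int, d_in: int, d_out: int):
        super().__init__()
        self.shared = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.mask_logits = nn.Parameter(torch.zeros(n_agents, d_out, d_in))

    def forward(self, x: torch.Tensor, agent: int) -> torch.Tensor:
        mask = torch.sigmoid(self.mask_logits[agent])   # soft, learnable mask
        return x @ (self.shared * mask).T

    def diversity_penalty(self) -> torch.Tensor:
        # Encourage discrepancy among masks: negative mean pairwise L2 distance.
        m = torch.sigmoid(self.mask_logits).flatten(1)
        return -torch.cdist(m, m).mean()

layer = MaskedSharedLinear(n_agents=3, d_in=8, d_out=4)
out = layer(torch.randn(2, 8), agent=1)
print(out.shape, layer.diversity_penalty().item())
```

In a full MARL pipeline, a penalty of this kind would be added to the training loss so that masks, and hence agent policies, are pushed apart while the shared weights preserve sample efficiency.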
null
https://openreview.net/forum?id=W0okTgsPvM
@inproceedings{ huang2024multimodal, title={Multimodal Task Vectors Enable Many-Shot Multimodal In-Context Learning}, author={Brandon Huang and Chancharik Mitra and Leonid Karlinsky and Assaf Arbelle and Trevor Darrell and Roei Herzig}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=W0okTgsPvM} }
The recent success of interleaved Large Multimodal Models (LMMs) in few-shot learning suggests that in-context learning (ICL) with many examples can be promising for learning new tasks. However, this many-shot multimodal ICL setting has one crucial problem: it is fundamentally limited by the model's context length set at pretraining. The problem is especially prominent in the multimodal domain, which processes both text and images, requiring additional tokens. This motivates the need for a multimodal method to compress many shots into fewer tokens without finetuning. In this work, we enable LMMs to perform multimodal, many-shot in-context learning by leveraging Multimodal Task Vectors (MTV)---compact implicit representations of in-context examples compressed in the model's attention heads. Specifically, we first demonstrate the existence of such MTV in LMMs and then leverage these extracted MTV to enable many-shot in-context learning for various vision-and-language tasks. Our experiments suggest that MTV can scale in performance with the number of compressed shots and generalize to similar out-of-domain tasks without additional context length for inference. Code: https://github.com/Brandon3964/MultiModal-Task-Vector
Multimodal Task Vectors Enable Many-Shot Multimodal In-Context Learning
[ "Brandon Huang", "Chancharik Mitra", "Leonid Karlinsky", "Assaf Arbelle", "Trevor Darrell", "Roei Herzig" ]
NeurIPS.cc/2024/Conference
2406.15334
[ "" ]
https://huggingface.co/papers/2406.15334
0
8
1
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=VzoyBrqJ4O
@inproceedings{ wang2024depth, title={Depth Anywhere: Enhancing 360 Monocular Depth Estimation via Perspective Distillation and Unlabeled Data Augmentation}, author={Ning-Hsu Wang and Yu-Lun Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VzoyBrqJ4O} }
Accurately estimating depth in 360-degree imagery is crucial for virtual reality, autonomous navigation, and immersive media applications. Existing depth estimation methods designed for perspective-view imagery fail when applied to 360-degree images due to different camera projections and distortions. We propose a new depth estimation framework that uses unlabeled 360-degree data effectively. Our approach uses state-of-the-art perspective depth estimation models as teacher models to generate pseudo labels through a six-face cube projection technique, enabling efficient labeling of depth in 360-degree images. This method leverages the increasing availability of large datasets. It includes two main stages: offline mask generation for invalid regions and an online semi-supervised joint training regime. We tested our approach on benchmark datasets such as Matterport3D and Stanford2D3D, showing significant improvements in depth estimation accuracy, particularly in zero-shot scenarios. Our proposed training pipeline can enhance any 360 monocular depth estimator and demonstrate effective knowledge transfer across different camera projections and data types.
Depth Anywhere: Enhancing 360 Monocular Depth Estimation via Perspective Distillation and Unlabeled Data Augmentation
[ "Ning-Hsu Wang", "Yu-Lun Liu" ]
NeurIPS.cc/2024/Conference
2406.12849
[ "" ]
https://huggingface.co/papers/2406.12849
1
49
2
2
[]
[]
[ "Albert-NHWang/Depth-Anywhere-App", "freealise/Depth-Anywhere-App" ]
[]
[]
[ "Albert-NHWang/Depth-Anywhere-App", "freealise/Depth-Anywhere-App" ]
1
poster
null
https://openreview.net/forum?id=VzOgnDJMgh
@inproceedings{ jia2024wagle, title={{WAGLE}: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models}, author={Jinghan Jia and Jiancheng Liu and Yihua Zhang and Parikshit Ram and Nathalie Baracaldo and Sijia Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VzOgnDJMgh} }
The need for effective unlearning mechanisms in large language models (LLMs) is increasingly urgent, driven by the necessity to adhere to data regulations and foster ethical generative AI practices. LLM unlearning is designed to reduce the impact of undesirable data influences and associated model capabilities without diminishing the utility of the model if unrelated to the information being forgotten. Despite growing interest, much of the existing research has focused on varied unlearning method designs to boost effectiveness and efficiency. However, the inherent relationship between model weights and LLM unlearning has not been extensively examined. In this paper, we systematically explore how model weights interact with unlearning processes in LLMs, and we design the weight attribution-guided LLM unlearning method, WAGLE, which unveils the interconnections between the 'influence' of weights and the 'influence' of data to forget and retain in LLM generation. By strategically guiding the LLM unlearning across different types of unlearning methods and tasks, WAGLE can erase the undesired content while maintaining the performance of the original tasks. Our extensive experiments show that WAGLE boosts unlearning performance across a range of LLM unlearning methods such as gradient difference and (negative) preference optimization, applications such as fictitious unlearning (TOFU benchmark), malicious use prevention (WMDP benchmark), and copyrighted information removal, and models including Zephyr-7b-beta and Llama2-7b. To the best of our knowledge, our work offers the first principled method for attributing and pinpointing the influential weights in enhancing LLM unlearning. It stands in contrast to previous methods that either lack weight attribution or rely on simpler weight attribution techniques.
WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models
[ "Jinghan Jia", "Jiancheng Liu", "Yihua Zhang", "Parikshit Ram", "Nathalie Baracaldo", "Sijia Liu" ]
NeurIPS.cc/2024/Conference
2410.17509
[ "https://github.com/OPTML-Group/WAGLE" ]
https://huggingface.co/papers/2410.17509
1
0
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=VywZsAGhp0
@inproceedings{ zhou2024deep, title={Deep Graph Neural Networks via Posteriori-Sampling-based Node-Adaptative Residual Module}, author={Jingbo Zhou and Yixuan Du and Ruqiong Zhang and Jun Xia and Zhizhi Yu and Zelin Zang and Di Jin and Carl Yang and Rui Zhang and Stan Z. Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VywZsAGhp0} }
Graph Neural Networks (GNNs), a type of neural network that can learn from graph-structured data through neighborhood information aggregation, have shown superior performance in various downstream tasks. However, as the number of layers increases, node representations become indistinguishable, which is known as over-smoothing. To address this issue, many residual methods have emerged. In this paper, we focus on the over-smoothing issue and related residual methods. First, we revisit over-smoothing from the perspective of overlapping neighborhood subgraphs, and based on this, we explain how residual methods can alleviate over-smoothing by integrating neighborhood subgraphs of multiple orders to avoid the indistinguishability of a single high-order neighborhood subgraph. Additionally, we reveal the drawbacks of previous residual methods, such as the lack of node adaptability and severe loss of high-order neighborhood subgraph information, and propose a \textbf{Posterior-Sampling-based, Node-Adaptive Residual module (PSNR)}. We theoretically demonstrate that PSNR can alleviate the drawbacks of previous residual methods. Furthermore, extensive experiments verify the superiority of the PSNR module in fully observed node classification and missing feature scenarios. Our code is available at \href{https://github.com/jingbo02/PSNR-GNN}{https://github.com/jingbo02/PSNR-GNN}.
Deep Graph Neural Networks via Posteriori-Sampling-based Node-Adaptative Residual Module
[ "Jingbo Zhou", "Yixuan Du", "Ruqiong Zhang", "Jun Xia", "Zhizhi Yu", "Zelin Zang", "Di Jin", "Carl Yang", "Rui Zhang", "Stan Z. Li" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Vxijl0IOId
@inproceedings{ anh-nguyen2024learning, title={Learning Generalized Linear Programming Value Functions}, author={Tu Anh-Nguyen and Joey Huchette and Christian Tjandraatmadja}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Vxijl0IOId} }
We develop a theoretically-grounded learning method for the Generalized Linear Programming Value Function (GVF), which models the optimal value of a linear programming (LP) problem as its objective and constraint bounds vary. This function plays a fundamental role in algorithmic techniques for large-scale optimization, particularly in decomposition for two-stage mixed-integer linear programs (MILPs). This paper establishes a structural characterization of the GVF that enables it to be modeled as a particular neural network architecture, which we then use to learn the GVF in a way that benefits from three notable properties. First, our method produces a true under-approximation of the value function with respect to the constraint bounds. Second, the model is input-convex in the constraint bounds, which not only matches the structure of the GVF but also enables the trained model to be efficiently optimized over using LP. Finally, our learning method is unsupervised, meaning that training data generation does not require computing LP optimal values, which can be prohibitively expensive at large scales. We numerically show that our method can approximate the GVF well, even when compared to supervised methods that collect training data by solving an LP for each data point. Furthermore, as an application of our framework, we develop a fast heuristic method for large-scale two-stage MILPs with continuous second-stage variables, via a compact reformulation that can be solved faster than the full model linear relaxation at large scales and orders of magnitude faster than the original model.
Learning Generalized Linear Programming Value Functions
[ "Tu Anh-Nguyen", "Joey Huchette", "Christian Tjandraatmadja" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=VwUTz2pOnD
@inproceedings{ vakili2024kernelbased, title={Kernel-Based Function Approximation for Average Reward Reinforcement Learning: An Optimist No-Regret Algorithm}, author={Sattar Vakili and Julia Olkhovskaya}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VwUTz2pOnD} }
Reinforcement Learning (RL) utilizing kernel ridge regression to predict the expected value function represents a powerful method with great representational capacity. This setting is a highly versatile framework amenable to analytical results. We consider kernel-based function approximation for RL in the infinite horizon average reward setting, also referred to as the undiscounted setting. We propose an *optimistic* algorithm, similar to acquisition function based algorithms in the special case of bandits. We establish novel *no-regret* performance guarantees for our algorithm, under kernel-based modelling assumptions. Additionally, we derive a novel confidence interval for the kernel-based prediction of the expected value function, applicable across various RL problems.
Kernel-Based Function Approximation for Average Reward Reinforcement Learning: An Optimist No-Regret Algorithm
[ "Sattar Vakili", "Julia Olkhovskaya" ]
NeurIPS.cc/2024/Conference
2410.23498
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Vw1V9AgPXW
@inproceedings{ liangqin2024satformer, title={Satformer: Accurate and Robust Traffic Data Estimation for Satellite Networks}, author={liangqin and Xiyuan Liu and Wenting Wei and Liang Chengbin and Huaxi Gu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Vw1V9AgPXW} }
The operations and maintenance of satellite networks heavily depend on traffic measurements. Due to the large-scale and highly dynamic nature of satellite networks, global measurement encounters significant challenges in terms of complexity and overhead. Estimating global network traffic data from partial traffic measurements is a promising solution. However, the majority of current estimation methods rely on low-rank linear decomposition, which cannot produce accurate estimates because it fails to capture the intricate nonlinear spatio-temporal relationships found in large-scale, highly dynamic traffic data. This paper proposes Satformer, an accurate and robust method for estimating traffic data in satellite networks. Satformer incorporates an adaptive sparse spatio-temporal attention mechanism, in which more attention is paid to specific local regions of the input tensor to improve the model's sensitivity to details and patterns, enhancing its capability to capture nonlinear spatio-temporal relationships. Experiments on small-, medium-, and large-scale satellite network datasets demonstrate that Satformer notably outperforms mathematical and neural baseline methods, providing substantial improvements in error reduction and robustness, especially for larger networks. The approach shows promise for deployment in real systems.
Satformer: Accurate and Robust Traffic Data Estimation for Satellite Networks
[ "liangqin", "Xiyuan Liu", "Wenting Wei", "Liang Chengbin", "Huaxi Gu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Vvcnqs8091
@inproceedings{ qi2024pipeline, title={Pipeline Parallelism with Controllable Memory}, author={Penghui Qi and Xinyi Wan and Nyamdavaa Amar and Min Lin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Vvcnqs8091} }
Pipeline parallelism has been widely explored, but most existing schedules lack a systematic methodology. In this paper, we propose a framework to decompose pipeline schedules as repeating a building block, and show that the lifespan of the building block decides the peak activation memory of the pipeline schedule. Guided by these observations, we find that almost all existing pipeline schedules, to the best of our knowledge, are memory inefficient. To address this, we introduce a family of memory-efficient building blocks with controllable activation memory, which can reduce the peak activation memory to 1/2 of 1F1B without sacrificing efficiency, and even to 1/3 with comparable throughput. We can also achieve almost zero pipeline bubbles while maintaining the same activation memory as 1F1B. Our evaluations demonstrate that in pure pipeline parallelism settings, our methods outperform 1F1B by 7\% to 55\% in terms of throughput. When employing a grid search over hybrid parallelism hyperparameters in practical scenarios, our methods demonstrate a 16\% throughput improvement over the 1F1B baseline for large language models. The implementation is open-sourced at https://github.com/sail-sg/zero-bubble-pipeline-parallelism.
Pipeline Parallelism with Controllable Memory
[ "Penghui Qi", "Xinyi Wan", "Nyamdavaa Amar", "Min Lin" ]
NeurIPS.cc/2024/Conference
2405.15362
[ "https://github.com/sail-sg/zero-bubble-pipeline-parallelism" ]
https://huggingface.co/papers/2405.15362
2
3
0
4
[]
[]
[ "sail/pipeline-parallelism-with-controllable-memory" ]
[]
[]
[ "sail/pipeline-parallelism-with-controllable-memory" ]
1
poster
null
https://openreview.net/forum?id=Vtxy8wFpTj
@inproceedings{ yang2024online, title={Online Budgeted Matching with General Bids}, author={Jianyi Yang and Pengfei Li and Adam Wierman and Shaolei Ren}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Vtxy8wFpTj} }
Online Budgeted Matching (OBM) is a classic problem with important applications in online advertising, online service matching, revenue management, and beyond. Traditional online algorithms typically assume a small bid setting, where the maximum bid-to-budget ratio ($\kappa$) is infinitesimally small. While recent algorithms have tried to address scenarios with non-small or general bids, they often rely on the Fractional Last Matching (FLM) assumption, which allows for accepting partial bids when the remaining budget is insufficient. This assumption, however, does not hold for many applications with indivisible bids. In this paper, we remove the FLM assumption and tackle the open problem of OBM with general bids. We first establish an upper bound of $1-\kappa$ on the competitive ratio for any deterministic online algorithm. We then propose a novel meta algorithm, called MetaAd, which reduces to different algorithms with the first known provable competitive ratios parameterized by the maximum bid-to-budget ratio $\kappa\in [0,1]$. As a by-product, we extend MetaAd to the FLM setting and obtain provably competitive algorithms. Finally, we apply our competitive analysis to the design of learning-augmented algorithms.
Online Budgeted Matching with General Bids
[ "Jianyi Yang", "Pengfei Li", "Adam Wierman", "Shaolei Ren" ]
NeurIPS.cc/2024/Conference
2411.04204
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
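For readers unfamiliar with the Online Budgeted Matching setting summarized in the abstract above, the sketch below shows a plain greedy baseline with indivisible (general) bids, i.e., without the Fractional Last Matching assumption. It is only a reference point under stated assumptions, not the paper's MetaAd algorithm, and the toy instance at the end is hypothetical.

```python
from typing import Dict, List, Tuple

def greedy_obm(budgets: Dict[str, float],
               arrivals: List[Dict[str, float]]) -> Tuple[float, Dict[str, float]]:
    """Greedy baseline for online budgeted matching with indivisible (general) bids.

    budgets:  initial budget per offline agent (e.g., advertiser).
    arrivals: one dict per online arrival, mapping agent -> bid value.
    Each arrival is matched to the feasible agent with the highest bid;
    bids exceeding the remaining budget are rejected (no fractional matching).
    """
    remaining = dict(budgets)
    revenue = 0.0
    for bids in arrivals:
        feasible = {a: b for a, b in bids.items() if b <= remaining.get(a, 0.0)}
        if not feasible:
            continue  # no agent can afford this arrival in full
        agent = max(feasible, key=feasible.get)
        remaining[agent] -= feasible[agent]
        revenue += feasible[agent]
    return revenue, remaining

# Hypothetical toy instance: two advertisers, three arriving impressions.
rev, left = greedy_obm({"A": 1.0, "B": 1.0},
                       [{"A": 0.7, "B": 0.2}, {"A": 0.6, "B": 0.5}, {"B": 0.4}])
print(rev, left)
```

The bid-to-budget ratio $\kappa$ in the abstract is exactly what makes such a greedy rule fragile: the larger a single bid is relative to the budget, the more revenue a myopic acceptance can block.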
null
https://openreview.net/forum?id=VrVx83BkQX
@inproceedings{ wachi2024stepwise, title={Stepwise Alignment for Constrained Language Model Policy Optimization}, author={Akifumi Wachi and Thien Q. Tran and Rei Sato and Takumi Tanabe and Youhei Akimoto}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VrVx83BkQX} }
Safety and trustworthiness are indispensable requirements for real-world applications of AI systems using large language models (LLMs). This paper formulates human value alignment as an optimization problem of the language model policy to maximize reward under a safety constraint, and then proposes an algorithm, Stepwise Alignment for Constrained Policy Optimization (SACPO). One key idea behind SACPO, supported by theory, is that the optimal policy incorporating reward and safety can be directly obtained from a reward-aligned policy. Building on this key idea, SACPO aligns LLMs step-wise with each metric while leveraging simple yet powerful alignment algorithms such as direct preference optimization (DPO). SACPO offers several advantages, including simplicity, stability, computational efficiency, and flexibility of algorithms and datasets. Under mild assumptions, our theoretical analysis provides the upper bounds on optimality and safety constraint violation. Our experimental results show that SACPO can fine-tune Alpaca-7B better than the state-of-the-art method in terms of both helpfulness and harmlessness.
Stepwise Alignment for Constrained Language Model Policy Optimization
[ "Akifumi Wachi", "Thien Q. Tran", "Rei Sato", "Takumi Tanabe", "Youhei Akimoto" ]
NeurIPS.cc/2024/Conference
2404.11049
[ "https://github.com/line/sacpo" ]
https://huggingface.co/papers/2404.11049
0
0
0
5
[ "line-corporation/sacpo", "line-corporation/p-sacpo" ]
[]
[]
[ "line-corporation/sacpo", "line-corporation/p-sacpo" ]
[]
[]
1
poster
null
https://openreview.net/forum?id=VqxODXhU4k
@inproceedings{ fonseca2024nonparametric, title={Nonparametric Instrumental Variable Regression through Stochastic Approximate Gradients}, author={Yuri Fonseca and Caio Peixoto and Yuri Saporito}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VqxODXhU4k} }
Instrumental variables (IVs) provide a powerful strategy for identifying causal effects in the presence of unobservable confounders. Within the nonparametric setting (NPIV), recent methods have been based on nonlinear generalizations of Two-Stage Least Squares and on minimax formulations derived from moment conditions or duality. In a novel direction, we show how to formulate a functional stochastic gradient descent algorithm to tackle NPIV regression by directly minimizing the population risk. We provide theoretical support in the form of bounds on the excess risk, and conduct numerical experiments showcasing our method's superior stability and competitive performance relative to current state-of-the-art alternatives. This algorithm enables flexible estimator choices, such as neural networks or kernel-based methods, as well as non-quadratic loss functions, which may be suitable for structural equations beyond the setting of continuous outcomes and additive noise. Finally, we demonstrate the flexibility of our framework by showing how it naturally addresses the important case of binary outcomes, which has received far less attention in recent developments in the NPIV literature.
Nonparametric Instrumental Variable Regression through Stochastic Approximate Gradients
[ "Yuri Fonseca", "Caio Peixoto", "Yuri Saporito" ]
NeurIPS.cc/2024/Conference
2402.05639
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VqkAKQibpq
@inproceedings{ zheng2024sglang, title={{SGL}ang: Efficient Execution of Structured Language Model Programs}, author={Lianmin Zheng and Liangsheng Yin and Zhiqiang Xie and Chuyue Sun and Jeff Huang and Cody Hao Yu and Shiyi Cao and Christos Kozyrakis and Ion Stoica and Joseph E. Gonzalez and Clark Barrett and Ying Sheng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VqkAKQibpq} }
Large language models (LLMs) are increasingly used for complex tasks that require multiple generation calls, advanced prompting techniques, control flow, and structured inputs/outputs. However, efficient systems are lacking for programming and executing these applications. We introduce SGLang, a system for efficient execution of complex language model programs. SGLang consists of a frontend language and a runtime. The frontend simplifies programming with primitives for generation and parallelism control. The runtime accelerates execution with novel optimizations like RadixAttention for KV cache reuse and compressed finite state machines for faster structured output decoding. Experiments show that SGLang achieves up to $6.4\times$ higher throughput compared to state-of-the-art inference systems on various large language and multi-modal models on tasks including agent control, logical reasoning, few-shot learning benchmarks, JSON decoding, retrieval-augmented generation pipelines, and multi-turn chat. The code is publicly available at https://github.com/sgl-project/sglang.
SGLang: Efficient Execution of Structured Language Model Programs
[ "Lianmin Zheng", "Liangsheng Yin", "Zhiqiang Xie", "Chuyue Sun", "Jeff Huang", "Cody Hao Yu", "Shiyi Cao", "Christos Kozyrakis", "Ion Stoica", "Joseph E. Gonzalez", "Clark Barrett", "Ying Sheng" ]
NeurIPS.cc/2024/Conference
2312.07104
[ "https://github.com/sgl-project/sglang" ]
https://huggingface.co/papers/2312.07104
1
7
1
12
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=VqFz7iTGcl
@inproceedings{ darrin2024when, title={When is an Embedding Model More Promising than Another?}, author={Maxime DARRIN and Philippe Formont and Ismail Ben Ayed and Jackie CK Cheung and Pablo Piantanida}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VqFz7iTGcl} }
Embedders play a central role in machine learning, projecting any object into numerical representations that can, in turn, be leveraged to perform various downstream tasks. The evaluation of embedding models typically depends on domain-specific empirical approaches utilizing downstream tasks, primarily because of the lack of a standardized framework for comparison. However, acquiring adequately large and representative datasets for conducting these assessments is not always viable and can prove to be prohibitively expensive and time-consuming. In this paper, we present a unified approach to evaluate embedders. First, we establish theoretical foundations for comparing embedding models, drawing upon the concepts of sufficiency and informativeness. We then leverage these concepts to devise a tractable comparison criterion (information sufficiency), leading to a task-agnostic and self-supervised ranking procedure. We demonstrate experimentally that our approach aligns closely with the capability of embedding models to facilitate various downstream tasks in both natural language processing and molecular biology. This effectively offers practitioners a valuable tool for prioritizing model trials.
When is an Embedding Model More Promising than Another?
[ "Maxime DARRIN", "Philippe Formont", "Ismail Ben Ayed", "Jackie CK Cheung", "Pablo Piantanida" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Vq2kzpig8v
@inproceedings{ zhou2024reciprocal, title={Reciprocal Reward Influence Encourages Cooperation From Self-Interested Agents}, author={John Luoyu Zhou and Weizhe Hong and Jonathan Kao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Vq2kzpig8v} }
Cooperation between self-interested individuals is a widespread phenomenon in the natural world, but remains elusive in interactions between artificially intelligent agents. Instead, naïve reinforcement learning algorithms typically converge to Pareto-dominated outcomes in even the simplest of social dilemmas. An emerging literature on opponent shaping has demonstrated the ability to reach prosocial outcomes by influencing the learning of other agents. However, such methods differentiate through the learning step of other agents or optimize for meta-game dynamics, which rely on privileged access to opponents' learning algorithms or exponential sample complexity, respectively. To provide a learning rule-agnostic and sample-efficient alternative, we introduce Reciprocators, reinforcement learning agents which are intrinsically motivated to reciprocate the influence of opponents' actions on their returns. This approach seeks to modify other agents' $Q$-values by increasing their return following beneficial actions (with respect to the Reciprocator) and decreasing it after detrimental actions, guiding them towards mutually beneficial actions without directly differentiating through a model of their policy. We show that Reciprocators can be used to promote cooperation in temporally extended social dilemmas during simultaneous learning. Our code is available at https://github.com/johnlyzhou/reciprocator/.
Reciprocal Reward Influence Encourages Cooperation From Self-Interested Agents
[ "John Luoyu Zhou", "Weizhe Hong", "Jonathan Kao" ]
NeurIPS.cc/2024/Conference
2406.01641
[ "https://github.com/johnlyzhou/reciprocator" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VpuOuZOVhP
@inproceedings{ wang2024llmautoda, title={{LLM}-Auto{DA}: Large Language Model-Driven Automatic Data Augmentation for Long-tailed Problems}, author={Pengkun Wang and Zhe Zhao and HaiBin Wen and Fanfu Wang and Binwu Wang and Qingfu Zhang and Yang Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VpuOuZOVhP} }
The long-tailed distribution is the underlying nature of real-world data, and it presents unprecedented challenges for training deep learning models. Existing long-tailed learning paradigms based on re-balancing or data augmentation have partially alleviated the long-tailed problem. However, they still have limitations, such as relying on manually designed augmentation strategies, having a limited search space, and using fixed augmentation strategies. To address these limitations, this paper proposes a novel LLM-based long-tailed data augmentation framework called LLM-AutoDA, which leverages large-scale pretrained models to automatically search for the optimal augmentation strategies suitable for long-tailed data distributions. In addition, it applies this strategy to the original imbalanced data to create an augmented dataset and fine-tune the underlying long-tailed learning model. The performance improvement on the validation set serves as a reward signal to update the generation model, enabling the generation of more effective augmentation strategies in the next iteration. We conducted extensive experiments on multiple mainstream long-tailed learning benchmarks. The results show that LLM-AutoDA outperforms state-of-the-art data augmentation methods and other re-balancing methods significantly.
LLM-AutoDA: Large Language Model-Driven Automatic Data Augmentation for Long-tailed Problems
[ "Pengkun Wang", "Zhe Zhao", "HaiBin Wen", "Fanfu Wang", "Binwu Wang", "Qingfu Zhang", "Yang Wang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VpINEEVLX0
@inproceedings{ han2024a, title={A Topology-aware Graph Coarsening Framework for Continual Graph Learning}, author={Xiaoxue Han and Zhuo Feng and Yue Ning}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VpINEEVLX0} }
Graph Neural Networks (GNNs) experience "catastrophic forgetting" in continual learning setups, where they tend to lose previously acquired knowledge and perform poorly on old tasks. Rehearsal-based methods, which consolidate old knowledge with a replay memory buffer, are a de facto solution due to their straightforward workflow. However, these methods often fail to adequately capture topological information, leading to incorrect input-label mappings in replay samples. To address this, we propose TACO, a topology-aware graph coarsening and continual learning framework that stores information from previous tasks as a reduced graph. Throughout each learning period, this reduced graph expands by integrating with a new graph and aligning shared nodes, followed by a "zoom-out" reduction process to maintain a stable size. We have developed a graph coarsening algorithm based on node representation proximities to efficiently reduce a graph while preserving essential topological information. We empirically demonstrate that the learning process on the reduced graph can closely approximate that on the original graph. We compare TACO with a wide range of state-of-the-art baselines, proving its superiority and the necessity of preserving high-quality topological information for effective replaying.
A Topology-aware Graph Coarsening Framework for Continual Graph Learning
[ "Xiaoxue Han", "Zhuo Feng", "Yue Ning" ]
NeurIPS.cc/2024/Conference
2401.03077
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Vn0FWRImra
@inproceedings{ tajdini2024nearly, title={Nearly Minimax Optimal Submodular Maximization with Bandit Feedback}, author={Artin Tajdini and Lalit K Jain and Kevin Jamieson}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Vn0FWRImra} }
We consider maximizing an unknown monotonic, submodular set function $f: 2^{[n]} \rightarrow [0,1]$ with cardinality constraint under stochastic bandit feedback. At each time $t=1,\dots,T$ the learner chooses a set $S_t \subset [n]$ with $|S_t| \leq k$ and receives reward $f(S_t) + \eta_t$ where $\eta_t$ is mean-zero sub-Gaussian noise. The objective is to minimize the learner's regret with respect to an approximation of the maximum $f(S_*)$ with $|S_*| = k$, obtained through robust greedy maximization of $f$. To date, the best regret bound in the literature scales as $k n^{1/3} T^{2/3}$. And by trivially treating every set as a unique arm one deduces that $\sqrt{ {n \choose k} T }$ is also achievable using standard multi-armed bandit algorithms. In this work, we establish the first minimax lower bound for this setting that scales like $\tilde{\Omega}(\min_{L \le k}(L^{1/3}n^{1/3}T^{2/3} + \sqrt{{n \choose k - L}T}))$. For a slightly restricted algorithm class, we prove a stronger regret lower bound of $\tilde{\Omega}(\min_{L \le k}(Ln^{1/3}T^{2/3} + \sqrt{{n \choose k - L}T}))$. Moreover, we propose an algorithm Sub-UCB that achieves regret $\tilde{\mathcal{O}}(\min_{L \le k}(Ln^{1/3}T^{2/3} + \sqrt{{n \choose k - L}T}))$ capable of matching the lower bound on regret for the restricted class up to logarithmic factors.
Nearly Minimax Optimal Submodular Maximization with Bandit Feedback
[ "Artin Tajdini", "Lalit K Jain", "Kevin Jamieson" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
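To make the setting in the abstract above concrete, here is a minimal sketch of greedy maximization of a monotone submodular function under a cardinality constraint when only noisy (bandit) evaluations are available. The synthetic coverage objective, the noise level `sigma`, and the repetition count `reps` are illustrative assumptions; this is not the paper's Sub-UCB algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative monotone submodular objective: weighted coverage over n items.
n, k, sigma = 20, 5, 0.05
weights = rng.random(n)
covers = [set(rng.choice(n, size=4, replace=False)) for _ in range(n)]

def coverage(S):
    """f(S) in [0, 1]: normalized weight of elements covered by the sets in S."""
    covered = set().union(*(covers[i] for i in S)) if S else set()
    return weights[list(covered)].sum() / weights.sum() if covered else 0.0

def noisy_value(S):
    """Bandit feedback: f(S) plus mean-zero sub-Gaussian (here Gaussian) noise."""
    return coverage(S) + rng.normal(0.0, sigma)

def noisy_greedy(k, reps=50):
    """Greedy selection using averaged noisy evaluations of candidate sets."""
    S = []
    for _ in range(k):
        gains = {}
        for i in set(range(n)) - set(S):
            gains[i] = np.mean([noisy_value(S + [i]) for _ in range(reps)])
        S.append(max(gains, key=gains.get))
    return S

S = noisy_greedy(k)
print("selected:", S, "f(S) =", round(coverage(S), 3))
```

The regret bounds in the abstract quantify how much such noisy exploration must cost relative to the robust greedy benchmark $f(S_*)$ as $T$ grows.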
null
https://openreview.net/forum?id=VikufBLOW1
@inproceedings{ caron2024webscale, title={Web-Scale Visual Entity Recognition: An {LLM}-Driven Data Approach}, author={Mathilde Caron and Alireza Fathi and Cordelia Schmid and Ahmet Iscen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VikufBLOW1} }
Web-scale visual entity recognition, the task of associating images with their corresponding entities within vast knowledge bases like Wikipedia, presents significant challenges due to the lack of clean, large-scale training data. In this paper, we propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation. Instead of relying on the multimodal LLM to directly annotate data, which we found to be suboptimal, we prompt it to reason about potential candidate entity labels by accessing additional contextually relevant information (such as Wikipedia), resulting in more accurate annotations. We further use the multimodal LLM to enrich the dataset by generating question-answer pairs and a grounded fine-grained textual description (referred to as "rationale") that explains the connection between images and their assigned entities. Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks (e.g. +6.9% improvement in OVEN entity task), underscoring the importance of high-quality training data in this domain.
Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach
[ "Mathilde Caron", "Alireza Fathi", "Cordelia Schmid", "Ahmet Iscen" ]
NeurIPS.cc/2024/Conference
2410.23676
[ "" ]
https://huggingface.co/papers/2410.23676
0
0
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=ViTUlZvPDu
@inproceedings{ zhu2024robust, title={Robust Fine-tuning of Zero-shot Models via Variance Reduction}, author={Beier Zhu and Jiequan Cui and Hanwang Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ViTUlZvPDu} }
When fine-tuning zero-shot models like CLIP, our desideratum is for the fine-tuned model to excel in both in-distribution (ID) and out-of-distribution (OOD) settings. Recently, ensemble-based models (ESM) have been shown to offer significant robustness improvement, while preserving high ID accuracy. However, our study finds that ESMs do not solve the ID-OOD trade-offs: they achieve peak performance for ID and OOD accuracy at different mixing coefficients. When optimized for OOD accuracy, the ensemble model exhibits a noticeable decline in ID accuracy, and vice versa. In contrast, we propose a sample-wise ensembling technique that can simultaneously attain the best ID and OOD accuracy without the trade-offs. Specifically, we construct a Zero-Shot Failure (ZSF) set containing training samples incorrectly predicted by the zero-shot model. For each test sample, we calculate its distance to the ZSF set and assign a higher weight to the fine-tuned model in the ensemble if the distance is small. We term our method Variance Reduction Fine-tuning (VRF), as it effectively reduces the variance in ensemble predictions, thereby decreasing residual error. On ImageNet and five derived distribution shifts, our VRF further improves the OOD accuracy by 1.5 - 2.0 pp over the ensemble baselines while maintaining or increasing ID accuracy. VRF achieves similarly large robustness gains (0.9 - 3.1 pp) on 19 other distribution shift benchmarks. Code is available at https://github.com/BeierZhu/VRF.
Robust Fine-tuning of Zero-shot Models via Variance Reduction
[ "Beier Zhu", "Jiequan Cui", "Hanwang Zhang" ]
NeurIPS.cc/2024/Conference
2411.06966
[ "https://github.com/beierzhu/vrf" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
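The sample-wise ensembling idea described in the VRF abstract above can be sketched in a few lines: weight the fine-tuned model more when a test sample lies close to the Zero-Shot Failure (ZSF) set. The sigmoid distance-to-weight mapping and the temperature `tau` below are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

def vrf_style_ensemble(p_zeroshot, p_finetuned, feat, zsf_feats, tau=1.0):
    """Sample-wise ensemble of zero-shot and fine-tuned predictions.

    p_zeroshot, p_finetuned: class-probability vectors for one test sample.
    feat:      feature embedding of the test sample.
    zsf_feats: (m, d) features of training samples the zero-shot model got wrong.
    tau:       hypothetical temperature mapping distance to a mixing weight.
    """
    # Distance from the test sample to the zero-shot-failure (ZSF) set.
    d = np.min(np.linalg.norm(zsf_feats - feat, axis=1))
    # Small distance -> likely a zero-shot failure -> trust the fine-tuned model more.
    w_ft = 1.0 / (1.0 + np.exp(d / tau - 1.0))
    return w_ft * np.asarray(p_finetuned) + (1.0 - w_ft) * np.asarray(p_zeroshot)

# Toy usage with random features and a 3-class prediction.
rng = np.random.default_rng(0)
mixed = vrf_style_ensemble([0.6, 0.3, 0.1], [0.2, 0.7, 0.1],
                           rng.normal(size=8), rng.normal(size=(5, 8)))
print(mixed, mixed.sum())
```

Because the mixing weight is chosen per sample rather than globally, the same model pair can lean toward the zero-shot branch on shifted data and toward the fine-tuned branch near known failure modes, which is the trade-off the abstract targets.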
null
https://openreview.net/forum?id=Vi8AepAXGy
@inproceedings{ tong2024cambrian, title={Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal {LLM}s}, author={Shengbang Tong and Ellis L Brown II and Penghao Wu and Sanghyun Woo and ADITHYA JAIRAM IYER and Sai Charitha Akula and Shusheng Yang and Jihan Yang and Manoj Middepogu and Ziteng Wang and Xichen Pan and Rob Fergus and Yann LeCun and Saining Xie}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Vi8AepAXGy} }
We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a vision-centric approach. While stronger language models can enhance multimodal capabilities, the design choices for vision components are often insufficiently explored and disconnected from visual representation learning research. This gap hinders accurate sensory grounding in real-world scenarios. Our study uses LLMs and visual instruction tuning as an interface to evaluate various visual representations, offering new insights into different models and architectures—self-supervised, strongly supervised, or combinations thereof—based on experiments with over 15 vision models. We critically examine existing MLLM benchmarks, addressing the difficulties involved in consolidating and interpreting results from various tasks. To further improve visual grounding, we propose spatial vision aggregator (SVA), a dynamic and spatially-aware connector that integrates vision features with LLMs while reducing the number of tokens. Additionally, we discuss the curation of high-quality visual instruction-tuning data from publicly available sources, emphasizing the importance of distribution balancing. Collectively, Cambrian-1 not only achieves state-of-the-art performances but also serves as a comprehensive, open cookbook for instruction-tuned MLLMs. We provide model weights, code, supporting tools, datasets, and detailed instruction-tuning and evaluation recipes. We hope our release will inspire and accelerate advancements in multimodal systems and visual representation learning.
Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs
[ "Shengbang Tong", "Ellis L Brown II", "Penghao Wu", "Sanghyun Woo", "ADITHYA JAIRAM IYER", "Sai Charitha Akula", "Shusheng Yang", "Jihan Yang", "Manoj Middepogu", "Ziteng Wang", "Xichen Pan", "Rob Fergus", "Yann LeCun", "Saining Xie" ]
NeurIPS.cc/2024/Conference
2406.16860
[ "https://github.com/cambrian-mllm/cambrian" ]
https://huggingface.co/papers/2406.16860
11
57
4
14
[ "nyu-visionx/cambrian-8b", "nyu-visionx/cambrian-34b", "nyu-visionx/cambrian-13b", "nyu-visionx/cambrian-phi3-3b" ]
[ "nyu-visionx/Cambrian-10M", "nyu-visionx/Cambrian-Alignment", "nyu-visionx/CV-Bench", "magicr/phyworld" ]
[]
[ "nyu-visionx/cambrian-8b", "nyu-visionx/cambrian-34b", "nyu-visionx/cambrian-13b", "nyu-visionx/cambrian-phi3-3b" ]
[ "nyu-visionx/Cambrian-10M", "nyu-visionx/Cambrian-Alignment", "nyu-visionx/CV-Bench", "magicr/phyworld" ]
[]
1
oral
null
https://openreview.net/forum?id=Vhh7ONtfvV
@inproceedings{ balasubramanian2024decomposing, title={Decomposing and Interpreting Image Representations via Text in ViTs Beyond {CLIP}}, author={Sriram Balasubramanian and Samyadeep Basu and Soheil Feizi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Vhh7ONtfvV} }
Recent work has explored how individual components of the CLIP-ViT model contribute to the final representation by leveraging the shared image-text representation space of CLIP. These components, such as attention heads and MLPs, have been shown to capture distinct image features like shape, color or texture. However, understanding the role of these components in arbitrary vision transformers (ViTs) is challenging. To this end, we introduce a general framework which can identify the roles of various components in ViTs beyond CLIP. Specifically, we (a) automate the decomposition of the final representation into contributions from different model components, and (b) linearly map these contributions to CLIP space to interpret them via text. Additionally, we introduce a novel scoring function to rank components by their importance with respect to specific features. Applying our framework to various ViT variants (e.g. DeiT, DINO, DINOv2, Swin, MaxViT), we gain insights into the roles of different components concerning particular image features. These insights facilitate applications such as image retrieval using text descriptions or reference images, visualizing token importance heatmaps, and mitigating spurious correlations. We release our [code](https://github.com/SriramB-98/vit-decompose) to reproduce the experiments in the paper.
Decomposing and Interpreting Image Representations via Text in ViTs Beyond CLIP
[ "Sriram Balasubramanian", "Samyadeep Basu", "Soheil Feizi" ]
NeurIPS.cc/2024/Conference
2406.01583
[ "https://github.com/sriramb-98/vit-decompose" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VcPtU8e6yK
@inproceedings{ li2024textitbifrost, title={$\textit{Bifr\"ost}$: 3D-Aware Image Compositing with Language Instructions}, author={Lingxiao Li and Kaixiong Gong and Wei-Hong Li and Xili Dai and Tao Chen and Xiaojun Yuan and Xiangyu Yue}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VcPtU8e6yK} }
This paper introduces $\textit{Bifröst}$, a novel 3D-aware framework that is built upon diffusion models to perform instruction-based image composition. Previous methods concentrate on image compositing at the 2D level, which fall short in handling complex spatial relationships ($\textit{e.g.}$, occlusion). $\textit{Bifröst}$ addresses these issues by training MLLM as a 2.5D location predictor and integrating depth maps as an extra condition during the generation process to bridge the gap between 2D and 3D, which enhances spatial comprehension and supports sophisticated spatial interactions. Our method begins by fine-tuning MLLM with a custom counterfactual dataset to predict 2.5D object locations in complex backgrounds from language instructions. Then, the image-compositing model is uniquely designed to process multiple types of input features, enabling it to perform high-fidelity image compositions that consider occlusion, depth blur, and image harmonization. Extensive qualitative and quantitative evaluations demonstrate that $\textit{Bifröst}$ significantly outperforms existing methods, providing a robust solution for generating realistically composited images in scenarios demanding intricate spatial understanding. This work not only pushes the boundaries of generative image compositing but also reduces reliance on expensive annotated datasets by effectively utilizing existing resources in innovative ways.
Bifröst: 3D-Aware Image Compositing with Language Instructions
[ "Lingxiao Li", "Kaixiong Gong", "Wei-Hong Li", "Xili Dai", "Tao Chen", "Xiaojun Yuan", "Xiangyu Yue" ]
NeurIPS.cc/2024/Conference
2410.19079
[ "https://github.com/lingxiao-li/bifrost" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VazkRbCGxt
@inproceedings{ lee2024direct, title={Direct Consistency Optimization for Robust Customization of Text-to-Image Diffusion models}, author={Kyungmin Lee and Sangkyung Kwak and Kihyuk Sohn and Jinwoo Shin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VazkRbCGxt} }
Text-to-image (T2I) diffusion models, when fine-tuned on a few personal images, can generate visuals with a high degree of consistency. However, such fine-tuned models are not robust; they often fail to compose with concepts of the pretrained model or other fine-tuned models. To address this, we propose a novel fine-tuning objective, dubbed Direct Consistency Optimization, which controls the deviation between fine-tuning and pretrained models to retain the pretrained knowledge during fine-tuning. Through extensive experiments on subject and style customization, we demonstrate that our method positions itself on a superior Pareto frontier between subject (or style) consistency and image-text alignment over all previous baselines; it not only outperforms the regular fine-tuning objective in image-text alignment, but also shows higher fidelity to the reference images than the method that fine-tunes with an additional prior dataset. More importantly, the models fine-tuned with our method can be merged without interference, allowing us to generate custom subjects in a custom style by composing separately customized subject and style models. Notably, we show that our approach achieves better prompt fidelity and subject fidelity than those post-optimized for merging regular fine-tuned models.
Direct Consistency Optimization for Robust Customization of Text-to-Image Diffusion models
[ "Kyungmin Lee", "Sangkyung Kwak", "Kihyuk Sohn", "Jinwoo Shin" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VaXnxQ3UKo
@inproceedings{ chen2024alphamath, title={AlphaMath Almost Zero: Process Supervision without Process}, author={Guoxin Chen and Minpeng Liao and Chengxi Li and Kai Fan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VaXnxQ3UKo} }
Although recent advancements in large language models (LLMs) have significantly improved their performance on various tasks, they still face challenges with complex and symbolic multi-step reasoning, particularly in mathematical reasoning. To bolster the mathematical reasoning capabilities of LLMs, most existing efforts concentrate on seeking assistance from either domain experts or GPT-4 for high-quality process-supervised data, which is not only expensive but also labor-intensive. In our study, we propose an innovative framework, AlphaMath, that bypasses the need for process annotations (from humans or GPTs) by leveraging Monte Carlo Tree Search (MCTS). This framework focuses on unleashing the potential of a well-pretrained LLM to autonomously enhance its mathematical reasoning. Specifically, we integrate a value model with the LLM, automatically generating both process supervision and step-level evaluation signals in MCTS. Furthermore, we propose an efficient inference strategy—step-level beam search, where the value model is crafted to assist the policy model (i.e., LLM) in navigating more effective reasoning paths, rather than solely relying on prior probabilities. The experimental results on both in-domain and out-of-domain datasets demonstrate that even without GPT-4 or human-annotated process supervision, our AlphaMath framework achieves comparable or superior results to previous state-of-the-art methods.
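As a rough illustration of the step-level beam search described in the AlphaMath abstract above, the hedged sketch below keeps the top-scoring partial solution paths according to a value model at each reasoning step. The `policy_propose` and `value_score` callables and the stop marker are assumed interfaces for illustration, not the released AlphaMath code.

```python
def step_level_beam_search(policy_propose, value_score, question,
                           beam_size=3, max_steps=8, stop_marker="FINAL ANSWER"):
    """Illustrative step-level beam search: at each reasoning step the policy proposes
    candidate next steps and a value model keeps only the top-scoring partial solutions.
    (Assumed interfaces; not the paper's implementation.)"""
    beams = [([question], 0.0)]                      # (steps so far, value estimate)
    for _ in range(max_steps):
        candidates = []
        for steps, value in beams:
            if stop_marker in steps[-1]:             # keep finished solutions as they are
                candidates.append((steps, value))
                continue
            for next_step in policy_propose(steps, n=beam_size):
                new_steps = steps + [next_step]
                candidates.append((new_steps, value_score(new_steps)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
        if all(stop_marker in steps[-1] for steps, _ in beams):
            break
    return beams[0][0]                               # highest-value reasoning path
```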
AlphaMath Almost Zero: Process Supervision without Process
[ "Guoxin Chen", "Minpeng Liao", "Chengxi Li", "Kai Fan" ]
NeurIPS.cc/2024/Conference
2405.03553
[ "https://github.com/MARIO-Math-Reasoning/Super_MARIO" ]
https://huggingface.co/papers/2405.03553
0
0
0
4
[]
[ "MARIO-Math-Reasoning/AlphaMath-Trainset" ]
[]
[]
[ "MARIO-Math-Reasoning/AlphaMath-Trainset" ]
[]
1
poster
null
https://openreview.net/forum?id=VaLAWrLHJv
@inproceedings{ wang2024loraga, title={Lo{RA}-{GA}: Low-Rank Adaptation with Gradient Approximation}, author={Shaowen Wang and Linxi Yu and Jian Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VaLAWrLHJv} }
Fine-tuning large-scale pretrained models is prohibitively expensive in terms of computational and memory costs. LoRA, as one of the most popular Parameter-Efficient Fine-Tuning (PEFT) methods, offers a cost-effective alternative by fine-tuning an auxiliary low-rank model that has significantly fewer parameters. Although LoRA reduces the computational and memory requirements significantly at each iteration, extensive empirical evidence indicates that it converges at a considerably slower rate compared to full fine-tuning, ultimately leading to increased overall compute and often worse test performance. In our paper, we perform an in-depth investigation of the initialization method of LoRA and show that careful initialization (without any change of the architecture and the training algorithm) can significantly enhance both efficiency and performance. In particular, we introduce a novel initialization method, LoRA-GA (Low Rank Adaptation with Gradient Approximation), which aligns the gradients of low-rank matrix product with those of full fine-tuning at the first step. Our extensive experiments demonstrate that LoRA-GA achieves a convergence rate comparable to that of full fine-tuning (hence being significantly faster than vanilla LoRA as well as various recent improvements) while simultaneously attaining comparable or even better performance. For example, on the subset of the GLUE dataset with T5-Base, LoRA-GA outperforms LoRA by 5.69% on average. On larger models such as Llama 2-7B, LoRA-GA shows performance improvements of 0.34, 11.52%, and 5.05% on MTbench, GSM8k, and Human-eval, respectively. Additionally, we observe up to 2-4 times convergence speed improvement compared to vanilla LoRA, validating its effectiveness in accelerating convergence and enhancing model performance.
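A minimal sketch of a gradient-aligned LoRA initialization in the spirit of the abstract above is given below, assuming an SVD-based split of the first-step gradient; the exact scaling and procedure used by LoRA-GA may differ.

```python
import torch

def gradient_aligned_lora_init(weight_grad: torch.Tensor, rank: int, scale: float = 1.0):
    """Sketch: build LoRA factors (A, B) from the top singular directions of the
    first-step full fine-tuning gradient of the frozen weight, so that the low-rank
    update initially moves in a similar direction. (Assumed procedure, for illustration.)"""
    U, S, Vh = torch.linalg.svd(weight_grad, full_matrices=False)
    B = scale * U[:, :rank]      # shape (out_features, rank)
    A = Vh[:rank, :]             # shape (rank, in_features)
    return A, B

# Usage sketch: run one forward/backward pass with the frozen base model to obtain
# weight.grad for each adapted layer, then initialize the LoRA adapter with (A, B).
```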
LoRA-GA: Low-Rank Adaptation with Gradient Approximation
[ "Shaowen Wang", "Linxi Yu", "Jian Li" ]
NeurIPS.cc/2024/Conference
2407.05000
[ "https://github.com/outsider565/lora-ga" ]
https://huggingface.co/papers/2407.05000
0
0
0
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=VaJ4XOW7Ey
@inproceedings{ riemer2024balancing, title={Balancing Context Length and Mixing Times for Reinforcement Learning at Scale}, author={Matthew Riemer and Khimya Khetarpal and Janarthanan Rajendran and Sarath Chandar}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VaJ4XOW7Ey} }
Due to the recent remarkable advances in artificial intelligence, researchers have begun to consider challenging learning problems such as learning to generalize behavior from large offline datasets or learning online in non-Markovian environments. Meanwhile, recent advances in both of these areas have increasingly relied on conditioning policies on large context lengths. A natural question is whether there is a limit to the performance benefits of increasing the context length, assuming the necessary computation is available. In this work, we establish a novel theoretical result that links the context length of a policy to the time needed to reliably evaluate its performance (i.e., its mixing time) in large-scale partially observable reinforcement learning environments that exhibit latent sub-task structure. This analysis underscores a key tradeoff: when we extend the context length, our policy can more effectively model non-Markovian dependencies, but this comes at the cost of potentially slower policy evaluation and, as a result, slower downstream learning. Moreover, our empirical results highlight the relevance of this analysis when leveraging Transformer-based neural networks. This perspective will become increasingly pertinent as the field scales towards larger and more realistic environments, opening up a number of potential future directions for improving the way we design learning agents.
Balancing Context Length and Mixing Times for Reinforcement Learning at Scale
[ "Matthew Riemer", "Khimya Khetarpal", "Janarthanan Rajendran", "Sarath Chandar" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VZQmIoDGBG
@inproceedings{ yin2024safeworld, title={SafeWorld: Geo-Diverse Safety Alignment}, author={Da Yin and Haoyi Qiu and Kung-Hsiang Huang and Kai-Wei Chang and Nanyun Peng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VZQmIoDGBG} }
In the rapidly evolving field of Large Language Models (LLMs), ensuring safety is a crucial and widely discussed topic. However, existing works often overlook the geo-diversity of cultural and legal standards across the world. To reveal the challenges posed by geo-diverse safety standards, we introduce SafeWorld, a novel benchmark specifically designed to evaluate LLMs’ ability to generate responses that are not only helpful but also culturally sensitive and legally compliant across diverse global contexts. SafeWorld encompasses 2,775 test user queries, each grounded in high-quality, human-verified cultural norms and legal policies from 50 countries and 493 regions/races. On top of it, we propose a multi-dimensional automatic safety evaluation framework that assesses the contextual appropriateness, accuracy, and comprehensiveness of responses. Our evaluations reveal that current LLMs struggle to meet these criteria effectively. To enhance LLMs’ alignment with geo-diverse safety standards, we synthesize helpful preference pairs for Direct Preference Optimization (DPO) alignment. The preference pair construction aims to encourage LLMs to behave appropriately and provide precise references to relevant cultural norms and policies when necessary. Our trained SafeWorldLM outperforms all competing models, including GPT-4o, on all three evaluation dimensions by a large margin. Global human evaluators also note a nearly 20% higher winning rate in helpfulness and harmfulness evaluation.
SafeWorld: Geo-Diverse Safety Alignment
[ "Da Yin", "Haoyi Qiu", "Kung-Hsiang Huang", "Kai-Wei Chang", "Nanyun Peng" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VXxj3XZ1X8
@inproceedings{ turishcheva2024reproducibility, title={Reproducibility of predictive networks for mouse visual cortex}, author={Polina Turishcheva and Max F Burg and Fabian H. Sinz and Alexander S Ecker}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VXxj3XZ1X8} }
Deep predictive models of neuronal activity have recently enabled several new discoveries about the selectivity and invariance of neurons in the visual cortex. These models learn a shared set of nonlinear basis functions, which are linearly combined via a learned weight vector to represent a neuron's function. Such weight vectors, which can be thought of as embeddings of neuronal function, have been proposed to define functional cell types via unsupervised clustering. However, as deep models are usually highly overparameterized, the learning problem is unlikely to have a unique solution, which raises the question of whether such embeddings can be used in a meaningful way for downstream analysis. In this paper, we investigate how stable neuronal embeddings are with respect to changes in model architecture and initialization. We find $L_1$ regularization to be an important ingredient for structured embeddings and develop an adaptive regularization that adjusts the strength of regularization per neuron. This regularization improves both predictive performance and how consistently neuronal embeddings cluster across model fits compared to uniform regularization. To overcome overparameterization, we propose an iterative feature pruning strategy which reduces the dimensionality of performance-optimized models by half without loss of performance and improves the consistency of neuronal embeddings with respect to clustering neurons. Our results suggest that to achieve an objective taxonomy of cell types or a compact representation of the functional landscape, we need novel architectures or learning techniques that improve identifiability. The code is available at https://github.com/pollytur/readout_reproducibility.
Reproducibility of predictive networks for mouse visual cortex
[ "Polina Turishcheva", "Max F Burg", "Fabian H. Sinz", "Alexander S Ecker" ]
NeurIPS.cc/2024/Conference
2406.12625
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=VXJVNdmXO4
@inproceedings{ lu2024data, title={Data Acquisition via Experimental Design for Data Markets}, author={Charles Lu and Baihe Huang and Sai Praneeth Karimireddy and Praneeth Vepakomma and Michael Jordan and Ramesh Raskar}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VXJVNdmXO4} }
The acquisition of training data is crucial for machine learning applications. Data markets can increase the supply of data, particularly in data-scarce domains such as healthcare, by incentivizing potential data providers to join the market. A major challenge for a data buyer in such a market is choosing the most valuable data points from a data seller. Unlike prior work in data valuation, which assumes centralized data access, we propose a federated approach to the data acquisition problem that is inspired by linear experimental design. Our proposed data acquisition method achieves lower prediction error without requiring labeled validation data and can be optimized in a fast and federated procedure. The key insight of our work is that a method that directly estimates the benefit of acquiring data for test set prediction is particularly compatible with a decentralized market setting.
Data Acquisition via Experimental Design for Data Markets
[ "Charles Lu", "Baihe Huang", "Sai Praneeth Karimireddy", "Praneeth Vepakomma", "Michael Jordan", "Ramesh Raskar" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VWf6ZVx5S2
@inproceedings{ zhong2024transforming, title={Transforming Vision Transformer: Towards Efficient Multi-Task Asynchronous Learner}, author={Hanwen Zhong and Jiaxin Chen and Yutong Zhang and Di Huang and Yunhong Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VWf6ZVx5S2} }
Multi-Task Learning (MTL) for Vision Transformer aims at enhancing the model capability by tackling multiple tasks simultaneously. Most recent works have predominantly focused on designing Mixture-of-Experts (MoE) structures and integrating Low-Rank Adaptation (LoRA) to efficiently perform multi-task learning. However, their rigid combination hampers both the optimization of MoE and the effectiveness of reparameterization of LoRA, leading to sub-optimal performance and low inference speed. In this work, we propose a novel approach dubbed Efficient Multi-Task Learning (EMTAL) by transforming a pre-trained Vision Transformer into an efficient multi-task learner during training, and reparameterizing the learned structure for efficient inference. Specifically, we first develop the MoEfied LoRA structure, which decomposes the pre-trained Transformer into a low-rank MoE structure and employs LoRA to fine-tune the parameters. Subsequently, we take into account the intrinsic asynchronous nature of multi-task learning and devise a learning Quality Retaining (QR) optimization mechanism by leveraging the historical high-quality class logits to prevent a well-trained task from performance degradation. Finally, we design a router fading strategy to integrate the learned parameters into the original Transformer, achieving efficient inference. Extensive experiments on public benchmarks demonstrate the superiority of our method compared to the state-of-the-art multi-task learning approaches.
Transforming Vision Transformer: Towards Efficient Multi-Task Asynchronous Learner
[ "Hanwen Zhong", "Jiaxin Chen", "Yutong Zhang", "Di Huang", "Yunhong Wang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VVd3iOKPMJ
@inproceedings{ konuk2024learning, title={Learning from Offline Foundation Features with Tensor Augmentations}, author={Emir Konuk and Christos Matsoukas and Moein Sorkhei and Phitchapha Lertsiravarameth and Kevin Smith}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VVd3iOKPMJ} }
We introduce Learning from Offline Foundation Features with Tensor Augmentations (LOFF-TA), an efficient training scheme designed to harness the capabilities of foundation models in limited resource settings where their direct development is not feasible. LOFF-TA involves training a compact classifier on cached feature embeddings from a frozen foundation model, resulting in up to $37\times$ faster training and up to $26\times$ reduced GPU memory usage. Because the embeddings of augmented images would be too numerous to store, yet the augmentation process is essential for training, we propose to apply tensor augmentations to the cached embeddings of the original non-augmented images. LOFF-TA makes it possible to leverage the power of foundation models, regardless of their size, in settings with limited computational capacity. Moreover, LOFF-TA can be used to apply foundation models to high-resolution images without increasing compute. In certain scenarios, we find that training with LOFF-TA yields better results than directly fine-tuning the foundation model.
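The general recipe above, caching frozen foundation-model embeddings once and then training a small classifier with augmentations applied directly to the cached tensors, can be sketched as follows; the specific augmentations and classifier head are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

def augment_embeddings(z: torch.Tensor, noise_std: float = 0.1) -> torch.Tensor:
    """Illustrative tensor augmentation on cached embeddings (assumed choices):
    additive Gaussian noise plus a small random per-sample rescaling."""
    scale = torch.empty(z.size(0), 1).uniform_(0.9, 1.1)
    return scale * (z + noise_std * torch.randn_like(z))

# Offline step, done once: cached = frozen_foundation_model(images); torch.save(cached, ...)
cached = torch.randn(256, 1024)                  # stand-in for cached embeddings
labels = torch.randint(0, 10, (256,))
classifier = nn.Sequential(nn.LayerNorm(1024), nn.Linear(1024, 10))
opt = torch.optim.AdamW(classifier.parameters(), lr=1e-3)

for _ in range(10):                              # toy training loop on cached features only
    logits = classifier(augment_embeddings(cached))
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
```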
Learning from Offline Foundation Features with Tensor Augmentations
[ "Emir Konuk", "Christos Matsoukas", "Moein Sorkhei", "Phitchapha Lertsiravarameth", "Kevin Smith" ]
NeurIPS.cc/2024/Conference
2410.02527
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VUuOsBrqaw
@inproceedings{ zhao2024fug, title={{FUG}: Feature-Universal Graph Contrastive Pre-training for Graphs with Diverse Node Features}, author={Jitao Zhao and Di Jin and Meng Ge and Lianze Shan and Xin Wang and Dongxiao He and Zhiyong Feng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VUuOsBrqaw} }
Graph Neural Networks (GNNs), known for their effective graph encoding, are extensively used across various fields. Graph self-supervised pre-training, which trains GNN encoders without manual labels to generate high-quality graph representations, has garnered widespread attention. However, due to the inherently complex characteristics of graphs, GNN encoders pre-trained on one dataset struggle to directly adapt to others that have different node feature shapes. This typically necessitates either model rebuilding or data alignment. The former results in non-transferability, as each dataset requires rebuilding a new model, while the latter brings serious knowledge loss since it forces features into a uniform shape by preprocessing such as Principal Component Analysis (PCA). To address this challenge, we propose a new Feature-Universal Graph contrastive pre-training strategy (FUG) that naturally avoids the need for model rebuilding and data reshaping. Specifically, inspired by discussions in existing work on the relationship between contrastive learning and PCA, we conducted a theoretical analysis and discovered that PCA's optimization objective is a special case of that in contrastive learning. We designed an encoder with contrastive constraints to emulate PCA's generation of the basis transformation matrix, which is utilized to losslessly adapt features in different datasets. Furthermore, we introduced a global uniformity constraint to replace negative sampling, reducing the time complexity from $O(n^2)$ to $O(n)$, and by explicitly defining positive samples, FUG avoids the substantial memory requirements of data augmentation. In cross-domain experiments, FUG achieves performance close to that of newly re-trained models. The source code is available at: https://github.com/hedongxiao-tju/FUG.
FUG: Feature-Universal Graph Contrastive Pre-training for Graphs with Diverse Node Features
[ "Jitao Zhao", "Di Jin", "Meng Ge", "Lianze Shan", "Xin Wang", "Dongxiao He", "Zhiyong Feng" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VUgXAWOCQz
@inproceedings{ kamoutsi2024randomized, title={Randomized algorithms and {PAC} bounds for inverse reinforcement learning in continuous spaces}, author={Angeliki Kamoutsi and Peter Schmitt-F{\"o}rster and Tobias Sutter and Volkan Cevher and John Lygeros}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VUgXAWOCQz} }
This work studies discrete-time discounted Markov decision processes with continuous state and action spaces and addresses the inverse problem of inferring a cost function from observed optimal behavior. We first consider the case in which we have access to the entire expert policy and characterize the set of solutions to the inverse problem by using occupation measures, linear duality, and complementary slackness conditions. To avoid trivial solutions and ill-posedness, we introduce a natural linear normalization constraint. This results in an infinite-dimensional linear feasibility problem, prompting a thorough analysis of its properties. Next, we use linear function approximators and adopt a randomized approach, namely the scenario approach and related probabilistic feasibility guarantees, to derive $\varepsilon$-optimal solutions for the inverse problem. We further discuss the sample complexity for a desired approximation accuracy. Finally, we deal with the more realistic case where we only have access to a finite set of expert demonstrations and a generative model and provide bounds on the error made when working with samples.
Randomized algorithms and PAC bounds for inverse reinforcement learning in continuous spaces
[ "Angeliki Kamoutsi", "Peter Schmitt-Förster", "Tobias Sutter", "Volkan Cevher", "John Lygeros" ]
NeurIPS.cc/2024/Conference
2405.15509
[ "https://github.com/RAPACIRLCS/code" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VUWvVvNi6r
@inproceedings{ teo2024unveiling, title={Unveiling the Hidden Structure of Self-Attention via Kernel Principal Component Analysis}, author={Rachel Teo and Tan Minh Nguyen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VUWvVvNi6r} }
The remarkable success of transformers in sequence modeling tasks, spanning various applications in natural language processing and computer vision, is attributed to the critical role of self-attention. Similar to the development of most deep learning models, the construction of these attention mechanisms relies on heuristics and experience. In our work, we derive self-attention from kernel principal component analysis (kernel PCA) and show that self-attention projects its query vectors onto the principal component axes of its key matrix in a feature space. We then formulate the exact formula for the value matrix in self-attention, theoretically and empirically demonstrating that this value matrix captures the eigenvectors of the Gram matrix of the key vectors in self-attention. Leveraging our kernel PCA framework, we propose Attention with Robust Principal Components (RPC-Attention), a novel class of robust attention that is resilient to data contamination. We empirically demonstrate the advantages of RPC-Attention over softmax attention on the ImageNet-1K object classification, WikiText-103 language modeling, and ADE20K image segmentation task.
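The stated view, that self-attention projects query vectors onto the principal component axes of the key matrix, can be made concrete with a plain (linear-kernel) PCA projection as in the sketch below; this is a simplified analogy for orientation, not the paper's feature-space derivation or the RPC-Attention mechanism.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard softmax self-attention, shown for reference."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V

def project_queries_onto_key_pcs(Q, K, n_components=4):
    """Linear-kernel simplification of the analogy: compute the principal axes of the
    key matrix and project the (centered) queries onto them."""
    key_mean = K.mean(axis=0, keepdims=True)
    U, S, Vt = np.linalg.svd(K - key_mean, full_matrices=False)
    axes = Vt[:n_components]                 # principal axes of the keys
    return (Q - key_mean) @ axes.T           # query coordinates along those axes
```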
Unveiling the Hidden Structure of Self-Attention via Kernel Principal Component Analysis
[ "Rachel Teo", "Tan Minh Nguyen" ]
NeurIPS.cc/2024/Conference
2406.13762
[ "https://github.com/rachtsy/kpca_code" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VUBtAcQN44
@inproceedings{ chen2024a, title={A Motion-aware Spatio-temporal Graph for Video Salient Object Ranking}, author={Hao Chen and Yufei Zhu and Yongjian Deng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VUBtAcQN44} }
Video salient object ranking aims to simulate the human attention mechanism by dynamically prioritizing the visual attraction of objects in a scene over time. Despite its numerous practical applications, this area remains underexplored. In this work, we propose a graph model for video salient object ranking. This graph simultaneously explores multi-scale spatial contrasts and intra-/inter-instance temporal correlations across frames to extract diverse spatio-temporal saliency cues. It has two advantages: 1. Unlike previous methods that only perform global inter-frame contrast or compare all proposals across frames globally, we explicitly model the motion of each instance by comparing its features with those in the same spatial region in adjacent frames, thus obtaining more accurate motion saliency cues. 2. We synchronize the spatio-temporal saliency cues in a single graph for joint optimization, which exhibits better dynamics compared to the previous stage-wise methods that prioritize spatial cues followed by temporal cues. Additionally, we propose a simple yet effective video retargeting method based on video saliency ranking. Extensive experiments demonstrate the superiority of our model in video salient object ranking and the effectiveness of the video retargeting method. Our codes/models are released at [https://github.com/zyf-815/VSOR/tree/main](https://github.com/zyf-815/VSOR/tree/main).
A Motion-aware Spatio-temporal Graph for Video Salient Object Ranking
[ "Hao Chen", "Yufei Zhu", "Yongjian Deng" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VTJvTa41D0
@inproceedings{ zhang2024stability, title={Stability and Generalizability in {SDE} Diffusion Models with Measure-Preserving Dynamics}, author={Weitong Zhang and Chengqi Zang and Liu Li and Sarah Cechnicka and Cheng Ouyang and Bernhard Kainz}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VTJvTa41D0} }
Inverse problems describe the process of estimating the causal factors from a set of measurements or data. The mapping of often incomplete or degraded data to parameters is ill-posed; thus, data-driven iterative solutions are required, for example when reconstructing clean images from poor signals. Diffusion models have shown promise as potent generative tools for solving inverse problems due to their superior reconstruction quality and their compatibility with iterative solvers. However, most existing approaches are limited to linear inverse problems represented as Stochastic Differential Equations (SDEs). This simplification falls short of addressing the challenging nature of real-world problems, leading to amplified cumulative errors and biases. We provide an explanation for this gap through the lens of measure-preserving dynamics of Random Dynamical Systems (RDS), with which we analyse the Temporal Distribution Discrepancy, and thus introduce a theoretical framework based on RDS for SDE diffusion models. We uncover several strategies that inherently enhance the stability and generalizability of diffusion models for inverse problems and introduce a novel score-based diffusion framework, the Dynamics-aware SDE Diffusion Generative Model (D^3GM). The measure-preserving property, together with the RDS concept of stability, allows the degraded measurement to be returned to its original state despite complex degradation. Our extensive experimental results corroborate the effectiveness of D^3GM across multiple benchmarks, including a prominent application for inverse problems, magnetic resonance imaging.
Stability and Generalizability in SDE Diffusion Models with Measure-Preserving Dynamics
[ "Weitong Zhang", "Chengqi Zang", "Liu Li", "Sarah Cechnicka", "Cheng Ouyang", "Bernhard Kainz" ]
NeurIPS.cc/2024/Conference
2406.13652
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VSz9na5Jtl
@inproceedings{ ban2024pagerank, title={PageRank Bandits for Link Prediction}, author={Yikun Ban and Jiaru Zou and Zihao Li and Yunzhe Qi and Dongqi Fu and Jian Kang and Hanghang Tong and Jingrui He}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VSz9na5Jtl} }
Link prediction is a critical problem in graph learning with broad applications such as recommender systems and knowledge graph completion. Numerous research efforts have been directed at solving this problem, including approaches based on similarity metrics and Graph Neural Networks (GNN). However, most existing solutions are still rooted in conventional supervised learning, which makes it challenging to adapt over time to changing customer interests and to address the inherent dilemma of exploitation versus exploration in link prediction. To tackle these challenges, this paper reformulates link prediction as a sequential decision-making process, where each link prediction interaction occurs sequentially. We propose a novel fusion algorithm, PRB (PageRank Bandits), which is the first to combine contextual bandits with PageRank for collaborative exploitation and exploration. We also introduce a new reward formulation and provide a theoretical performance guarantee for PRB. Finally, we extensively evaluate PRB in both online and offline settings, comparing it with bandit-based and graph-based methods. The empirical success of PRB demonstrates the value of the proposed fusion approach. Our code is released at https://github.com/jiaruzouu/PRB.
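As a hedged sketch of how PageRank scores and a bandit-style exploration bonus could be fused into a single link-scoring rule in the spirit of the abstract above, consider the following; the fusion rule and its parameters are assumptions for illustration, not the PRB algorithm itself.

```python
import numpy as np

def pagerank(adj: np.ndarray, damping: float = 0.85, n_iter: int = 100) -> np.ndarray:
    """Standard power-iteration PageRank on a (possibly weighted) adjacency matrix."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(out_deg, 1e-12)         # row-stochastic transition matrix
    P[out_deg.squeeze() == 0] = 1.0 / n          # dangling nodes jump uniformly
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = (1 - damping) / n + damping * (P.T @ r)
    return r

def fused_link_score(pr_score, est_reward, pull_count, t, alpha=1.0, beta=1.0):
    """Assumed fusion rule: exploit graph structure via PageRank and the reward estimate,
    and explore via a UCB-style bonus on how rarely a candidate link has been tried."""
    bonus = np.sqrt(2.0 * np.log(t + 1.0) / (pull_count + 1.0))
    return alpha * pr_score + est_reward + beta * bonus
```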
PageRank Bandits for Link Prediction
[ "Yikun Ban", "Jiaru Zou", "Zihao Li", "Yunzhe Qi", "Dongqi Fu", "Jian Kang", "Hanghang Tong", "Jingrui He" ]
NeurIPS.cc/2024/Conference
2411.01410
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VRRvJnxgQe
@inproceedings{ wang2024noisegpt, title={Noise{GPT}: Label Noise Detection and Rectification through Probability Curvature}, author={Haoyu Wang and Zhuo Huang and Zhiwei Lin and Tongliang Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VRRvJnxgQe} }
Machine learning craves high-quality data, which is a major bottleneck in realistic deployment, as it takes abundant resources and massive human labor to collect and label data. Unfortunately, label noise, where image data is mismatched with an incorrect label, exists ubiquitously in all kinds of datasets, significantly degrading the learning performance of deep networks. Learning with Label Noise (LNL) has been a common strategy for mitigating the influence of noisy labels. However, existing LNL methods either require pretraining that uses the memorization effect to separate clean data from noisy ones or rely on dataset assumptions that cannot extend to various scenarios. Thanks to the development of Multimodal Large Language Models (MLLMs), which possess massive knowledge and In-Context Learning (ICL) ability, this paper proposes NoiseGPT to effectively leverage MLLMs as a knowledge expert for conducting label noise detection and rectification. Specifically, we observe a probability curvature effect of MLLMs where clean and noisy examples reside on curvatures with different smoothness, further enabling the detection of label noise. By designing a token-wise Mix-of-Feature (MoF) technique to produce the curvature, we propose an In-Context Discrepancy (ICD) measure to determine the authenticity of an image-label pair. Subsequently, we repeat such a process to find the best matching pairs to complete our label rectification. Through extensive experiments, we carefully demonstrate the effectiveness of NoiseGPT on detecting and cleansing dataset noise; on ILSVRC12 in particular, the AUROC of NoiseGPT reaches over 0.92. By integrating NoiseGPT with existing methods, classification performance on noisy datasets can be significantly improved, typically by 22.8% on 80% symmetric CIFAR-10 with M-correction.
NoiseGPT: Label Noise Detection and Rectification through Probability Curvature
[ "Haoyu Wang", "Zhuo Huang", "Zhiwei Lin", "Tongliang Liu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VR2RdSxtzs
@inproceedings{ lei2024macm, title={{MACM}: Utilizing a Multi-Agent System for Condition Mining in Solving Complex Mathematical Problems}, author={Bin Lei and Yi Zhang and Shan Zuo and Ali Payani and Caiwen Ding}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VR2RdSxtzs} }
Recent advancements in large language models, such as GPT-4, have demonstrated remarkable capabilities in processing standard queries. Despite these advancements, their performance substantially declines in advanced mathematical problems requiring complex, multi-step logical reasoning. To enhance their inferential capabilities, current research has delved into prompt engineering, exemplified by methodologies such as the Tree of Thought and Graph of Thought. Nonetheless, these existing approaches encounter two significant limitations. Firstly, their effectiveness in tackling complex mathematical problems is somewhat constrained. Secondly, the necessity to design distinct prompts for individual problems hampers their generalizability. In response to these limitations, this paper introduces the Multi-Agent System for Condition Mining (MACM) prompting method. It not only resolves intricate mathematical problems but also demonstrates strong generalization capabilities across various mathematical contexts. With the assistance of MACM, the accuracy of GPT-4 Turbo on the most challenging level five mathematical problems in the MATH dataset increases from $\mathbf{54.68\\%} \text{ to } \mathbf{76.73\\%}$.
MACM: Utilizing a Multi-Agent System for Condition Mining in Solving Complex Mathematical Problems
[ "Bin Lei", "Yi Zhang", "Shan Zuo", "Ali Payani", "Caiwen Ding" ]
NeurIPS.cc/2024/Conference
2404.04735
[ "https://github.com/bin123apple/macm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VQyb9LKmUH
@inproceedings{ cui2024a, title={A Prompt-Based Knowledge Graph Foundation Model for Universal In-Context Reasoning}, author={Yuanning Cui and Zequn Sun and Wei Hu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VQyb9LKmUH} }
Extensive knowledge graphs (KGs) have been constructed to facilitate knowledge-driven tasks across various scenarios. However, existing work usually develops separate reasoning models for different KGs, lacking the ability to generalize and transfer knowledge across diverse KGs and reasoning settings. In this paper, we propose a prompt-based KG foundation model via in-context learning, namely KG-ICL, to achieve a universal reasoning ability. Specifically, we introduce a prompt graph centered with a query-related example fact as context to understand the query relation. To encode prompt graphs with the generalization ability to unseen entities and relations in queries, we first propose a unified tokenizer that maps entities and relations in prompt graphs to predefined tokens. Then, we propose two message passing neural networks to perform prompt encoding and KG reasoning, respectively. We conduct evaluation on 43 different KGs in both transductive and inductive settings. Results indicate that the proposed KG-ICL outperforms baselines on most datasets, showcasing its outstanding generalization and universal reasoning capabilities. The source code is accessible on GitHub: https://github.com/nju-websoft/KG-ICL.
A Prompt-Based Knowledge Graph Foundation Model for Universal In-Context Reasoning
[ "Yuanning Cui", "Zequn Sun", "Wei Hu" ]
NeurIPS.cc/2024/Conference
2410.12288
[ "https://github.com/nju-websoft/KG-ICL" ]
https://huggingface.co/papers/2410.12288
0
0
0
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=VPSx3n6ICE
@inproceedings{ xu2024privcirnet, title={PrivCirNet: Efficient Private Inference via Block Circulant Transformation}, author={Tianshi Xu and Lemeng Wu and Runsheng Wang and Meng Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VPSx3n6ICE} }
Homomorphic encryption (HE)-based deep neural network (DNN) inference protects data and model privacy but suffers from significant computation overhead. We observe that transforming the DNN weights into circulant matrices converts general matrix-vector multiplications into HE-friendly 1-dimensional convolutions, drastically reducing the HE computation cost. Hence, in this paper, we propose PrivCirNet, a protocol/network co-optimization framework based on block circulant transformation. At the protocol level, PrivCirNet customizes the HE encoding algorithm that is fully compatible with the block circulant transformation and reduces the computation latency in proportion to the block size. At the network level, we propose a latency-aware formulation to search for the layer-wise block size assignment based on second-order information. PrivCirNet also leverages layer fusion to further reduce the inference cost. We compare PrivCirNet with the state-of-the-art HE-based framework Bolt (IEEE S\&P 2024) and the HE-friendly pruning method SpENCNN (ICML 2023). For ResNet-18 and Vision Transformer (ViT) on Tiny ImageNet, PrivCirNet reduces latency by $5.0\times$ and $1.3\times$ with iso-accuracy over Bolt, respectively, and improves accuracy by $4.1$\% and $12$\% over SpENCNN, respectively. For MobileNetV2 on ImageNet, PrivCirNet achieves $1.7\times$ lower latency and $4.2$\% better accuracy over Bolt and SpENCNN, respectively. Our code and checkpoints are available on GitHub.
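The key identity the abstract relies on, that a circulant matrix-vector product equals a 1-dimensional circular convolution and can therefore be computed with FFTs, is standard and easy to verify numerically; the snippet below shows only this background fact, not the HE protocol or the co-optimization framework.

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(0)
c = rng.standard_normal(8)            # first column of a circulant block
x = rng.standard_normal(8)            # input vector

y_dense = circulant(c) @ x            # ordinary dense matrix-vector product

# Same result as a length-8 circular convolution, evaluated with FFTs:
y_fft = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

assert np.allclose(y_dense, y_fft)    # identical up to floating-point error
```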
PrivCirNet: Efficient Private Inference via Block Circulant Transformation
[ "Tianshi Xu", "Lemeng Wu", "Runsheng Wang", "Meng Li" ]
NeurIPS.cc/2024/Conference
2405.14569
[ "https://github.com/tianshi-xu/privcirnet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VOVyeOzZx0
@inproceedings{ polo2024weak, title={Weak Supervision Performance Evaluation via Partial Identification}, author={Felipe Maia Polo and Subha Maity and Mikhail Yurochkin and Moulinath Banerjee and Yuekai Sun}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VOVyeOzZx0} }
Programmatic Weak Supervision (PWS) enables supervised model training without direct access to ground truth labels, utilizing weak labels from heuristics, crowdsourcing, or pre-trained models. However, the absence of ground truth complicates model evaluation, as traditional metrics such as accuracy, precision, and recall cannot be directly calculated. In this work, we present a novel method to address this challenge by framing model evaluation as a partial identification problem and estimating performance bounds using Fréchet bounds. Our approach derives reliable bounds on key metrics without requiring labeled data, overcoming core limitations in current weak supervision evaluation techniques. Through scalable convex optimization, we obtain accurate and computationally efficient bounds for metrics including accuracy, precision, recall, and F1-score, even in high-dimensional settings. This framework offers a robust approach to assessing model quality without ground truth labels, enhancing the practicality of weakly supervised learning for real-world applications.
Weak Supervision Performance Evaluation via Partial Identification
[ "Felipe Maia Polo", "Subha Maity", "Mikhail Yurochkin", "Moulinath Banerjee", "Yuekai Sun" ]
NeurIPS.cc/2024/Conference
2312.04601
[ "https://github.com/felipemaiapolo/wsbounds" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VNmi0FHn6Z
@inproceedings{ sale2024the, title={The Bayesian sampling in a canonical recurrent circuit with a diversity of inhibitory interneurons}, author={Eryn Sale and Wenhao Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VNmi0FHn6Z} }
Accumulating evidence suggests that stochastic cortical circuits can perform sampling-based Bayesian inference to compute the latent stimulus posterior. Canonical cortical circuits consist of excitatory (E) neurons and several types of inhibitory (I) interneurons. Nevertheless, nearly no sampling neural circuit models consider the diversity of interneurons, and thus how interneurons contribute to sampling remains poorly understood. To provide theoretical insight, we build a nonlinear canonical circuit model consisting of recurrently connected E neurons and two types of I neurons, namely Parvalbumin (PV) and Somatostatin (SOM) neurons. The E neurons are modeled as a canonical ring (attractor) model, receiving global inhibition from PV neurons and local, tuning-dependent inhibition from SOM neurons. We theoretically analyze the nonlinear circuit dynamics and analytically identify the Bayesian sampling algorithm performed by the circuit dynamics. We find that a reduced circuit with only E and PV neurons performs Langevin sampling, and that the inclusion of SOM neurons with tuning-dependent inhibition speeds up the sampling by upgrading Langevin sampling to Hamiltonian sampling. Moreover, the Hamiltonian framework requires SOM neurons to receive no direct feedforward connections, consistent with neuroanatomy. Our work provides overarching connections between nonlinear circuits with various types of interneurons and sampling algorithms, deepening our understanding of the circuit implementation of Bayesian inference.
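For readers unfamiliar with the term, the (unadjusted) Langevin sampling referred to above is sketched below for a generic target density; this is the textbook algorithm, not a model of the E/PV/SOM circuit itself.

```python
import numpy as np

def langevin_sampling(grad_log_p, x0, step=1e-2, n_steps=20000, seed=0):
    """Unadjusted Langevin dynamics: x <- x + step * grad_log_p(x) + sqrt(2*step) * noise.
    For small step sizes, the iterates approximately sample from the target density p."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps,) + x.shape)
    for i in range(n_steps):
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        samples[i] = x
    return samples

# Example: a 1D standard Gaussian has score grad_log_p(x) = -x.
s = langevin_sampling(lambda x: -x, x0=[0.0])
print(s.mean(), s.var())   # should be close to 0 and 1, respectively
```

Hamiltonian variants add an auxiliary momentum variable, which is the standard reason they can mix faster than Langevin dynamics and matches the speed-up the abstract attributes to SOM neurons.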
The Bayesian sampling in a canonical recurrent circuit with a diversity of inhibitory interneurons
[ "Eryn Sale", "Wenhao Zhang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VNbQbv658b
@inproceedings{ zhang2024covomix, title={CoVoMix: Advancing Zero-Shot Speech Generation for Human-like Multi-talker Conversations}, author={leying zhang and Yao Qian and Long Zhou and Shujie LIU and Dongmei Wang and Xiaofei Wang and Midia Yousefi and Yanmin Qian and Jinyu Li and Lei He and sheng zhao and Michael Zeng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VNbQbv658b} }
Recent advancements in zero-shot text-to-speech (TTS) modeling have led to significant strides in generating high-fidelity and diverse speech. However, dialogue generation, along with achieving human-like naturalness in speech, continues to be a challenge. In this paper, we introduce CoVoMix: Conversational Voice Mixture Generation, a novel model for zero-shot, human-like, multi-speaker, multi-round dialogue speech generation. CoVoMix first converts dialogue text into multiple streams of discrete tokens, with each token stream representing semantic information for individual talkers. These token streams are then fed into a flow-matching based acoustic model to generate mixed mel-spectrograms. Finally, the speech waveforms are produced using a HiFi-GAN model. Furthermore, we devise a comprehensive set of metrics for measuring the effectiveness of dialogue modeling and generation. Our experimental results show that CoVoMix can generate dialogues that are not only human-like in their naturalness and coherence but also involve multiple talkers engaging in multiple rounds of conversation. This is exemplified by instances generated in a single channel where one speaker's utterance is seamlessly mixed with another's interjections or laughter, indicating the latter's role as an attentive listener. Audio samples are enclosed in the supplementary.
CoVoMix: Advancing Zero-Shot Speech Generation for Human-like Multi-talker Conversations
[ "leying zhang", "Yao Qian", "Long Zhou", "Shujie LIU", "Dongmei Wang", "Xiaofei Wang", "Midia Yousefi", "Yanmin Qian", "Jinyu Li", "Lei He", "sheng zhao", "Michael Zeng" ]
NeurIPS.cc/2024/Conference
2404.06690
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VNBIF0gmkb
@inproceedings{ li2024autoregressive, title={Autoregressive Image Generation without Vector Quantization}, author={Tianhong Li and Yonglong Tian and He Li and Mingyang Deng and Kaiming He}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VNBIF0gmkb} }
Conventional wisdom holds that autoregressive models for image generation are typically accompanied by vector-quantized tokens. We observe that while a discrete-valued space can facilitate representing a categorical distribution, it is not a necessity for autoregressive modeling. In this work, we propose to model the per-token probability distribution using a diffusion procedure, which allows us to apply autoregressive models in a continuous-valued space. Rather than using categorical cross-entropy loss, we define a Diffusion Loss function to model the per-token probability. This approach eliminates the need for discrete-valued tokenizers. We evaluate its effectiveness across a wide range of cases, including standard autoregressive models and generalized masked autoregressive (MAR) variants. By removing vector quantization, our image generator achieves strong results while enjoying the speed advantage of sequence modeling. We hope this work will motivate the use of autoregressive generation in other continuous-valued domains and applications. Code is available at [https://github.com/LTH14/mar](https://github.com/LTH14/mar).
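A minimal sketch of a per-token "Diffusion Loss" in the sense described above is given below: a small network predicts the noise added to a continuous-valued token, conditioned on the autoregressive context vector. The shapes, noise schedule, and network are illustrative assumptions, not the released MAR code.

```python
import torch
import torch.nn as nn

class DiffusionLossHead(nn.Module):
    """Sketch: DDPM-style noise-prediction loss for a continuous token x0, conditioned
    on an autoregressive context vector z (assumed shapes and noise schedule)."""
    def __init__(self, token_dim=16, cond_dim=64, hidden=256, n_steps=1000):
        super().__init__()
        self.n_steps = n_steps
        betas = torch.linspace(1e-4, 0.02, n_steps)
        self.register_buffer("alpha_bar", torch.cumprod(1.0 - betas, dim=0))
        self.net = nn.Sequential(
            nn.Linear(token_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, token_dim),
        )

    def forward(self, x0, z):
        t = torch.randint(0, self.n_steps, (x0.size(0),), device=x0.device)
        ab = self.alpha_bar[t].unsqueeze(-1)
        eps = torch.randn_like(x0)
        xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps        # noised continuous token
        t_feat = (t.float() / self.n_steps).unsqueeze(-1)    # crude timestep embedding
        pred = self.net(torch.cat([xt, z, t_feat], dim=-1))
        return nn.functional.mse_loss(pred, eps)             # per-token diffusion loss

# Usage sketch: loss = DiffusionLossHead()(x0=continuous_tokens, z=ar_context_vectors)
```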
Autoregressive Image Generation without Vector Quantization
[ "Tianhong Li", "Yonglong Tian", "He Li", "Mingyang Deng", "Kaiming He" ]
NeurIPS.cc/2024/Conference
2406.11838
[ "https://github.com/lth14/mar" ]
https://huggingface.co/papers/2406.11838
2
2
0
5
[ "jadechoghari/mar" ]
[]
[ "jadechoghari/mar" ]
[ "jadechoghari/mar" ]
[]
[ "jadechoghari/mar" ]
1
oral
null
https://openreview.net/forum?id=VMsHnv8cVs
@inproceedings{ ghanem2024learning, title={Learning Better Representations From Less Data For Propositional Satisfiability}, author={Mohamed Ghanem and Frederik Schmitt and Julian Siber and Bernd Finkbeiner}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VMsHnv8cVs} }
Training neural networks on NP-complete problems typically demands very large amounts of training data and often needs to be coupled with computationally expensive symbolic verifiers to ensure output correctness. In this paper, we present NeuRes, a neuro-symbolic approach to address both challenges for propositional satisfiability, being the quintessential NP-complete problem. By combining certificate-driven training and expert iteration, our model learns better representations than models trained for classification only, with a much higher data efficiency -- requiring orders of magnitude less training data. NeuRes employs propositional resolution as a proof system to generate proofs of unsatisfiability and to accelerate the process of finding satisfying truth assignments, exploring both possibilities in parallel. To realize this, we propose an attention-based architecture that autoregressively selects pairs of clauses from a dynamic formula embedding to derive new clauses. Furthermore, we employ expert iteration whereby model-generated proofs progressively replace longer teacher proofs as the new ground truth. This enables our model to reduce a dataset of proofs generated by an advanced solver by $\sim$$32$% after training on it with no extra guidance. This shows that NeuRes is not limited by the optimality of the teacher algorithm owing to its self-improving workflow. We show that our model achieves far better performance than NeuroSAT in terms of both correctly classified and proven instances.
Learning Better Representations From Less Data For Propositional Satisfiability
[ "Mohamed Ghanem", "Frederik Schmitt", "Julian Siber", "Bernd Finkbeiner" ]
NeurIPS.cc/2024/Conference
2402.08365
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=VMiLdBkCJM
@inproceedings{ he2024towards, title={Towards Combating Frequency Simplicity-biased Learning for Domain Generalization}, author={Xilin He and Jingyu Hu and Qinliang Lin and Cheng Luo and Weicheng Xie and Siyang Song and Muhammad Haris Khan and Linlin Shen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VMiLdBkCJM} }
Domain generalization methods aim to learn transferable knowledge from source domains that can generalize well to unseen target domains. Recent studies show that neural networks frequently suffer from a simplicity-biased learning behavior that leads to over-reliance on specific frequency sets, known as frequency shortcuts, instead of semantic information, resulting in poor generalization performance. Although previous data augmentation techniques successfully enhance generalization performance, they tend to introduce more frequency shortcuts, thereby creating an illusion of generalization improvement. In this paper, we aim to prevent this frequency-shortcut learning behavior from a data-driven perspective. Given the theoretical justification of models' biased learning behavior on different spatial frequency components, which is based on the dataset frequency properties, we argue that the learning behavior on various frequency components could be manipulated by changing the dataset statistical structure in the Fourier domain. Intuitively, as frequency shortcuts are hidden in the dominant and highly dependent frequencies of the dataset structure, dynamically perturbing the over-relied-on frequency components could prevent the application of frequency shortcuts. To this end, we propose two effective data augmentation modules designed to collaboratively and adaptively adjust the frequency characteristics of the dataset, aiming to dynamically influence the learning behavior of the model and ultimately serving as a strategy to mitigate shortcut learning. Our code will be made publicly available.
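A generic Fourier-domain augmentation in the spirit described above is sketched below: it randomly rescales the amplitude spectrum of an image while preserving the phase, so a model cannot lock onto a fixed set of dominant frequencies. The paper's two modules are collaborative and adaptive, which this simple stand-in does not capture.

```python
import numpy as np

def perturb_amplitude_spectrum(img: np.ndarray, strength: float = 0.3, seed=None):
    """Randomly jitter the Fourier amplitude of an (H, W) or (H, W, C) image while
    keeping its phase. A simple illustrative stand-in, not the paper's modules."""
    rng = np.random.default_rng(seed)
    spec = np.fft.fft2(img, axes=(0, 1))
    amp, phase = np.abs(spec), np.angle(spec)
    jitter = 1.0 + strength * rng.uniform(-1.0, 1.0, size=amp.shape)
    return np.real(np.fft.ifft2(jitter * amp * np.exp(1j * phase), axes=(0, 1)))
```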
Towards Combating Frequency Simplicity-biased Learning for Domain Generalization
[ "Xilin He", "Jingyu Hu", "Qinliang Lin", "Cheng Luo", "Weicheng Xie", "Siyang Song", "Muhammad Haris Khan", "Linlin Shen" ]
NeurIPS.cc/2024/Conference
2410.16146
[ "https://github.com/c0notsilly/advfrequency" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VLw8ZyKfcm
@inproceedings{ wang2024latent, title={Latent Neural Operator for Solving Forward and Inverse {PDE} Problems}, author={Tian Wang and Chuang Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VLw8ZyKfcm} }
Neural operators, which learn the map from input sequences of observed samples to predicted values, effectively solve PDE problems from data without knowing the explicit equations. Most existing works build the model in the original geometric space, leading to high computational costs when the number of sample points is large. We present the Latent Neural Operator (LNO), which solves PDEs in the latent space. In particular, we first propose Physics-Cross-Attention (PhCA), which transforms representations from the geometric space to the latent space; we then learn the operator in the latent space, and finally recover the real-world geometric space via the inverse PhCA map. Our model retains the flexibility to decode values at any position, not limited to the locations defined in the training set, and can therefore naturally perform interpolation and extrapolation tasks that are particularly useful for inverse problems. Moreover, the proposed LNO improves both prediction accuracy and computational efficiency. Experiments show that LNO reduces GPU memory by 50%, speeds up training by 1.8 times, and reaches state-of-the-art accuracy on four out of six benchmarks for forward problems and on a benchmark for the inverse problem. Code is available at https://github.com/L-I-M-I-T/LatentNeuralOperator.
Latent Neural Operator for Solving Forward and Inverse PDE Problems
[ "Tian Wang", "Chuang Wang" ]
NeurIPS.cc/2024/Conference
2406.03923
[ "https://github.com/l-i-m-i-t/latentneuraloperator" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VLQYtVMTYz
@inproceedings{ hofmann2024energybased, title={Energy-based Hopfield Boosting for Out-of-Distribution Detection}, author={Claus Hofmann and Simon Lucas Schmid and Bernhard Lehner and Daniel Klotz and Sepp Hochreiter}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VLQYtVMTYz} }
Out-of-distribution (OOD) detection is critical when deploying machine learning models in the real world. Outlier exposure methods, which incorporate auxiliary outlier data in the training process, can drastically improve OOD detection performance compared to approaches without advanced training strategies. We introduce Hopfield Boosting, a boosting approach, which leverages modern Hopfield energy to sharpen the decision boundary between the in-distribution and OOD data. Hopfield Boosting encourages the model to focus on hard-to-distinguish auxiliary outlier examples that lie close to the decision boundary between in-distribution and auxiliary outlier data. Our method achieves a new state-of-the-art in OOD detection with outlier exposure, improving the FPR95 from 2.28 to 0.92 on CIFAR-10, from 11.76 to 7.94 on CIFAR-100, and from 50.74 to 36.60 on ImageNet-1K.
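The "modern Hopfield energy" the abstract builds on is, up to additive constants, a log-sum-exp energy of a query state against the stored patterns; a minimal sketch is given below for reference only, and how Hopfield Boosting uses it to weight auxiliary outliers is not shown.

```python
import numpy as np

def modern_hopfield_energy(xi: np.ndarray, X: np.ndarray, beta: float = 1.0) -> float:
    """Modern Hopfield energy of query state xi given stored patterns X of shape (d, N),
    dropping additive constants that do not depend on xi:
        E(xi) = -(1/beta) * logsumexp(beta * X.T @ xi) + 0.5 * ||xi||^2
    Shown only to make the term 'Hopfield energy' concrete."""
    scores = beta * (X.T @ xi)
    m = scores.max()
    lse = (m + np.log(np.exp(scores - m).sum())) / beta
    return float(-lse + 0.5 * xi @ xi)
```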
Energy-based Hopfield Boosting for Out-of-Distribution Detection
[ "Claus Hofmann", "Simon Lucas Schmid", "Bernhard Lehner", "Daniel Klotz", "Sepp Hochreiter" ]
NeurIPS.cc/2024/Conference
2405.08766
[ "https://github.com/ml-jku/hopfield-boosting" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VKt0K3iOmO
@inproceedings{ sun2024spiking, title={Spiking Graph Neural Network on Riemannian Manifolds}, author={Li Sun and Zhenhao Huang and Qiqi Wan and Hao Peng and Philip S. Yu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VKt0K3iOmO} }
Graph neural networks (GNNs) have become the dominant solution for learning on graphs, the typical non-Euclidean structures. Conventional GNNs, constructed with the Artificial Neural Network (ANN), have achieved impressive performance at the cost of high computation and energy consumption. In parallel, spiking GNNs with brain-like spiking neurons are drawing increasing research attention owing to their energy efficiency. So far, existing spiking GNNs consider graphs in Euclidean space, ignoring the structural geometry, and suffer from the high latency issue due to Back-Propagation-Through-Time (BPTT) with the surrogate gradient. In light of the aforementioned issues, we are devoted to exploring spiking GNNs on Riemannian manifolds, and present a Manifold-valued Spiking GNN (MSG). In particular, we design a new spiking neuron on geodesically complete manifolds with the diffeomorphism, so that BPTT regarding the spikes is replaced by the proposed differentiation via the manifold. Theoretically, we show that MSG approximates a solver of the manifold ordinary differential equation. Extensive experiments on common graphs show that the proposed MSG achieves superior performance to previous spiking GNNs and better energy efficiency than conventional GNNs.
Spiking Graph Neural Network on Riemannian Manifolds
[ "Li Sun", "Zhenhao Huang", "Qiqi Wan", "Hao Peng", "Philip S. Yu" ]
NeurIPS.cc/2024/Conference
2410.17941
[ "https://github.com/ZhenhHuang/MSG" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VKKY3Uv7vi
@inproceedings{ pan2024bpqp, title={{BPQP}: A Differentiable Convex Optimization Framework for Efficient End-to-End Learning}, author={Jianming Pan and Zeqi Ye and Xiao Yang and Xu Yang and Weiqing Liu and Lewen Wang and Jiang Bian}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VKKY3Uv7vi} }
Data-driven decision-making processes increasingly utilize end-to-end learnable deep neural networks to render final decisions. Sometimes, the output of the forward functions in certain layers is determined by the solutions to mathematical optimization problems, leading to the emergence of differentiable optimization layers that permit gradient back-propagation. However, real-world scenarios often involve large-scale datasets and numerous constraints, presenting significant challenges. Current methods for differentiating optimization problems typically rely on implicit differentiation, which necessitates costly computations on the Jacobian matrices, resulting in low efficiency. In this paper, we introduce BPQP, a differentiable convex optimization framework designed for efficient end-to-end learning. To enhance efficiency, we reformulate the backward pass as a simplified and decoupled quadratic programming problem by leveraging the structural properties of the Karush–Kuhn–Tucker (KKT) matrix. This reformulation enables the use of first-order optimization algorithms in calculating the backward pass gradients, allowing our framework to potentially utilize any state-of-the-art solver. As solver technologies evolve, BPQP can continuously adapt and improve its efficiency. Extensive experiments on both simulated and real-world datasets demonstrate that BPQP achieves a significant improvement in efficiency—typically an order of magnitude faster in overall execution time compared to other differentiable optimization layers. Our results not only highlight the efficiency gains of BPQP but also underscore its superiority over differential optimization layer baselines.
BPQP: A Differentiable Convex Optimization Framework for Efficient End-to-End Learning
[ "Jianming Pan", "Zeqi Ye", "Xiao Yang", "Xu Yang", "Weiqing Liu", "Lewen Wang", "Jiang Bian" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=VJMYOfJVC2
@inproceedings{ wang2024wise, title={{WISE}: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models}, author={Peng Wang and Zexi Li and Ningyu Zhang and Ziwen Xu and Yunzhi Yao and Yong Jiang and Pengjun Xie and Fei Huang and Huajun Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VJMYOfJVC2} }
Large language models (LLMs) need knowledge updates to meet the ever-growing world facts and correct the hallucinated responses, facilitating the methods of lifelong model editing. Where the updated knowledge resides in memories is a fundamental question for model editing. In this paper, we find that editing either long-term memory (direct model parameters) or working memory (non-parametric knowledge of neural network activations/representations by retrieval) will result in an impossible triangle---reliability, generalization, and locality can not be realized together in the lifelong editing settings. For long-term memory, directly editing the parameters will cause conflicts with irrelevant pretrained knowledge or previous edits (poor reliability and locality). For working memory, retrieval-based activations can hardly make the model understand the edits and generalize (poor generalization). Therefore, we propose WISE to bridge the gap between memories. In WISE, we design a dual parametric memory scheme, which consists of the main memory for the pretrained knowledge and a side memory for the edited knowledge. We only edit the knowledge in the side memory and train a router to decide which memory to go through when given a query. For continual editing, we devise a knowledge-sharding mechanism where different sets of edits reside in distinct subspaces of parameters, and are subsequently merged into a shared memory without conflicts. Extensive experiments show that WISE can outperform previous model editing methods and overcome the impossible triangle under lifelong model editing of question answering, hallucination, and out-of-distribution settings across trending LLM architectures, e.g., GPT, LLaMA, and Mistral.
WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models
[ "Peng Wang", "Zexi Li", "Ningyu Zhang", "Ziwen Xu", "Yunzhi Yao", "Yong Jiang", "Pengjun Xie", "Fei Huang", "Huajun Chen" ]
NeurIPS.cc/2024/Conference
2405.14768
[ "https://github.com/zjunlp/easyedit" ]
https://huggingface.co/papers/2405.14768
1
1
0
9
[]
[]
[ "zjunlp/EasyEdit" ]
[]
[]
[ "zjunlp/EasyEdit" ]
1
poster
null
https://openreview.net/forum?id=VIqQSFNjyP
@inproceedings{ zhao2024connectivitydriven, title={Connectivity-Driven Pseudo-Labeling Makes Stronger Cross-Domain Segmenters}, author={Dong Zhao and Qi Zang and Shuang Wang and Nicu Sebe and Zhun Zhong}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VIqQSFNjyP} }
Presently, pseudo-labeling stands as a prevailing approach in cross-domain semantic segmentation, enhancing model efficacy by training with pixels assigned with reliable pseudo-labels. However, we identify two key limitations within this paradigm: (1) under relatively severe domain shifts, most selected reliable pixels appear speckled and remain noisy. (2) when dealing with wild data, some pixels belonging to the open-set class may exhibit high confidence and also appear speckled. These two points make it difficult for the pixel-level selection mechanism to identify and correct these speckled close- and open-set noises. As a result, error accumulation is continuously introduced into subsequent self-training, leading to inefficiencies in pseudo-labeling. To address these limitations, we propose a novel method called Semantic Connectivity-driven Pseudo-labeling (SeCo). SeCo formulates pseudo-labels at the connectivity level, which makes it easier to locate and correct closed and open set noise. Specifically, SeCo comprises two key components: Pixel Semantic Aggregation (PSA) and Semantic Connectivity Correction (SCC). Initially, PSA categorizes semantics into ``stuff'' and ``things'' categories and aggregates speckled pseudo-labels into semantic connectivity through efficient interaction with the Segment Anything Model (SAM). This enables us not only to obtain accurate boundaries but also simplifies noise localization. Subsequently, SCC introduces a simple connectivity classification task, which enables us to locate and correct connectivity noise with the guidance of loss distribution. Extensive experiments demonstrate that SeCo can be flexibly applied to various cross-domain semantic segmentation tasks, \textit{i.e.} domain generalization and domain adaptation, even including source-free, and black-box domain adaptation, significantly improving the performance of existing state-of-the-art methods. The code is provided in the appendix and will be open-source.
Connectivity-Driven Pseudo-Labeling Makes Stronger Cross-Domain Segmenters
[ "Dong Zhao", "Qi Zang", "Shuang Wang", "Nicu Sebe", "Zhun Zhong" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VIlyDguGEz
@inproceedings{ yang2024learning, title={Learning Where to Edit Vision Transformers}, author={Yunqiao Yang and Long-Kai Huang and Shengzhuang Chen and Kede Ma and Ying Wei}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VIlyDguGEz} }
Model editing aims to data-efficiently correct predictive errors of large pre-trained models while ensuring generalization to neighboring failures and locality to minimize unintended effects on unrelated examples. While significant progress has been made in editing Transformer-based large language models, effective strategies for editing vision Transformers (ViTs) in computer vision remain largely untapped. In this paper, we take initial steps towards correcting predictive errors of ViTs, particularly those arising from subpopulation shifts. Taking a locate-then-edit approach, we first address the ``where-to-edit`` challenge by meta-learning a hypernetwork on CutMix-augmented data generated for editing reliability. This trained hypernetwork produces generalizable binary masks that identify a sparse subset of structured model parameters, responsive to real-world failure samples. Afterward, we solve the ``how-to-edit`` problem by simply fine-tuning the identified parameters using a variant of gradient descent to achieve successful edits. To validate our method, we construct an editing benchmark that introduces subpopulation shifts towards natural underrepresented images and AI-generated images, thereby revealing the limitations of pre-trained ViTs for object recognition. Our approach not only achieves superior performance on the proposed benchmark but also allows for adjustable trade-offs between generalization and locality. Our code is available at https://github.com/hustyyq/Where-to-Edit.
Learning Where to Edit Vision Transformers
[ "Yunqiao Yang", "Long-Kai Huang", "Shengzhuang Chen", "Kede Ma", "Ying Wei" ]
NeurIPS.cc/2024/Conference
2411.01948
[ "https://github.com/hustyyq/where-to-edit" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VHva3d836i
@inproceedings{ luo2024wizardarena, title={WizardArena: Post-training Large Language Models via Simulated Offline Chatbot Arena}, author={Haipeng Luo and Qingfeng Sun and Can Xu and Pu Zhao and Qingwei Lin and Jian-Guang Lou and Shifeng Chen and Yansong Tang and Weizhu Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VHva3d836i} }
Recent work demonstrates that post-training large language models with open-domain instruction-following data has achieved colossal success. Simultaneously, the human Chatbot Arena has emerged as one of the most reasonable benchmarks for model evaluation and developmental guidance. However, the processes of manually curating high-quality training data and utilizing online human evaluation platforms are both expensive and limited. To mitigate the manual and temporal costs associated with post-training, this paper introduces a Simulated Chatbot Arena named WizardArena, which is fully based on and powered by open-source LLMs. For the evaluation scenario, WizardArena can efficiently predict accurate performance rankings among different models based on an offline test set. For the training scenario, we simulate arena battles among various state-of-the-art models on a large scale of instruction data, subsequently leveraging the battle results to constantly enhance the target model in both the supervised fine-tuning and reinforcement learning stages. Experimental results demonstrate that our WizardArena aligns closely with the online human arena rankings, and our models trained on extensive offline battle data exhibit significant performance improvements during the SFT, DPO, and PPO stages.
WizardArena: Post-training Large Language Models via Simulated Offline Chatbot Arena
[ "Haipeng Luo", "Qingfeng Sun", "Can Xu", "Pu Zhao", "Qingwei Lin", "Jian-Guang Lou", "Shifeng Chen", "Yansong Tang", "Weizhu Chen" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VFqzxhINFU
@inproceedings{ zhou2024storydiffusion, title={StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation}, author={Yupeng Zhou and Daquan Zhou and Ming-Ming Cheng and Jiashi Feng and Qibin Hou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VFqzxhINFU} }
For recent diffusion-based generative models, maintaining consistent content across a series of generated images, especially those containing subjects and complex details, presents a significant challenge. In this paper, we propose a simple but effective self-attention mechanism, termed Consistent Self-Attention, that boosts the consistency between the generated images. It can be used to augment pre-trained diffusion-based text-to-image models in a zero-shot manner. Based on the images with consistent content, we further show that our method can be extended to long range video generation by introducing a semantic space temporal motion prediction module, named Semantic Motion Predictor. It is trained to estimate the motion conditions between two provided images in the semantic spaces. This module converts the generated sequence of images into videos with smooth transitions and consistent subjects that are more stable than the modules based on latent spaces only, especially in the context of long video generation. By merging these two novel components, our framework, referred to as StoryDiffusion, can describe a text-based story with consistent images or videos encompassing a rich variety of contents. The proposed StoryDiffusion encompasses pioneering explorations in visual story generation with the presentation of images and videos, which we hope could inspire more research from the aspect of architectural modifications.
StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation
[ "Yupeng Zhou", "Daquan Zhou", "Ming-Ming Cheng", "Jiashi Feng", "Qibin Hou" ]
NeurIPS.cc/2024/Conference
2405.01434
[ "https://github.com/hvision-nku/storydiffusion" ]
https://huggingface.co/papers/2405.01434
3
52
3
5
[]
[]
[ "benskibenski/JingleSharkStories", "jasoncharles/StoryDiffusion", "mberke11/content", "shawn642/StoryDiffusion-main", "mberke11/story", "FlexTheAi/Flexstorydiff" ]
[]
[]
[ "benskibenski/JingleSharkStories", "jasoncharles/StoryDiffusion", "mberke11/content", "shawn642/StoryDiffusion-main", "mberke11/story", "FlexTheAi/Flexstorydiff" ]
1
oral
null
https://openreview.net/forum?id=VFpXYBqMSU
@inproceedings{ chen2024slight, title={Slight Corruption in Pre-training Data Makes Better Diffusion Models}, author={Hao Chen and Yujin Han and Diganta Misra and Xiang Li and Kai Hu and Difan Zou and Masashi Sugiyama and Jindong Wang and Bhiksha Raj}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VFpXYBqMSU} }
Diffusion models (DMs) have shown remarkable capabilities in generating realistic high-quality images, audios, and videos. They benefit significantly from extensive pre-training on large-scale datasets, including web-crawled data with paired data and conditions, such as image-text and image-class pairs. Despite rigorous filtering, these pre-training datasets often inevitably contain corrupted pairs where conditions do not accurately describe the data. This paper presents the first comprehensive study on the impact of such corruption in pre-training data of DMs. We synthetically corrupt ImageNet-1K and CC3M to pre-train and evaluate over $50$ conditional DMs. Our empirical findings reveal that various types of slight corruption in pre-training can significantly enhance the quality, diversity, and fidelity of the generated images across different DMs, both during pre-training and downstream adaptation stages. Theoretically, we consider a Gaussian mixture model and prove that slight corruption in the condition leads to higher entropy and a reduced 2-Wasserstein distance to the ground truth of the data distribution generated by the corruptly trained DMs. Inspired by our analysis, we propose a simple method to improve the training of DMs on practical datasets by adding condition embedding perturbations (CEP). CEP significantly improves the performance of various DMs in both pre-training and downstream tasks. We hope that our study provides new insights into understanding the data and pre-training processes of DMs.
Slight Corruption in Pre-training Data Makes Better Diffusion Models
[ "Hao Chen", "Yujin Han", "Diganta Misra", "Xiang Li", "Kai Hu", "Difan Zou", "Masashi Sugiyama", "Jindong Wang", "Bhiksha Raj" ]
NeurIPS.cc/2024/Conference
2405.20494
[ "" ]
https://huggingface.co/papers/2405.20494
0
0
0
9
[ "panopstor/condition_embedding_perturbation" ]
[]
[]
[ "panopstor/condition_embedding_perturbation" ]
[]
[]
1
oral
null
https://openreview.net/forum?id=VFRyS7Wx08
@inproceedings{ zhou2024rethinking, title={Rethinking Inverse Reinforcement Learning: from Data Alignment to Task Alignment}, author={Weichao Zhou and Wenchao Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VFRyS7Wx08} }
Many imitation learning (IL) algorithms use inverse reinforcement learning (IRL) to infer a reward function that aligns with the demonstration. However, the inferred reward functions often fail to capture the underlying task objectives. In this paper, we propose a novel framework for IRL-based IL that prioritizes task alignment over conventional data alignment. Our framework is a semi-supervised approach that leverages expert demonstrations as weak supervision to derive a set of candidate reward functions that align with the task rather than only with the data. It then adopts an adversarial mechanism to train a policy with this set of reward functions to gain a collective validation of the policy's ability to accomplish the task. We provide theoretical insights into this framework's ability to mitigate task-reward misalignment and present a practical implementation. Our experimental results show that our framework outperforms conventional IL baselines in complex and transfer learning scenarios.
Rethinking Inverse Reinforcement Learning: from Data Alignment to Task Alignment
[ "Weichao Zhou", "Wenchao Li" ]
NeurIPS.cc/2024/Conference
2410.23680
[ "https://github.com/zwc662/PAGAR" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VDPZe0NbpE
@inproceedings{ zimmert2024productive, title={{PROD}uctive bandits: Importance Weighting No More}, author={Julian Zimmert and Teodor Vanislavov Marinov}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=VDPZe0NbpE} }
Prod is a seminal algorithm in full-information online learning, which has been conjectured to be fundamentally sub-optimal for multi-armed bandits. By leveraging the interpretation of Prod as a first-order OMD approximation, we present the following surprising results: 1. Variants of Prod can obtain optimal regret for adversarial multi-armed bandits. 2. There exists a simple and (arguably) importance-weighting-free variant with optimal rate. 3. One can even achieve best-of-both-worlds guarantees with logarithmic regret in the stochastic regime. The bandit algorithms in this work use simple arithmetic update rules without the need to solve the optimization problems typical in prior work. Finally, the results directly improve the state of the art of incentive-compatible bandits.
PRODuctive bandits: Importance Weighting No More
[ "Julian Zimmert", "Teodor Vanislavov Marinov" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=V8HVsyTSu6
@inproceedings{ wang2024outlierrobust, title={Outlier-Robust Distributionally Robust Optimization via Unbalanced Optimal Transport}, author={Zifan Wang and Yi Shen and Michael M. Zavlanos and Karl Henrik Johansson}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=V8HVsyTSu6} }
Distributionally Robust Optimization (DRO) accounts for uncertainty in data distributions by optimizing the model performance against the worst possible distribution within an ambiguity set. In this paper, we propose a DRO framework that relies on a new distance inspired by Unbalanced Optimal Transport (UOT). The proposed UOT distance employs a soft penalization term instead of hard constraints, enabling the construction of an ambiguity set that is more resilient to outliers. Under smoothness conditions, we establish strong duality of the proposed DRO problem. Moreover, we introduce a computationally efficient Lagrangian penalty formulation for which we show that strong duality also holds. Finally, we provide empirical results that demonstrate that our method offers improved robustness to outliers and is computationally less demanding for regression and classification tasks.
Outlier-Robust Distributionally Robust Optimization via Unbalanced Optimal Transport
[ "Zifan Wang", "Yi Shen", "Michael M. Zavlanos", "Karl Henrik Johansson" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=V75gAxpW40
@inproceedings{ xie2024gradientvariation, title={Gradient-Variation Online Learning under Generalized Smoothness}, author={Yan-Feng Xie and Peng Zhao and Zhi-Hua Zhou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=V75gAxpW40} }
Gradient-variation online learning aims to achieve regret guarantees that scale with variations in the gradients of online functions, which is crucial for attaining fast convergence in games and robustness in stochastic optimization, hence receiving increased attention. Existing results often require the smoothness condition by imposing a fixed bound on gradient Lipschitzness, which may be unrealistic in practice. Recent efforts in neural network optimization suggest a generalized smoothness condition, allowing smoothness to correlate with gradient norms. In this paper, we systematically study gradient-variation online learning under generalized smoothness. We extend the classic optimistic mirror descent algorithm to derive gradient-variation regret by analyzing stability over the optimization trajectory and exploiting smoothness locally. Then, we explore universal online learning, designing a single algorithm with the optimal gradient-variation regrets for convex and strongly convex functions simultaneously, without requiring prior knowledge of curvature. This algorithm adopts a two-layer structure with a meta-algorithm running over a group of base-learners. To ensure favorable guarantees, we design a new Lipschitz-adaptive meta-algorithm, capable of handling potentially unbounded gradients while ensuring a second-order bound to effectively ensemble the base-learners. Finally, we provide the applications for fast-rate convergence in games and stochastic extended adversarial optimization.
Gradient-Variation Online Learning under Generalized Smoothness
[ "Yan-Feng Xie", "Peng Zhao", "Zhi-Hua Zhou" ]
NeurIPS.cc/2024/Conference
2408.09074
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=V6w7keoTqn
@inproceedings{ qiu2024emvp, title={{EMVP}: Embracing Visual Foundation Model for Visual Place Recognition with Centroid-Free Probing}, author={Qibo Qiu and Shun Zhang and Haiming Gao and Honghui Yang and Haochao Ying and Wenxiao Wang and Xiaofei He}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=V6w7keoTqn} }
Visual Place Recognition (VPR) is essential for mobile robots as it enables them to retrieve images from a database closest to their current location. The progress of Visual Foundation Models (VFMs) has significantly advanced VPR by capturing representative descriptors in images. However, existing fine-tuning efforts for VFMs often overlook the crucial role of probing in effectively adapting these descriptors for improved image representation. In this paper, we propose the Centroid-Free Probing (CFP) stage, making novel use of second-order features for more effective use of descriptors from VFMs. Moreover, to control the preservation of task-specific information adaptively based on the context of the VPR, we introduce the Dynamic Power Normalization (DPN) module in both the recalibration and CFP stages, forming a novel Parameter Efficiency Fine-Tuning (PEFT) pipeline (EMVP) tailored for the VPR task. Extensive experiments demonstrate the superiority of the proposed CFP over existing probing methods. Moreover, the EMVP pipeline can further enhance fine-tuning performance in terms of accuracy and efficiency. Specifically, it achieves 93.9\%, 96.5\%, and 94.6\% Recall@1 on the MSLS Validation, Pitts250k-test, and SPED datasets, respectively, while saving 64.3\% of trainable parameters compared with the existing SOTA PEFT method.
EMVP: Embracing Visual Foundation Model for Visual Place Recognition with Centroid-Free Probing
[ "Qibo Qiu", "Shun Zhang", "Haiming Gao", "Honghui Yang", "Haochao Ying", "Wenxiao Wang", "Xiaofei He" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=V6qdb1AgsM
@inproceedings{ andersson2024continual, title={Continual Counting with Gradual Privacy Expiration}, author={Joel Daniel Andersson and Monika Henzinger and Rasmus Pagh and Teresa Anna Steiner and Jalaj Upadhyay}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=V6qdb1AgsM} }
Differential privacy with gradual expiration models the setting where data items arrive in a stream and at a given time $t$ the privacy loss guaranteed for a data item seen at time $(t-d)$ is $\epsilon g(d)$, where $g$ is a monotonically non-decreasing function. We study the fundamental *continual (binary) counting* problem where each data item consists of a bit and the algorithm needs to output at each time step the sum of all the bits streamed so far. For a stream of length $T$ and privacy *without* expiration continual counting is possible with maximum (over all time steps) additive error $O(\log^2(T)/\varepsilon)$ and the best known lower bound is $\Omega(\log(T)/\varepsilon)$; closing this gap is a challenging open problem. We show that the situation is very different for privacy with gradual expiration by giving upper and lower bounds for a large set of expiration functions $g$. Specifically, our algorithm achieves an additive error of $O(\log(T)/\epsilon)$ for a large set of privacy expiration functions. We also give a lower bound that shows that if $C$ is the additive error of any $\epsilon$-DP algorithm for this problem, then the product of $C$ and the privacy expiration function after $2C$ steps must be $\Omega(\log(T)/\epsilon)$. Our algorithm matches this lower bound as its additive error is $O(\log(T)/\epsilon)$, even when $g(2C) = O(1)$. Our empirical evaluation shows that we achieve a slowly growing privacy loss that has significantly smaller empirical privacy loss for large values of $d$ than a natural baseline algorithm.
Continual Counting with Gradual Privacy Expiration
[ "Joel Daniel Andersson", "Monika Henzinger", "Rasmus Pagh", "Teresa Anna Steiner", "Jalaj Upadhyay" ]
NeurIPS.cc/2024/Conference
2406.03802
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=V6hrg4O9gg
@inproceedings{ tehranijamsaz2024coderosetta, title={CodeRosetta: Pushing the Boundaries of Unsupervised Code Translation for Parallel Programming}, author={Ali TehraniJamsaz and Arijit Bhattacharjee and Le Chen and Nesreen K. Ahmed and Amir Yazdanbakhsh and Ali Jannesari}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=V6hrg4O9gg} }
Automatic translation of programming languages has garnered renewed interest, driven by recent advancements in large language models (LLMs). Encoder-decoder transformer models, in particular, have shown promise in translating between different programming languages. However, translating between a language and its high-performance computing (HPC) extension remains underexplored due to inherent challenges like complex parallel semantics understanding. In this paper, we introduce CodeRosetta, an encoder-decoder transformer model explicitly designed for translating between programming languages and their HPC extensions. CodeRosetta is evaluated on C++ to CUDA and Fortran to C++ translation. It employs a customized learning-based framework with tailored pretraining and training objectives that enable it to effectively capture code semantics and parallel structural nuances, allowing for bidirectional code translation. Our results show that CodeRosetta outperforms state-of-the-art baselines in C++ to CUDA translation by 2.9 BLEU and 1.72 CodeBLEU points while improving compilation accuracy by 6.05%. Compared to general closed-source LLMs, our proposed bidirectional learning-based method improves C++ to CUDA translation by 22.08 BLEU and 14.39 CodeBLEU with 2.75% higher compilation accuracy. Finally, CodeRosetta exhibits proficiency in Fortran to parallel C++ translation, marking it, to our knowledge, as the first encoder-decoder model for such a complex translation task, improving CodeBLEU by at least 4.63 points compared to closed-source LLMs and Open Code LLM.
CodeRosetta: Pushing the Boundaries of Unsupervised Code Translation for Parallel Programming
[ "Ali TehraniJamsaz", "Arijit Bhattacharjee", "Le Chen", "Nesreen K. Ahmed", "Amir Yazdanbakhsh", "Ali Jannesari" ]
NeurIPS.cc/2024/Conference
2410.20527
[ "" ]
https://huggingface.co/papers/2410.20527
1
0
0
6
[ "CodeRosetta/CodeRosetta_cpp2cuda_ft", "CodeRosetta/CodeRosetta_cpp_cuda_base", "CodeRosetta/CodeRosetta_cpp_fortran_base", "CodeRosetta/CodeRosetta_fortran2cpp_ft" ]
[]
[]
[ "CodeRosetta/CodeRosetta_cpp2cuda_ft", "CodeRosetta/CodeRosetta_cpp_cuda_base", "CodeRosetta/CodeRosetta_cpp_fortran_base", "CodeRosetta/CodeRosetta_fortran2cpp_ft" ]
[]
[]
1
poster
null
https://openreview.net/forum?id=V4tzn87DtN
@inproceedings{ jiang2024stochastic, title={Stochastic Newton Proximal Extragradient Method}, author={Ruichen Jiang and Michal Derezinski and Aryan Mokhtari}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=V4tzn87DtN} }
Stochastic second-order methods are known to achieve fast local convergence in strongly convex optimization by relying on noisy Hessian estimates to precondition the gradient. Yet, most of these methods achieve superlinear convergence only when the stochastic Hessian noise diminishes, requiring an increase in the per-iteration cost as time progresses. Recent work in \cite{na2022hessian} addressed this issue via a Hessian averaging scheme that achieves a superlinear convergence rate without increasing the per-iteration cost. However, the considered method exhibits a slow global convergence rate, requiring up to $\tilde{\mathcal{O}}(\kappa^2)$ iterations to reach the superlinear rate of $\tilde{\mathcal{O}}((1/t)^{t/2})$, where $\kappa$ is the problem's condition number. In this paper, we propose a novel stochastic Newton proximal extragradient method that significantly improves these bounds, achieving a faster global linear rate and reaching the same fast superlinear rate in $\tilde{\mathcal{O}}(\kappa)$ iterations. We achieve this by developing a novel extension of the Hybrid Proximal Extragradient (HPE) framework, which simultaneously achieves fast global and local convergence rates for strongly convex functions with access to a noisy Hessian oracle.
Stochastic Newton Proximal Extragradient Method
[ "Ruichen Jiang", "Michal Derezinski", "Aryan Mokhtari" ]
NeurIPS.cc/2024/Conference
2406.01478
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster