Dataset schema (column name, type, and length range or number of distinct values):

| Column | Type | Range / values |
|---|---|---|
| bibtex_url | null | - |
| proceedings | string | length 42 |
| bibtext | string | length 197-848 |
| abstract | string | length 303-3.45k |
| title | string | length 10-159 |
| authors | sequence | length 1-34 |
| id | string | 44 distinct values |
| arxiv_id | string | length 0-10 |
| GitHub | sequence | length 1 |
| paper_page | string | 899 distinct values |
| n_linked_authors | int64 | -1 to 13 |
| upvotes | int64 | -1 to 109 |
| num_comments | int64 | -1 to 13 |
| n_authors | int64 | -1 to 92 |
| Models | sequence | length 0-100 |
| Datasets | sequence | length 0-19 |
| Spaces | sequence | length 0-100 |
| old_Models | sequence | length 0-100 |
| old_Datasets | sequence | length 0-19 |
| old_Spaces | sequence | length 0-100 |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
| type | string | 2 distinct values |
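The schema above can be explored programmatically with the `datasets` library. The following is a minimal sketch, assuming the dataset is hosted on the Hugging Face Hub; the repository id `username/neurips-2024-papers` is a hypothetical placeholder, not the actual identifier.

```python
# Minimal sketch: load the dataset and inspect the columns described above.
# NOTE: the repository id below is a hypothetical placeholder.
from datasets import load_dataset

ds = load_dataset("username/neurips-2024-papers", split="train")

# The loaded schema should match the feature table above.
print(ds.features)

# Sentinel conventions visible in the rows below: -1 marks missing
# Hugging Face paper-page statistics, an empty arxiv_id means no arXiv
# identifier, and paper_page_exists_pre_conf is a 0/1 flag.
with_page = ds.filter(lambda row: row["paper_page_exists_pre_conf"] == 1)
orals = ds.filter(lambda row: row["type"] == "oral")

print(f"{len(with_page)} of {len(ds)} papers had a paper page before the conference")
print(f"{len(orals)} orals, {len(ds) - len(orals)} posters")
```

The rows below follow the column order of the table above, one value per line as rendered by the dataset viewer; columns with empty values (for example `arxiv_id` or `paper_page`) appear to be omitted from a row's listing.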
null
https://openreview.net/forum?id=V42zfM2GXw
@inproceedings{ chen2024decrl, title={{DECRL}: A Deep Evolutionary Clustering Jointed Temporal Knowledge Graph Representation Learning Approach}, author={Qian Chen and Ling Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=V42zfM2GXw} }
Temporal Knowledge Graph (TKG) representation learning aims to map temporally evolving entities and relations to embedded representations in a continuous low-dimensional vector space. However, existing approaches cannot capture the temporal evolution of high-order correlations in TKGs. To this end, we propose a **D**eep **E**volutionary **C**lustering jointed temporal knowledge graph **R**epresentation **L**earning approach (**DECRL**). Specifically, a deep evolutionary clustering module is proposed to capture the temporal evolution of high-order correlations among entities. Furthermore, a cluster-aware unsupervised alignment mechanism is introduced to ensure the precise one-to-one alignment of soft overlapping clusters across timestamps, thereby maintaining the temporal smoothness of clusters. In addition, an implicit correlation encoder is introduced to capture latent correlations between any pair of clusters under the guidance of a global graph. Extensive experiments on seven real-world datasets demonstrate that DECRL achieves state-of-the-art performance, outperforming the best baseline by an average of 9.53\%, 12.98\%, 10.42\%, and 14.68\% in MRR, Hits@1, Hits@3, and Hits@10, respectively.
DECRL: A Deep Evolutionary Clustering Jointed Temporal Knowledge Graph Representation Learning Approach
[ "Qian Chen", "Ling Chen" ]
NeurIPS.cc/2024/Conference
2410.22631
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
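To make the flattened rows easier to read, here is a sketch of how the first record above (the DECRL paper) maps onto the schema as a Python dict. The long `bibtext` and `abstract` strings are truncated, and `paper_page` is shown as an empty string because the listing omits it for this row; that rendering of missing values is an assumption.

```python
# How one flattened record maps onto the schema (first row above, DECRL).
# The bibtext and abstract values are truncated here for brevity, and
# paper_page is assumed empty because the listing omits it for this row.
decrl_row = {
    "bibtex_url": None,
    "proceedings": "https://openreview.net/forum?id=V42zfM2GXw",
    "bibtext": "@inproceedings{ chen2024decrl, ... }",  # full BibTeX entry in the row above
    "abstract": "Temporal Knowledge Graph (TKG) ...",   # full abstract text in the row above
    "title": "DECRL: A Deep Evolutionary Clustering Jointed Temporal Knowledge Graph Representation Learning Approach",
    "authors": ["Qian Chen", "Ling Chen"],
    "id": "NeurIPS.cc/2024/Conference",
    "arxiv_id": "2410.22631",
    "GitHub": [""],
    "paper_page": "",          # assumed empty; not shown in the listing
    "n_linked_authors": -1,    # -1 = no Hugging Face paper-page statistics
    "upvotes": -1,
    "num_comments": -1,
    "n_authors": -1,
    "Models": [],
    "Datasets": [],
    "Spaces": [],
    "old_Models": [],
    "old_Datasets": [],
    "old_Spaces": [],
    "paper_page_exists_pre_conf": 0,
    "type": "poster",
}
```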
null
https://openreview.net/forum?id=V3QZCM1AQv
@inproceedings{ tseng2024reborn, title={{REBORN}: Reinforcement-Learned Boundary Segmentation with Iterative Training for Unsupervised {ASR}}, author={Liang-Hsuan Tseng and En-Pei Hu and Cheng-Han Chiang and Yuan Tseng and Hung-yi Lee and Lin-shan Lee and Shao-Hua Sun}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=V3QZCM1AQv} }
Unsupervised automatic speech recognition (ASR) aims to learn the mapping between the speech signal and its corresponding textual transcription without the supervision of paired speech-text data. A word/phoneme in the speech signal is represented by a segment of speech signal with variable length and unknown boundary, and this segmental structure makes learning the mapping between speech and text challenging, especially without paired data. In this paper, we propose REBORN, Reinforcement-Learned Boundary Segmentation with Iterative Training for Unsupervised ASR. REBORN alternates between (1) training a segmentation model that predicts the boundaries of the segmental structures in speech signals and (2) training the phoneme prediction model, whose input is a segmental structure segmented by the segmentation model, to predict a phoneme transcription. Since supervised data for training the segmentation model is not available, we use reinforcement learning to train the segmentation model to favor segmentations that yield phoneme sequence predictions with a lower perplexity. We conduct extensive experiments and find that under the same setting, REBORN outperforms all prior unsupervised ASR models on LibriSpeech, TIMIT, and five non-English languages in Multilingual LibriSpeech. We comprehensively analyze why the boundaries learned by REBORN improve the unsupervised ASR performance.
REBORN: Reinforcement-Learned Boundary Segmentation with Iterative Training for Unsupervised ASR
[ "Liang-Hsuan Tseng", "En-Pei Hu", "Cheng-Han Chiang", "Yuan Tseng", "Hung-yi Lee", "Lin-shan Lee", "Shao-Hua Sun" ]
NeurIPS.cc/2024/Conference
2402.03988
[ "https://github.com/andybi7676/reborn-uasr" ]
https://huggingface.co/papers/2402.03988
1
0
0
7
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=V2e0A2XIPF
@inproceedings{ xu2024qtvit, title={{QT}-ViT: Improving Linear Attention in ViT with Quadratic Taylor Expansion}, author={Yixing Xu and Chao Li and Dong Li and Xiao Sheng and Fan Jiang and Lu Tian and Emad Barsoum}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=V2e0A2XIPF} }
The vision transformer model (ViT) is widely used and performs well in vision tasks due to its ability to capture long-range dependencies. However, the time complexity and memory consumption increase quadratically with the number of input patches, which limits the usage of ViT in real-world applications. Previous methods have employed linear attention to mitigate the complexity of the original self-attention mechanism at the expense of effectiveness. In this paper, we propose QT-ViT models that improve the previous linear self-attention using quadratic Taylor expansion. Specifically, we substitute the softmax-based attention with second-order Taylor expansion, and then accelerate the quadratic expansion by reducing the time complexity with a fast approximation algorithm. The proposed method capitalizes on the property of quadratic expansion to achieve superior performance while employing linear approximation for fast inference. Compared to previous studies of linear attention, our approach does not necessitate knowledge distillation or high-order attention residuals to facilitate the training process. Extensive experiments demonstrate the efficiency and effectiveness of the proposed QT-ViTs, showcasing state-of-the-art results. Particularly, the proposed QT-ViTs consistently surpass the previous SOTA EfficientViTs under different model sizes, and achieve a new Pareto-front in terms of accuracy and speed.
QT-ViT: Improving Linear Attention in ViT with Quadratic Taylor Expansion
[ "Yixing Xu", "Chao Li", "Dong Li", "Xiao Sheng", "Fan Jiang", "Lu Tian", "Emad Barsoum" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=V2MBWYXp63
@inproceedings{ luo2024textnkg, title={Text2{NKG}: Fine-Grained N-ary Relation Extraction for N-ary relational Knowledge Graph Construction}, author={Haoran Luo and Haihong E and Yuhao Yang and Tianyu Yao and Yikai Guo and Zichen Tang and Wentai Zhang and Shiyao Peng and Kaiyang Wan and Meina Song and Wei Lin and Yifan Zhu and Anh Tuan Luu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=V2MBWYXp63} }
Beyond traditional binary relational facts, n-ary relational knowledge graphs (NKGs) are comprised of n-ary relational facts containing more than two entities, which are closer to real-world facts with broader applications. However, the construction of NKGs remains at a coarse-grained level, always restricted to a single schema and ignoring the order and variable arity of entities. To address these restrictions, we propose Text2NKG, a novel fine-grained n-ary relation extraction framework for n-ary relational knowledge graph construction. We introduce a span-tuple classification approach with hetero-ordered merging and output merging to accomplish fine-grained n-ary relation extraction with different arities. Furthermore, Text2NKG supports four typical NKG schemas: hyper-relational schema, event-based schema, role-based schema, and hypergraph-based schema, with high flexibility and practicality. The experimental results demonstrate that Text2NKG achieves state-of-the-art performance in F1 scores on the fine-grained n-ary relation extraction benchmark. Our code and datasets are publicly available.
Text2NKG: Fine-Grained N-ary Relation Extraction for N-ary relational Knowledge Graph Construction
[ "Haoran Luo", "Haihong E", "Yuhao Yang", "Tianyu Yao", "Yikai Guo", "Zichen Tang", "Wentai Zhang", "Shiyao Peng", "Kaiyang Wan", "Meina Song", "Wei Lin", "Yifan Zhu", "Anh Tuan Luu" ]
NeurIPS.cc/2024/Conference
2310.05185
[ "https://github.com/lhrlab/text2nkg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=V0oJaLqY4E
@inproceedings{ yoon2024maximum, title={Maximum Entropy Inverse Reinforcement Learning of Diffusion Models with Energy-Based Models}, author={Sangwoong Yoon and Himchan Hwang and Dohyun Kwon and Yung-Kyun Noh and Frank C. Park}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=V0oJaLqY4E} }
We present a maximum entropy inverse reinforcement learning (IRL) approach for improving the sample quality of diffusion generative models, especially when the number of generation time steps is small. Similar to how IRL trains a policy based on the reward function learned from expert demonstrations, we train (or fine-tune) a diffusion model using the log probability density estimated from training data. Since we employ an energy-based model (EBM) to represent the log density, our approach boils down to the joint training of a diffusion model and an EBM. Our IRL formulation, named Diffusion by Maximum Entropy IRL (DxMI), is a minimax problem that reaches equilibrium when both models converge to the data distribution. The entropy maximization plays a key role in DxMI, facilitating the exploration of the diffusion model and ensuring the convergence of the EBM. We also propose Diffusion by Dynamic Programming (DxDP), a novel reinforcement learning algorithm for diffusion models, as a subroutine in DxMI. DxDP makes the diffusion model update in DxMI efficient by transforming the original problem into an optimal control formulation where value functions replace back-propagation in time. Our empirical studies show that diffusion models fine-tuned using DxMI can generate high-quality samples in as few as 4 and 10 steps. Additionally, DxMI enables the training of an EBM without MCMC, stabilizing EBM training dynamics and enhancing anomaly detection performance.
Maximum Entropy Inverse Reinforcement Learning of Diffusion Models with Energy-Based Models
[ "Sangwoong Yoon", "Himchan Hwang", "Dohyun Kwon", "Yung-Kyun Noh", "Frank C. Park" ]
NeurIPS.cc/2024/Conference
2407.00626
[ "https://github.com/swyoon/diffusion-by-maxentirl" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=V0JvwCQlJe
@inproceedings{ kose2024fairwire, title={FairWire: Fair Graph Generation}, author={Oyku Deniz Kose and Yanning Shen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=V0JvwCQlJe} }
Machine learning over graphs has recently attracted growing attention due to its ability to analyze and learn complex relations within critical interconnected systems. However, the disparate impact that is amplified by the use of biased graph structures in these algorithms has raised significant concerns for their deployment in real-world decision systems. In addition, while synthetic graph generation has become pivotal for privacy and scalability considerations, the impact of generative learning algorithms on structural bias has not yet been investigated. Motivated by these observations, this work focuses on the analysis and mitigation of structural bias for both real and synthetic graphs. Specifically, we first theoretically analyze the sources of structural bias that result in disparity for the predictions of dyadic relations. To alleviate the identified bias factors, we design a novel fairness regularizer that offers versatile use. Faced with the bias amplification in graph generation models brought to light in this work, we further propose a fair graph generation framework, FairWire, by leveraging our fair regularizer design in a generative model. Experimental results on real-world networks validate that the proposed tools herein deliver effective structural bias mitigation for both real and synthetic graphs.
FairWire: Fair Graph Generation
[ "Oyku Deniz Kose", "Yanning Shen" ]
NeurIPS.cc/2024/Conference
2402.04383
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Uz804qLJT2
@inproceedings{ tiberi2024dissecting, title={Dissecting the Interplay of Attention Paths in a Statistical Mechanics Theory of Transformers}, author={Lorenzo Tiberi and Francesca Mignacco and Kazuki Irie and Haim Sompolinsky}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Uz804qLJT2} }
Despite the remarkable empirical performance of Transformers, their theoretical understanding remains elusive. Here, we consider a deep multi-head self-attention network, that is closely related to Transformers yet analytically tractable. We develop a statistical mechanics theory of Bayesian learning in this model, deriving exact equations for the network's predictor statistics under the finite-width thermodynamic limit, i.e., $N,P\rightarrow\infty$, $P/N=\mathcal{O}(1)$, where $N$ is the network width and $P$ is the number of training examples. Our theory shows that the predictor statistics are expressed as a sum of independent kernels, each one pairing different "attention paths", defined as information pathways through different attention heads across layers. The kernels are weighted according to a "task-relevant kernel combination" mechanism that aligns the total kernel with the task labels. As a consequence, this interplay between attention paths enhances generalization performance. Experiments confirm our findings on both synthetic and real-world sequence classification tasks. Finally, our theory explicitly relates the kernel combination mechanism to properties of the learned weights, allowing for a qualitative transfer of its insights to models trained via gradient descent. As an illustration, we demonstrate an efficient size reduction of the network, by pruning those attention heads that are deemed less relevant by our theory.
Dissecting the Interplay of Attention Paths in a Statistical Mechanics Theory of Transformers
[ "Lorenzo Tiberi", "Francesca Mignacco", "Kazuki Irie", "Haim Sompolinsky" ]
NeurIPS.cc/2024/Conference
2405.15926
[ "https://github.com/tiberilor/attention-paths-interplay" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Uymv9ThB50
@inproceedings{ xu2024uncovering, title={Uncovering Safety Risks of Large Language Models through Concept Activation Vector}, author={Zhihao Xu and Ruixuan HUANG and Changyu Chen and Xiting Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Uymv9ThB50} }
Despite careful safety alignment, current large language models (LLMs) remain vulnerable to various attacks. To further unveil the safety risks of LLMs, we introduce a Safety Concept Activation Vector (SCAV) framework, which effectively guides the attacks by accurately interpreting LLMs' safety mechanisms. We then develop an SCAV-guided attack method that can generate both attack prompts and embedding-level attacks with automatically selected perturbation hyperparameters. Both automatic and human evaluations demonstrate that our attack method significantly improves the attack success rate and response quality while requiring less training data. Additionally, we find that our generated attack prompts may be transferable to GPT-4, and the embedding-level attacks may also be transferred to other white-box LLMs whose parameters are known. Our experiments further uncover the safety risks present in current LLMs. For example, in our evaluation of seven open-source LLMs, we observe an average attack success rate of 99.14%, based on the classic keyword-matching criterion. Finally, we provide insights into the safety mechanism of LLMs. The code is available at https://github.com/SproutNan/AI-Safety_SCAV.
Uncovering Safety Risks of Large Language Models through Concept Activation Vector
[ "Zhihao Xu", "Ruixuan HUANG", "Changyu Chen", "Xiting Wang" ]
NeurIPS.cc/2024/Conference
2404.12038
[ "https://github.com/sproutnan/ai-safety_scav" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UwvjJZWjPT
@inproceedings{ lippl2024inductive, title={Inductive biases of multi-task learning and finetuning: multiple regimes of feature reuse}, author={Samuel Lippl and Jack Lindsey}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UwvjJZWjPT} }
Neural networks are often trained on multiple tasks, either simultaneously (multi-task learning, MTL) or sequentially (pretraining and subsequent finetuning, PT+FT). In particular, it is common practice to pretrain neural networks on a large auxiliary task before finetuning on a downstream task with fewer samples. Despite the prevalence of this approach, the inductive biases that arise from learning multiple tasks are poorly characterized. In this work, we address this gap. We describe novel implicit regularization penalties associated with MTL and PT+FT in diagonal linear networks and single-hidden-layer ReLU networks. These penalties indicate that MTL and PT+FT induce the network to reuse features in different ways. 1) Both MTL and PT+FT exhibit biases towards feature reuse between tasks, and towards sparsity in the set of learned features. We show a "conservation law" that implies a direct tradeoff between these two biases. 2) PT+FT exhibits a novel "nested feature selection" regime, not described by either the "lazy" or "rich" regimes identified in prior work, which biases it to *rely on a sparse subset* of the features learned during pretraining. This regime is much narrower for MTL. 3) PT+FT (but not MTL) in ReLU networks benefits from features that are correlated between the auxiliary and main task. We confirm these findings empirically with teacher-student models, and introduce a technique -- weight rescaling following pretraining -- that can elicit the nested feature selection regime. Finally, we validate our theory in deep neural networks trained on image classification. We find that weight rescaling improves performance when it causes models to display signatures of nested feature selection. Our results suggest that nested feature selection may be an important inductive bias for finetuning neural networks.
Inductive biases of multi-task learning and finetuning: multiple regimes of feature reuse
[ "Samuel Lippl", "Jack Lindsey" ]
NeurIPS.cc/2024/Conference
2310.02396
[ "https://github.com/sflippl/multi-task" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Uw2eJOI822
@inproceedings{ huang2024renovating, title={Renovating Names in Open-Vocabulary Segmentation Benchmarks}, author={Haiwen Huang and Songyou Peng and Dan Zhang and Andreas Geiger}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Uw2eJOI822} }
Names are essential to both human cognition and vision-language models. Open-vocabulary models utilize class names as text prompts to generalize to categories unseen during training. However, the precision of these names is often overlooked in existing datasets. In this paper, we address this underexplored problem by presenting a framework for "renovating" names in open-vocabulary segmentation benchmarks (RENOVATE). Our framework features a renaming model that enhances the quality of names for each visual segment. Through experiments, we demonstrate that our renovated names help train stronger open-vocabulary models with up to 15% relative improvement and significantly enhance training efficiency with improved data quality. We also show that our renovated names improve evaluation by better measuring misclassification and enabling fine-grained model analysis. We provide our code and relabelings for several popular segmentation datasets to the research community on our project page: https://andrehuang.github.io/renovate.
Renovating Names in Open-Vocabulary Segmentation Benchmarks
[ "Haiwen Huang", "Songyou Peng", "Dan Zhang", "Andreas Geiger" ]
NeurIPS.cc/2024/Conference
2403.09593
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UvbpbEhGaw
@inproceedings{ fr{\"a}nken2024selfsupervised, title={Self-Supervised Alignment with Mutual Information: Learning to Follow Principles without Preference Labels}, author={Jan-Philipp Fr{\"a}nken and Eric Zelikman and Rafael Rafailov and Kanishk Gandhi and Tobias Gerstenberg and Noah Goodman}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UvbpbEhGaw} }
When prompting a language model (LM), users often expect the model to adhere to a set of behavioral principles across diverse tasks, such as producing insightful content while avoiding harmful or biased language. Instilling such principles (i.e., a constitution) into a model is resource-intensive, technically challenging, and generally requires human preference labels or examples. We introduce SAMI, an iterative algorithm that finetunes a pretrained language model (without requiring preference labels or demonstrations) to increase the conditional mutual information between constitutions and self-generated responses given queries from a dataset. On single-turn dialogue and summarization, a SAMI-trained mistral-7b outperforms the initial pretrained model, with win rates between 66% and 77%. Strikingly, it also surpasses an instruction-finetuned baseline (mistral-7b-instruct) with win rates between 55% and 57% on single-turn dialogue. SAMI requires a model that writes the principles. To avoid dependence on strong models for writing principles, we align a strong pretrained model (mixtral-8x7b) using constitutions written by a weak instruction-finetuned model (mistral-7b-instruct), achieving a 65% win rate on summarization. Finally, we investigate whether SAMI generalizes to diverse summarization principles (e.g., "summaries should be scientific") and scales to stronger models (llama3-70b), finding that it achieves win rates of up to 68% for learned and 67% for held-out principles compared to the base model. Our results show that a pretrained LM can learn to follow constitutions without using preference labels, demonstrations, or human oversight.
Self-Supervised Alignment with Mutual Information: Learning to Follow Principles without Preference Labels
[ "Jan-Philipp Fränken", "Eric Zelikman", "Rafael Rafailov", "Kanishk Gandhi", "Tobias Gerstenberg", "Noah Goodman" ]
NeurIPS.cc/2024/Conference
2404.14313
[ "https://github.com/janphilippfranken/sami" ]
https://huggingface.co/papers/2404.14313
4
0
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=UuiZEOVtHx
@inproceedings{ zhang2024safe, title={Safe and Efficient: A Primal-Dual Method for Offline Convex {CMDP}s under Partial Data Coverage}, author={Haobo Zhang and Xiyue Peng and Honghao Wei and Xin Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UuiZEOVtHx} }
Offline safe reinforcement learning (RL) aims to find an optimal policy using a pre-collected dataset when data collection is impractical or risky. We propose a novel linear programming (LP) based primal-dual algorithm for convex MDPs that incorporates ``uncertainty'' parameters to improve data efficiency while requiring only a partial data coverage assumption. Our theoretical results achieve a sample complexity of $\mathcal{O}(1/(1-\gamma)\sqrt{n})$ under general function approximation, improving the current state-of-the-art by a factor of $1/(1-\gamma)$, where $n$ is the number of data samples in an offline dataset, and $\gamma$ is the discount factor. The numerical experiments validate our theoretical findings, demonstrating the practical efficacy of our approach in achieving improved safety and learning efficiency in safe offline settings.
Safe and Efficient: A Primal-Dual Method for Offline Convex CMDPs under Partial Data Coverage
[ "Haobo Zhang", "Xiyue Peng", "Honghao Wei", "Xin Liu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UtbjD5LGnC
@inproceedings{ taturyan2024regression, title={Regression under demographic parity constraints via unlabeled post-processing}, author={Gayane Taturyan and Evgenii Chzhen and Mohamed Hebiri}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UtbjD5LGnC} }
We address the problem of performing regression while ensuring demographic parity, even without access to sensitive attributes during inference. We present a general-purpose post-processing algorithm that, using accurate estimates of the regression function and a sensitive attribute predictor, generates predictions that meet the demographic parity constraint. Our method involves discretization and stochastic minimization of a smooth convex function. It is suitable for online post-processing and multi-class classification tasks only involving unlabeled data for the post-processing. Unlike prior methods, our approach is fully theory-driven. We require precise control over the gradient norm of the convex function, and thus, we rely on more advanced techniques than standard stochastic gradient descent. Our algorithm is backed by finite-sample analysis and post-processing bounds, with experimental results validating our theoretical findings.
Regression under demographic parity constraints via unlabeled post-processing
[ "Gayane Taturyan", "Evgenii Chzhen", "Mohamed Hebiri" ]
NeurIPS.cc/2024/Conference
2407.15453
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UtTjgMDTFO
@inproceedings{ dyer2024interventionally, title={Interventionally Consistent Surrogates for Complex Simulation Models}, author={Joel Dyer and Nicholas George Bishop and Yorgos Felekis and Fabio Massimo Zennaro and Ani Calinescu and Theodoros Damoulas and Michael J. Wooldridge}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UtTjgMDTFO} }
Large-scale simulation models of complex socio-technical systems provide decision-makers with high-fidelity testbeds in which policy interventions can be evaluated and _what-if_ scenarios explored. Unfortunately, the high computational cost of such models inhibits their widespread use in policy-making settings. Surrogate models can address these computational limitations, but to do so they must behave consistently with the simulator under interventions of interest. In this paper, we build upon recent developments in causal abstractions to develop a framework for learning interventionally consistent surrogate models for large-scale, complex simulation models. We provide theoretical results showing that our proposed approach induces surrogates to behave consistently with high probability with respect to the simulator across interventions of interest, facilitating rapid experimentation with policy interventions in complex systems. We further demonstrate with empirical studies that conventionally trained surrogates can misjudge the effect of interventions and misguide decision-makers towards suboptimal interventions, while surrogates trained for _interventional_ consistency with our method closely mimic the behaviour of the original simulator under interventions of interest.
Interventionally Consistent Surrogates for Complex Simulation Models
[ "Joel Dyer", "Nicholas George Bishop", "Yorgos Felekis", "Fabio Massimo Zennaro", "Ani Calinescu", "Theodoros Damoulas", "Michael J. Wooldridge" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Ur9f4hNIpN
@inproceedings{ li2024predictorcorrector, title={Predictor-Corrector Enhanced Transformers with Exponential Moving Average Coefficient Learning}, author={Bei Li and Tong Zheng and Rui Wang and Jiahao Liu and Qingyan Guo and Junliang Guo and Xu Tan and Tong Xiao and JingBo Zhu and Jingang Wang and Xunliang Cai}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Ur9f4hNIpN} }
Residual networks, as discrete approximations of Ordinary Differential Equations (ODEs), have inspired significant advancements in neural network design, including multistep methods, high-order methods, and multi-particle dynamical systems. The precision of the solution to ODEs significantly affects parameter optimization, thereby impacting model performance. In this work, we present a series of advanced explorations of Transformer architecture design to minimize the error compared to the true ``solution.'' First, we introduce a predictor-corrector learning framework to minimize truncation errors, which consists of a high-order predictor and a multistep corrector. Second, we propose an exponential moving average-based coefficient learning method to strengthen our higher-order predictor. Extensive experiments on large-scale machine translation, abstractive summarization, language modeling, and natural language understanding benchmarks demonstrate the superiority of our approach. On the WMT'14 English-German and English-French tasks, our model achieved BLEU scores of 30.95 and 44.27, respectively. Furthermore, on the OPUS multilingual machine translation task, our model surpasses a robust 3.8B DeepNet by an average of 2.9 SacreBLEU, using only 1/3 of the parameters. Notably, it also beats LLaMA models by 5.7 accuracy points on the LM Harness Evaluation.
Predictor-Corrector Enhanced Transformers with Exponential Moving Average Coefficient Learning
[ "Bei Li", "Tong Zheng", "Rui Wang", "Jiahao Liu", "Qingyan Guo", "Junliang Guo", "Xu Tan", "Tong Xiao", "JingBo Zhu", "Jingang Wang", "Xunliang Cai" ]
NeurIPS.cc/2024/Conference
2411.03042
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Ur00BNk1v2
@inproceedings{ wang2024genartist, title={GenArtist: Multimodal {LLM} as an Agent for Unified Image Generation and Editing}, author={Zhenyu Wang and Aoxue Li and Zhenguo Li and Xihui Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Ur00BNk1v2} }
Despite the success achieved by existing image generation and editing methods, current models still struggle with complex problems including intricate text prompts, and the absence of verification and self-correction mechanisms makes the generated images unreliable. Meanwhile, a single model tends to specialize in particular tasks and possess the corresponding capabilities, making it inadequate for fulfilling all user requirements. We propose GenArtist, a unified image generation and editing system, coordinated by a multimodal large language model (MLLM) agent. We integrate a comprehensive range of existing models into the tool library and utilize the agent for tool selection and execution. For a complex problem, the MLLM agent decomposes it into simpler sub-problems and constructs a tree structure to systematically plan the procedure of generation, editing, and self-correction with step-by-step verification. By automatically generating missing position-related inputs and incorporating position information, the appropriate tool can be effectively employed to address each sub-problem. Experiments demonstrate that GenArtist can perform various generation and editing tasks, achieving state-of-the-art performance and surpassing existing models such as SDXL and DALL-E 3, as can be seen in Fig. 1. We will open-source the code for future research and applications.
GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing
[ "Zhenyu Wang", "Aoxue Li", "Zhenguo Li", "Xihui Liu" ]
NeurIPS.cc/2024/Conference
2407.05600
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=UqvEHAnCJC
@inproceedings{ lo2024endtoend, title={End-to-End Ontology Learning with Large Language Models}, author={Andy Lo and Albert Q. Jiang and Wenda Li and Mateja Jamnik}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UqvEHAnCJC} }
Ontologies are useful for automatic machine processing of domain knowledge as they represent it in a structured format. Yet, constructing ontologies requires substantial manual effort. To automate part of this process, large language models (LLMs) have been applied to solve various subtasks of ontology learning. However, this partial ontology learning does not capture the interactions between subtasks. We address this gap by introducing OLLM, a general and scalable method for building the taxonomic backbone of an ontology from scratch. Rather than focusing on subtasks, like individual relations between entities, we model entire subcomponents of the target ontology by finetuning an LLM with a custom regulariser that reduces overfitting on high-frequency concepts. We introduce a novel suite of metrics for evaluating the quality of the generated ontology by measuring its semantic and structural similarity to the ground truth. In contrast to standard metrics, our metrics use deep learning techniques to define more robust distance measures between graphs. Both our quantitative and qualitative results on Wikipedia show that OLLM outperforms subtask composition methods, producing more semantically accurate ontologies while maintaining structural integrity. We further demonstrate that our model can be effectively adapted to new domains, like arXiv, needing only a small number of training examples. Our source code and datasets are available at https://github.com/andylolu2/ollm.
End-to-End Ontology Learning with Large Language Models
[ "Andy Lo", "Albert Q. Jiang", "Wenda Li", "Mateja Jamnik" ]
NeurIPS.cc/2024/Conference
2410.23584
[ "https://github.com/andylolu2/ollm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UoxuaOGV6B
@inproceedings{ zhang2024spectral, title={Spectral Adapter: Fine-Tuning in Spectral Space}, author={Fangzhao Zhang and Mert Pilanci}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UoxuaOGV6B} }
Recent developments in Parameter-Efficient Fine-Tuning (PEFT) methods for pretrained deep neural networks have captured widespread interest. In this work, we study the enhancement of current PEFT methods by incorporating the spectral information of pretrained weight matrices into the fine-tuning procedure. We investigate two spectral adaptation mechanisms, namely additive tuning and orthogonal rotation of the top singular vectors, both of which are performed by first carrying out a Singular Value Decomposition (SVD) of the pretrained weights and then fine-tuning the top spectral space. We provide a theoretical analysis of spectral fine-tuning and show that our approach improves the rank capacity of low-rank adapters given a fixed trainable parameter budget. We show through extensive experiments that the proposed fine-tuning model enables better parameter efficiency and tuning performance and also benefits multi-adapter fusion. The source code will be open-sourced for reproducibility.
Spectral Adapter: Fine-Tuning in Spectral Space
[ "Fangzhao Zhang", "Mert Pilanci" ]
NeurIPS.cc/2024/Conference
2405.13952
[ "https://github.com/pilancilab/spectral_adapter" ]
https://huggingface.co/papers/2405.13952
2
0
1
2
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=UmW9BYj761
@inproceedings{ pouget2024no, title={No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models}, author={Ang{\'e}line Pouget and Lucas Beyer and Emanuele Bugliarello and Xiao Wang and Andreas Peter Steiner and Xiaohua Zhai and Ibrahim Alabdulmohsin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UmW9BYj761} }
We study cultural and socioeconomic diversity in contrastive vision-language models (VLMs). Using a broad range of benchmark datasets and evaluation metrics, we bring to attention several important findings. First, the common filtering of training data to English image-text pairs disadvantages communities of lower socioeconomic status and negatively impacts cultural understanding. Notably, this performance gap is not captured by - and even at odds with - the currently popular evaluation metrics derived from the Western-centric ImageNet and COCO datasets. Second, pretraining with global, unfiltered data before fine-tuning on English content can improve cultural understanding without sacrificing performance on said popular benchmarks. Third, we introduce the task of geo-localization as a novel evaluation metric to assess cultural diversity in VLMs. Our work underscores the value of using diverse data to create more inclusive multimodal systems and lays the groundwork for developing VLMs that better represent global perspectives.
No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models
[ "Angéline Pouget", "Lucas Beyer", "Emanuele Bugliarello", "Xiao Wang", "Andreas Peter Steiner", "Xiaohua Zhai", "Ibrahim Alabdulmohsin" ]
NeurIPS.cc/2024/Conference
2405.13777
[ "https://github.com/google-research/big_vision" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Ul3lDYo3XQ
@inproceedings{ fengpeiyuan2024agile, title={{AGILE}: A Novel Reinforcement Learning Framework of {LLM} Agents}, author={FengPeiyuan and Yichen He and Guanhua Huang and Yuan Lin and Hanchong Zhang and Yuchen Zhang and Hang Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Ul3lDYo3XQ} }
We introduce a novel reinforcement learning framework of LLM agents named AGILE (AGent that Interacts and Learns from Environments) designed to perform complex conversational tasks with users, leveraging LLMs, memory, tools, and interactions with experts. The agent possesses capabilities beyond conversation, including reflection, tool usage, and expert consultation. We formulate the construction of such an LLM agent as a reinforcement learning (RL) problem, in which the LLM serves as the policy model. We fine-tune the LLM using labeled data of actions and the PPO algorithm. We focus on question answering and release a dataset for agents called ProductQA, comprising challenging questions in online shopping. Our extensive experiments on ProductQA, MedMCQA and HotPotQA show that AGILE agents based on 7B and 13B LLMs trained with PPO can outperform GPT-4 agents. Our ablation study highlights the indispensability of memory, tools, consultation, reflection, and reinforcement learning in achieving the agent's strong performance. Datasets and code are available at https://github.com/bytarnish/AGILE.
AGILE: A Novel Reinforcement Learning Framework of LLM Agents
[ "FengPeiyuan", "Yichen He", "Guanhua Huang", "Yuan Lin", "Hanchong Zhang", "Yuchen Zhang", "Hang Li" ]
NeurIPS.cc/2024/Conference
2405.14751
[ "https://github.com/bytarnish/agile" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UkxJd64mki
@inproceedings{ gao2024strategyllm, title={Strategy{LLM}: Large Language Models as Strategy Generators, Executors, Optimizers, and Evaluators for Problem Solving}, author={Chang Gao and Haiyun Jiang and Deng Cai and Shuming Shi and Wai Lam}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UkxJd64mki} }
Most existing prompting methods suffer from the issues of generalizability and consistency, as they often rely on instance-specific solutions that may not be applicable to other instances and lack task-level consistency across the selected few-shot examples. To address these limitations, we propose a comprehensive framework, StrategyLLM, allowing LLMs to perform inductive reasoning, deriving general strategies from specific task instances, and deductive reasoning, applying these general strategies to particular task examples, for constructing generalizable and consistent few-shot prompts. It employs four LLM-based agents: strategy generator, executor, optimizer, and evaluator, working together to generate, evaluate, and select promising strategies for a given task. Experimental results demonstrate that StrategyLLM outperforms the competitive baseline CoT-SC that requires human-annotated solutions on 13 datasets across 4 challenging tasks without human involvement, including math reasoning (34.2\% $\rightarrow$ 38.8\%), commonsense reasoning (70.3\% $\rightarrow$ 72.5\%), algorithmic reasoning (73.7\% $\rightarrow$ 85.0\%), and symbolic reasoning (30.0\% $\rightarrow$ 79.2\%). Further analysis reveals that StrategyLLM is applicable to various LLMs and demonstrates advantages across numerous scenarios.
StrategyLLM: Large Language Models as Strategy Generators, Executors, Optimizers, and Evaluators for Problem Solving
[ "Chang Gao", "Haiyun Jiang", "Deng Cai", "Shuming Shi", "Wai Lam" ]
NeurIPS.cc/2024/Conference
2311.08803
[ "https://github.com/gao-xiao-bai/strategyllm" ]
https://huggingface.co/papers/2311.08803
0
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=UkauUrTbxx
@inproceedings{ hou2024protransformer, title={ProTransformer: Robustify Transformers via Plug-and-Play Paradigm}, author={Zhichao Hou and Weizhi Gao and Yuchen Shen and Feiyi Wang and Xiaorui Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UkauUrTbxx} }
Transformer-based architectures have dominated various areas of machine learning in recent years. In this paper, we introduce a novel robust attention mechanism designed to enhance the resilience of transformer-based architectures. Crucially, this technique can be integrated into existing transformers as a plug-and-play layer, improving their robustness without the need for additional training or fine-tuning. Through comprehensive experiments and ablation studies, we demonstrate that our ProTransformer significantly enhances the robustness of transformer models across a variety of prediction tasks, attack mechanisms, backbone architectures, and data domains. Notably, without further fine-tuning, the ProTransformer consistently improves the performance of vanilla transformers by 19.5\%, 28.3\%, 16.1\%, and 11.4\% for BERT, ALBERT, DistilBERT, and RoBERTa, respectively, under the classical TextFooler attack. Furthermore, ProTransformer shows promising resilience in large language models (LLMs) against prompting-based attacks, improving the performance of T5 and LLaMA by 24.8\% and 17.8\%, respectively, and enhancing Vicuna by an average of 10.4\% against the Jailbreaking attack. Beyond the language domain, ProTransformer also demonstrates outstanding robustness in both vision and graph domains.
ProTransformer: Robustify Transformers via Plug-and-Play Paradigm
[ "Zhichao Hou", "Weizhi Gao", "Yuchen Shen", "Feiyi Wang", "Xiaorui Liu" ]
NeurIPS.cc/2024/Conference
2410.23182
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Ujo8V7iXmR
@inproceedings{ hajiaghayi2024ad, title={Ad Auctions for {LLM}s via Retrieval Augmented Generation}, author={MohammadTaghi Hajiaghayi and Sebastien Lahaie and Keivan Rezaei and Suho Shin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Ujo8V7iXmR} }
In the field of computational advertising, the integration of ads into the outputs of large language models (LLMs) presents an opportunity to support these services without compromising content integrity. This paper introduces novel auction mechanisms for ad allocation and pricing within the textual outputs of LLMs, leveraging retrieval-augmented generation (RAG). We propose a \emph{segment auction} where an ad is probabilistically retrieved for each discourse segment (paragraph, section, or entire output) according to its bid and relevance, following the RAG framework, and priced according to competing bids. We show that our auction maximizes logarithmic social welfare, a new notion of welfare that balances allocation efficiency and fairness, and we characterize the associated incentive-compatible pricing rule. These results are extended to multi-ad allocation per segment. An empirical evaluation validates the feasibility and effectiveness of our approach over several ad auction scenarios, and exhibits inherent tradeoffs in metrics as we allow the LLM more flexibility to allocate ads.
Ad Auctions for LLMs via Retrieval Augmented Generation
[ "MohammadTaghi Hajiaghayi", "Sebastien Lahaie", "Keivan Rezaei", "Suho Shin" ]
NeurIPS.cc/2024/Conference
2406.09459
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UixTytSVOl
@inproceedings{ fan2024crossmodal, title={Cross-modal Representation Flattening for Multi-modal Domain Generalization}, author={Yunfeng FAN and Wenchao Xu and Haozhao Wang and Song Guo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UixTytSVOl} }
Multi-modal domain generalization (MMDG) requires that models trained on multi-modal source domains can generalize to unseen target distributions with the same modality set. Sharpness-aware minimization (SAM) is an effective technique for traditional uni-modal domain generalization (DG); however, it yields only limited improvement in MMDG. In this paper, we identify that modality competition and discrepant uni-modal flatness are two main factors that restrict multi-modal generalization. To overcome these challenges, we propose to construct consistent flat loss regions and enhance knowledge exploitation for each modality via cross-modal knowledge transfer. First, we turn to optimization on representation-space loss landscapes instead of the traditional parameter space, which allows us to build connections between modalities directly. Then, we introduce a novel method to flatten the high-loss region between minima from different modalities by interpolating mixed multi-modal representations. We implement this method by distilling and optimizing generalizable interpolated representations and assigning distinct weights to each modality considering their divergent generalization capabilities. Extensive experiments are performed on two benchmark datasets, EPIC-Kitchens and Human-Animal-Cartoon (HAC), with various modality combinations, demonstrating the effectiveness of our method under multi-source and single-source settings. Our code is open-sourced.
Cross-modal Representation Flattening for Multi-modal Domain Generalization
[ "Yunfeng FAN", "Wenchao Xu", "Haozhao Wang", "Song Guo" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UiQkFXLfbu
@inproceedings{ behari2024a, title={A Decision-Language Model ({DLM}) for Dynamic Restless Multi-Armed Bandit Tasks in Public Health}, author={Nikhil Behari and Edwin Zhang and YUNFAN ZHAO and Aparna Taneja and Dheeraj Mysore Nagaraj and Milind Tambe}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UiQkFXLfbu} }
Restless multi-armed bandits (RMAB) have demonstrated success in optimizing resource allocation for large beneficiary populations in public health settings. Unfortunately, RMAB models lack flexibility to adapt to evolving public health policy priorities. Concurrently, Large Language Models (LLMs) have emerged as adept automated planners across domains of robotic control and navigation. In this paper, we propose a Decision Language Model (DLM) for RMABs, enabling dynamic fine-tuning of RMAB policies in public health settings using human-language commands. We propose using LLMs as automated planners to (1) interpret human policy preference prompts, (2) propose reward functions as code for a multi-agent RMAB environment, and (3) iterate on the generated reward functions using feedback from grounded RMAB simulations. We illustrate the application of DLM in collaboration with ARMMAN, an India-based non-profit promoting preventative care for pregnant mothers, that currently relies on RMAB policies to optimally allocate health worker calls to low-resource populations. We conduct a technology demonstration in simulation using the Gemini Pro model, showing DLM can dynamically shape policy outcomes using only human prompts as input.
A Decision-Language Model (DLM) for Dynamic Restless Multi-Armed Bandit Tasks in Public Health
[ "Nikhil Behari", "Edwin Zhang", "YUNFAN ZHAO", "Aparna Taneja", "Dheeraj Mysore Nagaraj", "Milind Tambe" ]
NeurIPS.cc/2024/Conference
2402.14807
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Ugr0yPzY71
@inproceedings{ cascioli2024faster, title={Faster Repeated Evasion Attacks in Tree Ensembles}, author={Lorenzo Cascioli and Laurens Devos and Ondrej Kuzelka and Jesse Davis}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Ugr0yPzY71} }
Tree ensembles are one of the most widely used model classes. However, these models are susceptible to adversarial examples, i.e., slightly perturbed examples that elicit a misprediction. There has been significant research on designing approaches to construct such examples for tree ensembles. But this is a computationally challenging problem that often must be solved a large number of times (e.g., for all examples in a training set). This is compounded by the fact that current approaches attempt to find such examples from scratch. In contrast, we exploit the fact that multiple similar problems are being solved. Specifically, our approach exploits the insight that adversarial examples for tree ensembles tend to perturb a consistent but relatively small set of features. We show that we can quickly identify this set of features and use this knowledge to speed up the construction of adversarial examples.
Faster Repeated Evasion Attacks in Tree Ensembles
[ "Lorenzo Cascioli", "Laurens Devos", "Ondrej Kuzelka", "Jesse Davis" ]
NeurIPS.cc/2024/Conference
2402.08586
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UfLH4T676K
@inproceedings{ li2024improving, title={Improving Adaptivity via Over-Parameterization in Sequence Models}, author={Yicheng Li and Qian Lin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UfLH4T676K} }
It is well known that eigenfunctions of a kernel play a crucial role in kernel regression. Through several examples, we demonstrate that even with the same set of eigenfunctions, the order of these functions significantly impacts regression outcomes. Simplifying the model by diagonalizing the kernel, we introduce an over-parameterized gradient descent in the realm of sequence models to capture the effects of various orders of a fixed set of eigenfunctions. This method is designed to explore the impact of varying eigenfunction orders. Our theoretical results show that the over-parameterized gradient flow can adapt to the underlying structure of the signal and significantly outperform the vanilla gradient flow method. Moreover, we also demonstrate that deeper over-parameterization can further enhance the generalization capability of the model. These results not only provide a new perspective on the benefits of over-parameterization but also offer insights into the adaptivity and generalization potential of neural networks beyond the kernel regime.
Improving Adaptivity via Over-Parameterization in Sequence Models
[ "Yicheng Li", "Qian Lin" ]
NeurIPS.cc/2024/Conference
2409.00894
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UekHycx0lz
@inproceedings{ yu2024dreamsteerer, title={DreamSteerer: Enhancing Source Image Conditioned Editability using Personalized Diffusion Models}, author={Zhengyang Yu and Zhaoyuan Yang and Jing Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UekHycx0lz} }
Recent text-to-image (T2I) personalization methods have shown great promise in teaching a diffusion model user-specified concepts given a few images, so that the acquired concepts can be reused in a novel context. With massive efforts being dedicated to personalized generation, a promising extension is personalized editing, namely editing an image using personalized concepts, which can provide a more precise guidance signal than traditional textual guidance. To this end, one straightforward solution is to incorporate a personalized diffusion model with a text-driven editing framework. However, such a solution often shows unsatisfactory editability on the source image. To address this, we propose DreamSteerer, a plug-in method for augmenting existing T2I personalization methods. Specifically, we enhance the source image conditioned editability of a personalized diffusion model via a novel Editability Driven Score Distillation (EDSD) objective. Moreover, we identify a mode trapping issue with EDSD, and propose a mode shifting regularization with spatial feature guided sampling to avoid this issue. We further employ two key modifications on the Delta Denoising Score framework that enable high-fidelity local editing with personalized concepts. Extensive experiments validate that DreamSteerer can significantly improve the editability of several T2I personalization baselines while being computationally efficient.
DreamSteerer: Enhancing Source Image Conditioned Editability using Personalized Diffusion Models
[ "Zhengyang Yu", "Zhaoyuan Yang", "Jing Zhang" ]
NeurIPS.cc/2024/Conference
2410.11208
[ "https://github.com/Dijkstra14/DreamSteerer" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UdxpjKO2F9
@inproceedings{ teoh2024improving, title={Improving Environment Novelty Quantification for Effective Unsupervised Environment Design}, author={Jayden Teoh and Wenjun Li and Pradeep Varakantham}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UdxpjKO2F9} }
Unsupervised Environment Design (UED) formalizes the problem of autocurricula through interactive training between a teacher agent and a student agent. The teacher generates new training environments with high learning potential, curating an adaptive curriculum that strengthens the student's ability to handle unseen scenarios. Existing UED methods mainly rely on *regret*, a metric that measures the difference between the agent's optimal and actual performance, to guide curriculum design. Regret-driven methods generate curricula that progressively increase environment complexity for the student but overlook environment *novelty* — a critical element for enhancing an agent's generalizability. Measuring environment novelty is especially challenging due to the underspecified nature of environment parameters in UED, and existing approaches face significant limitations. To address this challenge, this paper introduces the *Coverage-based Evaluation of Novelty In Environment* (CENIE) framework. CENIE proposes a scalable, domain-agnostic, and curriculum-aware approach to quantifying environment novelty by leveraging the student's state-action space coverage from previous curriculum experiences. We then propose an implementation of CENIE that models this coverage and measures environment novelty using Gaussian Mixture Models. By integrating both regret and novelty as complementary objectives for curriculum design, CENIE facilitates effective exploration across the state-action space while progressively increasing curriculum complexity. Empirical evaluations demonstrate that augmenting existing regret-based UED algorithms with CENIE achieves state-of-the-art performance across multiple benchmarks, underscoring the effectiveness of novelty-driven autocurricula for robust generalization.
Improving Environment Novelty Quantification for Effective Unsupervised Environment Design
[ "Jayden Teoh", "Wenjun Li", "Pradeep Varakantham" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=UddVRqTrjt
@inproceedings{ nehme2024hierarchical, title={Hierarchical Uncertainty Exploration via Feedforward Posterior Trees}, author={Elias Nehme and Rotem Mulayoff and Tomer Michaeli}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UddVRqTrjt} }
When solving ill-posed inverse problems, one often desires to explore the space of potential solutions rather than be presented with a single plausible reconstruction. Valuable insights into these feasible solutions and their associated probabilities are embedded in the posterior distribution. However, when confronted with data of high dimensionality (such as images), visualizing this distribution becomes a formidable challenge, necessitating the application of effective summarization techniques before user examination. In this work, we introduce a new approach for visualizing posteriors across multiple levels of granularity using *tree*-valued predictions. Our method predicts a tree-valued hierarchical summarization of the posterior distribution for any input measurement, in a single forward pass of a neural network. We showcase the efficacy of our approach across diverse datasets and image restoration challenges, highlighting its prowess in uncertainty quantification and visualization. Our findings reveal that our method performs comparably to a baseline that hierarchically clusters samples from a diffusion-based posterior sampler, yet achieves this with orders of magnitude greater speed. Code and examples are available at our [webpage](https://eliasnehme.github.io/PosteriorTrees/).
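As an illustration of the sampling-based baseline mentioned above (hierarchically clustering posterior samples), here is a minimal two-level sketch using KMeans; the level sizes and clustering routine are assumptions, and the paper's contribution is predicting such a tree in a single forward pass rather than building it from samples.

```python
# Hedged sketch of a two-level posterior tree built from posterior samples.
import numpy as np
from sklearn.cluster import KMeans

def posterior_tree(samples, k1=3, k2=2):
    """samples: (N, D) flattened posterior samples (e.g., candidate restorations)."""
    top = KMeans(n_clusters=k1, n_init=10, random_state=0).fit(samples)
    tree = {"root": samples.mean(axis=0), "children": []}
    for c in range(k1):
        members = samples[top.labels_ == c]
        sub = KMeans(n_clusters=min(k2, len(members)), n_init=10,
                     random_state=0).fit(members)
        tree["children"].append({
            "mean": members.mean(axis=0),            # level-1 node
            "weight": len(members) / len(samples),   # approximate probability mass
            "children": list(sub.cluster_centers_),  # level-2 nodes
        })
    return tree

tree = posterior_tree(np.random.default_rng(0).normal(size=(256, 64)))
```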
Hierarchical Uncertainty Exploration via Feedforward Posterior Trees
[ "Elias Nehme", "Rotem Mulayoff", "Tomer Michaeli" ]
NeurIPS.cc/2024/Conference
2405.15719
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UdXE5V2d0O
@inproceedings{ park2024direct, title={Direct Unlearning Optimization for Robust and Safe Text-to-Image Models}, author={Yong-Hyun Park and Sangdoo Yun and Jin-Hwa Kim and Junho Kim and Geonhui Jang and Yonghyun Jeong and Junghyo Jo and Gayoung Lee}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UdXE5V2d0O} }
Recent advancements in text-to-image (T2I) models have greatly benefited from large-scale datasets, but they also pose significant risks due to the potential generation of unsafe content. To mitigate this issue, researchers have proposed unlearning techniques that attempt to induce the model to unlearn potentially harmful prompts. However, these methods are easily bypassed by adversarial attacks, making them unreliable for ensuring the safety of generated images. In this paper, we propose Direct Unlearning Optimization (DUO), a novel framework for removing NSFW content from T2I models while preserving their performance on unrelated topics. DUO employs a preference optimization approach using curated paired image data, ensuring that the model learns to remove unsafe visual concepts while retaining unrelated features. Furthermore, we introduce an output-preserving regularization term to maintain the model's generative capabilities on safe content. Extensive experiments demonstrate that DUO can robustly defend against various state-of-the-art red teaming methods without significant performance degradation on unrelated topics, as measured by FID and CLIP scores. Our work contributes to the development of safer and more reliable T2I models, paving the way for their responsible deployment in both closed-source and open-source scenarios.
Direct Unlearning Optimization for Robust and Safe Text-to-Image Models
[ "Yong-Hyun Park", "Sangdoo Yun", "Jin-Hwa Kim", "Junho Kim", "Geonhui Jang", "Yonghyun Jeong", "Junghyo Jo", "Gayoung Lee" ]
NeurIPS.cc/2024/Conference
2407.21035
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UcdaNf2PKL
@inproceedings{ zhao2024avernet, title={AverNet: All-in-one Video Restoration for Time-varying Unknown Degradations}, author={Haiyu Zhao and Lei Tian and Xinyan Xiao and Peng Hu and Yuanbiao Gou and Xi Peng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UcdaNf2PKL} }
Traditional video restoration approaches were designed to recover clean videos from a specific type of degradation, making them ineffective in handling multiple unknown types of degradation. To address this issue, several studies have been conducted and have shown promising results. However, these studies overlook that the degradations in video usually change over time, dubbed time-varying unknown degradations (TUD). To tackle such a less-touched challenge, we propose an innovative method, termed the All-in-one VidEo Restoration Network (AverNet), which comprises two core modules, i.e., a Prompt-Guided Alignment (PGA) module and a Prompt-Conditioned Enhancement (PCE) module. Specifically, PGA addresses the issue of pixel shifts caused by time-varying degradations by learning and utilizing prompts to align video frames at the pixel level. To handle multiple unknown degradations, PCE recasts the problem as a conditional restoration task by implicitly establishing a conditional map between degradations and ground truths. Thanks to the collaboration between the PGA and PCE modules, AverNet empirically demonstrates its effectiveness in recovering videos from TUD. Extensive experiments are carried out on two synthesized datasets featuring seven types of degradations with random corruption levels. The code is available at https://github.com/XLearning-SCU/2024-NeurIPS-AverNet.
AverNet: All-in-one Video Restoration for Time-varying Unknown Degradations
[ "Haiyu Zhao", "Lei Tian", "Xinyan Xiao", "Peng Hu", "Yuanbiao Gou", "Xi Peng" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UahrHR5HQh
@inproceedings{ eijkelboom2024variational, title={Variational Flow Matching for Graph Generation}, author={Floor Eijkelboom and Grigory Bartosh and Christian A. Naesseth and Max Welling and Jan-Willem van de Meent}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UahrHR5HQh} }
We present a formulation of flow matching as variational inference, which we refer to as variational flow matching (VFM). We use this formulation to develop CatFlow, a flow matching method for categorical data that is easy to implement, computationally efficient, and achieves strong results on graph generation tasks. In VFM, the objective is to approximate the posterior probability path, which is a distribution over possible end points of a trajectory. VFM admits both the original flow matching objective and the CatFlow objective as special cases. We also relate VFM to score-based models, in which the dynamics are stochastic rather than deterministic, and derive a bound on the model likelihood based on a reweighted VFM objective. We evaluate CatFlow on one abstract graph generation task and two molecular generation tasks. In all cases, CatFlow exceeds or matches performance of the current state-of-the-art models.
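For context, a minimal sketch of the original (conditional) flow matching objective, which the abstract notes VFM admits as a special case; the linear interpolation path and the tiny velocity network are assumptions, and CatFlow's categorical variant is not shown.

```python
# Hedged sketch of a standard conditional flow matching loss with a straight
# path x_t = (1 - t) x0 + t x1 and target velocity x1 - x0.
import torch

def flow_matching_loss(v_theta, x1):
    """v_theta: callable (x_t, t) -> predicted velocity; x1: data batch (B, D)."""
    x0 = torch.randn_like(x1)            # noise endpoint
    t = torch.rand(x1.shape[0], 1)       # per-sample time in [0, 1]
    x_t = (1 - t) * x0 + t * x1          # point on the straight path
    target = x1 - x0                     # constant velocity of that path
    return ((v_theta(x_t, t) - target) ** 2).mean()

# Toy usage with a small MLP velocity field (illustrative architecture).
net = torch.nn.Sequential(torch.nn.Linear(9, 64), torch.nn.SiLU(), torch.nn.Linear(64, 8))
v = lambda x, t: net(torch.cat([x, t], dim=-1))
loss = flow_matching_loss(v, torch.randn(32, 8))
loss.backward()
```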
Variational Flow Matching for Graph Generation
[ "Floor Eijkelboom", "Grigory Bartosh", "Christian A. Naesseth", "Max Welling", "Jan-Willem van de Meent" ]
NeurIPS.cc/2024/Conference
2406.04843
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UaJErAOssN
@inproceedings{ li2024state, title={State Space Models on Temporal Graphs: A First-Principles Study}, author={Jintang Li and Ruofan Wu and Xinzhou Jin and Boqun Ma and Liang Chen and Zibin Zheng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UaJErAOssN} }
Over the past few years, research on deep graph learning has shifted from static graphs to temporal graphs in response to real-world complex systems that exhibit dynamic behaviors. In practice, temporal graphs are formalized as an ordered sequence of static graph snapshots observed at discrete time points. Sequence models such as RNNs or Transformers have long been the predominant backbone networks for modeling such temporal graphs. Yet, despite the promising results, RNNs struggle with long-range dependencies, while transformers are burdened by quadratic computational complexity. Recently, state space models (SSMs), which are framed as discretized representations of an underlying continuous-time linear dynamical system, have garnered substantial attention and achieved breakthrough advancements in independent sequence modeling. In this work, we undertake a principled investigation that extends SSM theory to temporal graphs by integrating structural information into the online approximation objective via the adoption of a Laplacian regularization term. The emergent continuous-time system introduces novel algorithmic challenges, thereby necessitating our development of GraphSSM, a graph state space model for modeling the dynamics of temporal graphs. Extensive experimental results demonstrate the effectiveness of our GraphSSM framework across various temporal graph benchmarks.
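For readers unfamiliar with SSMs, a minimal sketch of the discretized linear state space recurrence that such layers build on; the matrices and the sequential scan are illustrative, and GraphSSM's Laplacian-regularized, graph-structured extension is not reproduced here.

```python
# Hedged sketch of a discretized linear SSM: h_k = A_bar h_{k-1} + B_bar x_k, y_k = C h_k.
import numpy as np

def ssm_scan(A_bar, B_bar, C, xs):
    """A_bar: (n, n), B_bar: (n, d), C: (m, n), xs: (T, d) input sequence."""
    h = np.zeros(A_bar.shape[0])
    ys = []
    for x in xs:                 # sequential scan; practical SSMs parallelize this step
        h = A_bar @ h + B_bar @ x
        ys.append(C @ h)
    return np.stack(ys)

ys = ssm_scan(0.9 * np.eye(4), 0.1 * np.ones((4, 2)), np.ones((1, 4)),
              np.random.default_rng(0).normal(size=(16, 2)))
```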
State Space Models on Temporal Graphs: A First-Principles Study
[ "Jintang Li", "Ruofan Wu", "Xinzhou Jin", "Boqun Ma", "Liang Chen", "Zibin Zheng" ]
NeurIPS.cc/2024/Conference
2406.00943
[ "https://github.com/edisonleeeee/graphssm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UZIHW8eFRp
@inproceedings{ liu2024a, title={A Tractable Inference Perspective of Offline {RL}}, author={Xuejie Liu and Anji Liu and Guy Van den Broeck and Yitao Liang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UZIHW8eFRp} }
A popular paradigm for offline Reinforcement Learning (RL) tasks is to first fit the offline trajectories to a sequence model, and then prompt the model for actions that lead to high expected return. In addition to obtaining accurate sequence models, this paper highlights that tractability, the ability to exactly and efficiently answer various probabilistic queries, plays an important role in offline RL. Specifically, due to the fundamental stochasticity from the offline data-collection policies and the environment dynamics, highly non-trivial conditional/constrained generation is required to elicit rewarding actions. While it is still possible to approximate such queries, we observe that such crude estimates undermine the benefits brought by expressive sequence models. To overcome this problem, this paper proposes Trifle (Tractable Inference for Offline RL), which leverages modern tractable generative models to bridge the gap between good sequence models and high expected returns at evaluation time. Empirically, Trifle achieves $7$ state-of-the-art scores and the highest average scores in $9$ Gym-MuJoCo benchmarks against strong baselines. Further, Trifle significantly outperforms prior approaches in stochastic environments and safe RL tasks with minimum algorithmic modifications.
A Tractable Inference Perspective of Offline RL
[ "Xuejie Liu", "Anji Liu", "Guy Van den Broeck", "Yitao Liang" ]
NeurIPS.cc/2024/Conference
2311.00094
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UXuBzWoZGK
@inproceedings{ kwa2024catastrophic, title={Catastrophic Goodhart: regularizing {RLHF} with {KL} divergence does not mitigate heavy-tailed reward misspecification}, author={Thomas Kwa and Drake Thomas and Adri{\`a} Garriga-Alonso}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UXuBzWoZGK} }
When applying reinforcement learning from human feedback (RLHF), the reward is learned from data and, therefore, always has some error. It is common to mitigate this by regularizing the policy with KL divergence from a base model, with the hope that balancing reward with regularization will achieve desirable outcomes despite this reward misspecification. We show that when the reward function has light-tailed error, optimal policies under less restrictive KL penalties achieve arbitrarily high utility. However, if error is heavy-tailed, some policies obtain arbitrarily high reward despite achieving no more utility than the base model--a phenomenon we call catastrophic Goodhart. We adapt a discrete optimization method to measure the tails of reward models, finding that they are consistent with light-tailed error. However, the pervasiveness of heavy-tailed distributions in many real-world applications indicates that future sources of RL reward could have heavy-tailed error, increasing the likelihood of reward hacking even with KL regularization.
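As a toy illustration of the failure mode described above (not the paper's KL-regularized analysis), the snippet below selects the best of n candidates by a misspecified proxy reward and compares the realized utility under light-tailed versus heavy-tailed reward error; all distributions are assumptions chosen for illustration.

```python
# Hedged sketch: proxy = true utility + error; pick the proxy-optimal candidate.
import numpy as np

rng = np.random.default_rng(0)

def realized_utility(error_sampler, n_candidates=10_000, trials=200):
    utils = []
    for _ in range(trials):
        utility = rng.normal(size=n_candidates)        # true utility of each candidate
        proxy = utility + error_sampler(n_candidates)  # misspecified reward model
        utils.append(utility[np.argmax(proxy)])        # utility of the selected candidate
    return float(np.mean(utils))

light = realized_utility(lambda n: rng.normal(size=n))
heavy = realized_utility(lambda n: rng.standard_t(df=1.5, size=n))
print(f"light-tailed error: {light:.2f}   heavy-tailed error: {heavy:.2f}")
# With heavy-tailed error the argmax is usually an error outlier, so the realized
# utility stays near zero -- the Goodhart-style failure described above.
```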
Catastrophic Goodhart: regularizing RLHF with KL divergence does not mitigate heavy-tailed reward misspecification
[ "Thomas Kwa", "Drake Thomas", "Adrià Garriga-Alonso" ]
NeurIPS.cc/2024/Conference
2407.14503
[ "https://github.com/tkwa/catastrophic-goodhart" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UXEo3uNNIX
@inproceedings{ li2024coupled, title={Coupled Mamba: Enhanced Multimodal Fusion with Coupled State Space Model}, author={Wenbing Li and Hang Zhou and Junqing Yu and Zikai Song and Wei Yang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UXEo3uNNIX} }
The essence of multi-modal fusion lies in exploiting the complementary information inherent in diverse modalities. However, most prevalent fusion methods rely on traditional neural architectures and are inadequately equipped to capture the dynamics of interactions across modalities, particularly in the presence of complex intra- and inter-modality correlations. Recent advancements in State Space Models (SSMs), notably exemplified by the Mamba model, have emerged as promising contenders. In particular, their state-evolving process implies a stronger modality fusion paradigm, making multi-modal fusion on SSMs an appealing direction. However, fusing multiple modalities is challenging for SSMs due to their hardware-aware parallelism designs. To this end, this paper proposes the Coupled SSM model for coupling the state chains of multiple modalities while maintaining the independence of intra-modality state processes. Specifically, in our coupled scheme, we devise an inter-modal hidden state transition scheme, in which the current state depends on the states of its own chain and of the neighbouring chains at the previous time step. To fully comply with hardware-aware parallelism, we obtain the global convolution kernel by deriving the state equation while introducing the historical state. Extensive experiments on CMU-MOSEI, CH-SIMS, and CH-SIMSV2 with multi-domain input verify the effectiveness of our model compared to current state-of-the-art methods, improving F1-score by 0.4%, 0.9%, and 2.3% on the three datasets respectively, with 49% faster inference and 83.7% GPU memory savings. The results demonstrate that the Coupled Mamba model is capable of enhanced multi-modal fusion.
Coupled Mamba: Enhanced Multimodal Fusion with Coupled State Space Model
[ "Wenbing Li", "Hang Zhou", "Junqing Yu", "Zikai Song", "Wei Yang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UWUUVKtKeu
@inproceedings{ ding2024diffusionbased, title={Diffusion-based Reinforcement Learning via Q-weighted Variational Policy Optimization}, author={Shutong Ding and Ke Hu and Zhenhao Zhang and Kan Ren and Weinan Zhang and Jingyi Yu and Jingya Wang and Ye Shi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UWUUVKtKeu} }
Diffusion models have garnered widespread attention in Reinforcement Learning (RL) for their powerful expressiveness and multimodality. It has been verified that utilizing diffusion policies can significantly improve the performance of RL algorithms in continuous control tasks by overcoming the limitations of unimodal policies, such as Gaussian policies. Furthermore, the multimodality of diffusion policies also shows the potential of providing the agent with enhanced exploration capabilities. However, existing works mainly focus on applying diffusion policies in offline RL, while their incorporation into online RL has been less investigated. The diffusion model's training objective, known as the variational lower bound, cannot be applied directly in online RL due to the unavailability of 'good' samples (actions). To harmonize the diffusion model with online RL, we propose a novel model-free diffusion-based online RL algorithm named Q-weighted Variational Policy Optimization (QVPO). Specifically, we introduce the Q-weighted variational loss and its approximate implementation in practice. Notably, this loss is shown to be a tight lower bound of the policy objective. To further enhance the exploration capability of the diffusion policy, we design a special entropy regularization term. Unlike with Gaussian policies, the log-likelihood of diffusion policies is inaccessible; thus this entropy term is nontrivial to compute. Moreover, to reduce the large variance of diffusion policies, we also develop an efficient behavior policy through action selection. This can further improve its sample efficiency during online interaction. Consequently, the QVPO algorithm leverages the exploration capabilities and multimodality of diffusion policies, preventing the RL agent from converging to a sub-optimal policy. To verify the effectiveness of QVPO, we conduct comprehensive experiments on MuJoCo continuous control benchmarks. The final results demonstrate that QVPO achieves state-of-the-art performance in terms of both cumulative reward and sample efficiency.
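As a rough, hedged sketch of what a Q-weighted denoising objective for a diffusion policy could look like: the clipping, normalization, and noise schedule below are assumptions for illustration only, and QVPO's exact loss and its approximate implementation may differ.

```python
# Hedged sketch: re-weight a diffusion denoising loss on sampled actions by
# non-negative Q-based weights so that high-value actions dominate the update.
import torch

def q_weighted_loss(eps_model, q_values, states, actions, T=100):
    """eps_model: (a_t, s, t) -> predicted noise; q_values: (B,); actions: (B, A)."""
    w = torch.clamp(q_values, min=0.0)                 # illustrative non-negative weights
    w = w / (w.sum() + 1e-8)
    t = torch.randint(0, T, (actions.shape[0],))
    alpha_bar = torch.cos(0.5 * torch.pi * t.float() / T) ** 2   # toy noise schedule
    noise = torch.randn_like(actions)
    a_t = alpha_bar.sqrt()[:, None] * actions + (1 - alpha_bar).sqrt()[:, None] * noise
    per_sample = ((eps_model(a_t, states, t) - noise) ** 2).mean(dim=-1)
    return (w * per_sample).sum()

toy = lambda a_t, s, t: torch.zeros_like(a_t)          # placeholder noise predictor
loss = q_weighted_loss(toy, torch.rand(8), torch.randn(8, 4), torch.randn(8, 2))
```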
Diffusion-based Reinforcement Learning via Q-weighted Variational Policy Optimization
[ "Shutong Ding", "Ke Hu", "Zhenhao Zhang", "Kan Ren", "Weinan Zhang", "Jingyi Yu", "Jingya Wang", "Ye Shi" ]
NeurIPS.cc/2024/Conference
2405.16173
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UVjuYBSbCN
@inproceedings{ lee2024toward, title={Toward a Well-Calibrated Discrimination via Survival Outcome-Aware Contrastive Learning}, author={Dongjoon Lee and Hyeryn Park and Changhee Lee}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UVjuYBSbCN} }
Previous deep learning approaches for survival analysis have primarily relied on ranking losses to improve discrimination performance, which often comes at the expense of calibration performance. To address such an issue, we propose a novel contrastive learning approach specifically designed to enhance discrimination without sacrificing calibration. Our method employs weighted sampling within a contrastive learning framework, assigning lower penalties to samples with similar survival outcomes. This aligns well with the assumption that patients with similar event times share similar clinical statuses. Consequently, when augmented with the commonly used negative log-likelihood loss, our approach significantly improves discrimination performance without directly manipulating the model outputs, thereby achieving better calibration. Experiments on multiple real-world clinical datasets demonstrate that our method outperforms state-of-the-art deep survival models in both discrimination and calibration. Comprehensive ablation studies with quantitative and qualitative analyses further validate the effectiveness of our approach.
Toward a Well-Calibrated Discrimination via Survival Outcome-Aware Contrastive Learning
[ "Dongjoon Lee", "Hyeryn Park", "Changhee Lee" ]
NeurIPS.cc/2024/Conference
2410.11340
[ "https://github.com/dongzza97/ConSurv" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UVAq3uJ0gc
@inproceedings{ liu2024gradientfree, title={Gradient-Free Methods for Nonconvex Nonsmooth Stochastic Compositional Optimization}, author={Zhuanghua Liu and Luo Luo and Bryan Kian Hsiang Low}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UVAq3uJ0gc} }
Stochastic compositional optimization (SCO) is popular in many real-world applications, including risk management, reinforcement learning, and meta-learning. However, most of the previous methods for SCO require the smoothness assumption on both the outer and inner functions, which limits their applications to a wider range of problems. In this paper, we study the SCO problem in which both the outer and inner functions are Lipschitz continuous but possibly nonconvex and nonsmooth. In particular, we propose gradient-free stochastic methods for finding the $(\delta, \epsilon)$-Goldstein stationary points of such problems with non-asymptotic convergence rates. Our results also lead to an improved convergence rate for the convex nonsmooth SCO problem. Furthermore, we conduct numerical experiments to demonstrate the effectiveness of the proposed methods.
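For intuition, the standard two-point zeroth-order gradient estimator used as a building block in gradient-free nonsmooth optimization is sketched below; the paper's estimator for the compositional (nested) setting is more involved, and the smoothing radius here is an arbitrary choice.

```python
# Hedged sketch: two-point randomized gradient estimator of a smoothed surrogate of f.
import numpy as np

def zo_gradient(f, x, delta=1e-2, rng=np.random.default_rng(0)):
    d = x.shape[0]
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                          # uniform direction on the sphere
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

f = lambda x: np.abs(x).sum()                       # nonsmooth test function
print(zo_gradient(f, np.array([1.0, -2.0, 0.5])))   # approximate descent direction
```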
Gradient-Free Methods for Nonconvex Nonsmooth Stochastic Compositional Optimization
[ "Zhuanghua Liu", "Luo Luo", "Bryan Kian Hsiang Low" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UTrIEHobXI
@inproceedings{ song2024geometry, title={Geometry Cloak: Preventing {TGS}-based 3D Reconstruction from Copyrighted Images}, author={Qi Song and Ziyuan Luo and Ka Chun Cheung and Simon See and Renjie Wan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UTrIEHobXI} }
Single-view 3D reconstruction methods like Triplane Gaussian Splatting (TGS) have enabled high-quality 3D model generation from just a single image input within seconds. However, this capability raises concerns about potential misuse, where malicious users could exploit TGS to create unauthorized 3D models from copyrighted images. To prevent such infringement, we propose a novel image protection approach that embeds invisible geometry perturbations, termed ``geometry cloaks'', into images before supplying them to TGS. These carefully crafted perturbations encode a customized message that is revealed when TGS attempts 3D reconstructions of the cloaked image. Unlike conventional adversarial attacks that simply degrade output quality, our method forces TGS to fail the 3D reconstruction in a specific way - by generating an identifiable customized pattern that acts as a watermark. This watermark allows copyright holders to assert ownership over any attempted 3D reconstructions made from their protected images. Extensive experiments have verified the effectiveness of our geometry cloak.
Geometry Cloak: Preventing TGS-based 3D Reconstruction from Copyrighted Images
[ "Qi Song", "Ziyuan Luo", "Ka Chun Cheung", "Simon See", "Renjie Wan" ]
NeurIPS.cc/2024/Conference
2410.22705
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UTNZKl5BUc
@inproceedings{ saberi2024gradual, title={Gradual Domain Adaptation via Manifold-Constrained Distributionally Robust Optimization}, author={seyed amir hossein saberi and Amir Najafi and Amin Behjati and Ala Emrani and Yasaman Zolfimoselo and Mahdi Shadrooy and Abolfazl Motahari and Babak Khalaj}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UTNZKl5BUc} }
The aim of this paper is to address the challenge of gradual domain adaptation within a class of manifold-constrained data distributions. In particular, we consider a sequence of $T\ge2$ data distributions $P_1,\ldots,P_T$ undergoing a gradual shift, where each pair of consecutive measures $P_i,P_{i+1}$ are close to each other in Wasserstein distance. We have a supervised dataset of size $n$ sampled from $P_0$, while for the subsequent distributions in the sequence, only unlabeled i.i.d. samples are available. Moreover, we assume that all distributions exhibit a known favorable attribute, such as (but not limited to) having intra-class soft/hard margins. In this context, we propose a methodology rooted in Distributionally Robust Optimization (DRO) with an adaptive Wasserstein radius. We theoretically show that this method guarantees the classification error across all $P_i$s can be suitably bounded. Our bounds rely on a newly introduced {\it {compatibility}} measure, which fully characterizes the error propagation dynamics along the sequence. Specifically, for inadequately constrained distributions, the error can exponentially escalate as we progress through the gradual shifts. Conversely, for appropriately constrained distributions, the error can be demonstrated to be linear or even entirely eradicated. We have substantiated our theoretical findings through several experimental results.
Gradual Domain Adaptation via Manifold-Constrained Distributionally Robust Optimization
[ "seyed amir hossein saberi", "Amir Najafi", "Amin Behjati", "Ala Emrani", "Yasaman Zolfimoselo", "Mahdi Shadrooy", "Abolfazl Motahari", "Babak Khalaj" ]
NeurIPS.cc/2024/Conference
2410.14061
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=URyeU8mwz1
@inproceedings{ merlis2024the, title={The Value of Reward Lookahead in Reinforcement Learning}, author={Nadav Merlis and Dorian Baudry and Vianney Perchet}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=URyeU8mwz1} }
In reinforcement learning (RL), agents sequentially interact with changing environments while aiming to maximize the obtained rewards. Usually, rewards are observed only _after_ acting, and so the goal is to maximize the _expected_ cumulative reward. Yet, in many practical settings, reward information is observed in advance -- prices are observed before performing transactions; nearby traffic information is partially known; and goals are oftentimes given to agents prior to the interaction. In this work, we aim to quantifiably analyze the value of such future reward information through the lens of _competitive analysis_. In particular, we measure the ratio between the value of standard RL agents and that of agents with partial future-reward lookahead. We characterize the worst-case reward distribution and derive exact ratios for the worst-case reward expectations. Surprisingly, the resulting ratios relate to known quantities in offline RL and reward-free exploration. We further provide tight bounds for the ratio given the worst-case dynamics. Our results cover the full spectrum between observing the immediate rewards before acting to observing all the rewards before the interaction starts.
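A one-step bandit analogue (an assumption for illustration, not the paper's full MDP setting) makes the quantity concrete: a standard agent can only earn the best expected reward, while an agent with reward lookahead earns the expected best realized reward.

```python
# Hedged sketch: compare max_a E[r_a] (no lookahead) with E[max_a r_a] (lookahead).
import numpy as np

rng = np.random.default_rng(0)
rewards = rng.exponential(scale=1.0, size=(100_000, 5))  # 5 arms, i.i.d. reward draws

no_lookahead = rewards.mean(axis=0).max()   # best fixed arm in expectation
lookahead = rewards.max(axis=1).mean()      # pick the best arm after seeing rewards
print(f"no lookahead: {no_lookahead:.3f}  lookahead: {lookahead:.3f}  "
      f"ratio: {no_lookahead / lookahead:.3f}")
```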
The Value of Reward Lookahead in Reinforcement Learning
[ "Nadav Merlis", "Dorian Baudry", "Vianney Perchet" ]
NeurIPS.cc/2024/Conference
2403.11637
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=URQXbwM0Md
@inproceedings{ song2024cryptographic, title={Cryptographic Hardness of Score Estimation}, author={Min Jae Song}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=URQXbwM0Md} }
We show that L2-accurate score estimation, in the absence of strong assumptions on the data distribution, is computationally hard even when sample complexity is polynomial in the relevant problem parameters. Our reduction builds on the result of Chen et al. (ICLR 2023), who showed that the problem of generating samples from an unknown data distribution reduces to L2-accurate score estimation. Our hard-to-estimate distributions are the "Gaussian pancakes" distributions, originally due to Diakonikolas et al. (FOCS 2017), which have been shown to be computationally indistinguishable from the standard Gaussian under widely believed hardness assumptions from lattice-based cryptography (Bruna et al., STOC 2021; Gupte et al., FOCS 2022).
Cryptographic Hardness of Score Estimation
[ "Min Jae Song" ]
NeurIPS.cc/2024/Conference
2404.03272
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UQflshLbZv
@inproceedings{ zeng2024hairdiffusion, title={HairDiffusion: Vivid Multi-Colored Hair Editing via Latent Diffusion}, author={Yu Zeng and Yang Zhang and Jiachen Liu and Linlin Shen and Kaijun Deng and Weizhao He and Jinbao Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UQflshLbZv} }
Hair editing is a critical image synthesis task that aims to edit hair color and hairstyle using text descriptions or reference images, while preserving irrelevant attributes (e.g., identity, background, clothing). Many existing methods are based on StyleGAN to address this task. However, due to the limited spatial distribution of StyleGAN, it struggles with multiple hair color editing and facial preservation. Considering the advancements in diffusion models, we utilize Latent Diffusion Models (LDMs) for hairstyle editing. Our approach introduces Multi-stage Hairstyle Blend (MHB), effectively separating control of hair color and hairstyle in the diffusion latent space. Additionally, we train a warping module to align the hair color with the target region. To further enhance multi-color hairstyle editing, we fine-tune a CLIP model using a multi-color hairstyle dataset. Our method not only tackles the complexity of multi-color hairstyles but also addresses the challenge of preserving original colors during diffusion editing. Extensive experiments showcase the superiority of our method in editing multi-color hairstyles while preserving facial attributes given textual descriptions and reference images.
HairDiffusion: Vivid Multi-Colored Hair Editing via Latent Diffusion
[ "Yu Zeng", "Yang Zhang", "Jiachen Liu", "Linlin Shen", "Kaijun Deng", "Weizhao He", "Jinbao Wang" ]
NeurIPS.cc/2024/Conference
2410.21789
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UPxmISfNCO
@inproceedings{ sun2024efficiency, title={Efficiency for Free: Ideal Data Are Transportable Representations}, author={Peng Sun and Yi Jiang and Tao Lin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UPxmISfNCO} }
Data, the seminal opportunity and challenge in modern machine learning, currently constrains the scalability of representation learning and impedes the pace of model evolution. In this work, we investigate the efficiency properties of data from both optimization and generalization perspectives. Our theoretical and empirical analysis reveals an unexpected finding: for a given task, utilizing a publicly available, task- and architecture-agnostic model (referred to as the `prior model' in this paper) can effectively produce efficient data. Building on this insight, we propose the Representation Learning Accelerator (ReLA), which promotes the formation and utilization of efficient data, thereby accelerating representation learning. Utilizing a ResNet-18 pre-trained on CIFAR-10 as a prior model to inform ResNet-50 training on ImageNet-1K reduces computational costs by $50\%$ while maintaining the same accuracy as the model trained with the original BYOL, which requires $100\%$ cost. Our code is available at: \url{https://github.com/LINs-lab/ReLA}.
Efficiency for Free: Ideal Data Are Transportable Representations
[ "Peng Sun", "Yi Jiang", "Tao Lin" ]
NeurIPS.cc/2024/Conference
2405.14669
[ "https://github.com/lins-lab/rela" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UPxFYvHsyN
@inproceedings{ biswas2024tfsnerf, title={{TFS}-Ne{RF}: Template-Free Ne{RF} for Semantic 3D Reconstruction of Dynamic Scene}, author={Sandika Biswas and Qianyi Wu and Biplab Banerjee and Hamid Rezatofighi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UPxFYvHsyN} }
Despite advancements in Neural Implicit models for 3D surface reconstruction, handling dynamic environments with interactions between arbitrary rigid, non-rigid, or deformable entities remains challenging. Generic reconstruction methods adaptable to such dynamic scenes often require additional inputs like depth or optical flow, or rely on pre-trained image features for reasonable outcomes. These methods typically use latent codes to capture frame-by-frame deformations. Another set of dynamic scene reconstruction methods is entity-specific, mostly focusing on humans, and relies on template models. In contrast, some template-free methods bypass these requirements and adopt traditional LBS (Linear Blend Skinning) weights for a detailed representation of deformable object motions, although they involve complex optimizations leading to lengthy training times. As a remedy, this paper introduces TFS-NeRF, a template-free 3D semantic NeRF for dynamic scenes captured from sparse or single-view RGB videos, which handles interactions between two entities and is more time-efficient than other LBS-based approaches. Our framework uses an Invertible Neural Network (INN) for LBS prediction, simplifying the training process. By disentangling the motions of interacting entities and optimizing per-entity skinning weights, our method efficiently generates accurate, semantically separable geometries. Extensive experiments demonstrate that our approach produces high-quality reconstructions of both deformable and non-deformable objects in complex interactions, with improved training efficiency compared to existing methods. The code and models will be available on our GitHub page.
TFS-NeRF: Template-Free NeRF for Semantic 3D Reconstruction of Dynamic Scene
[ "Sandika Biswas", "Qianyi Wu", "Biplab Banerjee", "Hamid Rezatofighi" ]
NeurIPS.cc/2024/Conference
2409.17459
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UO7Mvch1Z5
@inproceedings{ wu2024uniqued, title={Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image}, author={Kailu Wu and Fangfu Liu and Zhihan Cai and Runjie Yan and Hanyang Wang and Yating Hu and Yueqi Duan and Kaisheng Ma}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UO7Mvch1Z5} }
In this work, we introduce Unique3D, a novel image-to-3D framework for efficiently generating high-quality 3D meshes from single-view images, featuring state-of-the-art generation fidelity and strong generalizability. Previous methods based on Score Distillation Sampling (SDS) can produce diversified 3D results by distilling 3D knowledge from large 2D diffusion models, but they usually suffer from long per-case optimization times and inconsistency issues. Recent works address the problem and generate better 3D results either by finetuning a multi-view diffusion model or training a fast feed-forward model. However, they still lack intricate textures and complex geometries due to inconsistency and limited generated resolution. To simultaneously achieve high fidelity, consistency, and efficiency in single image-to-3D, we propose a novel framework, Unique3D, that includes a multi-view diffusion model with a corresponding normal diffusion model to generate multi-view images with their normal maps, a multi-level upscale process to progressively improve the resolution of generated orthographic multi-views, as well as an instant and consistent mesh reconstruction algorithm called ISOMER, which fully integrates the color and geometric priors into mesh results. Extensive experiments demonstrate that our Unique3D significantly outperforms other image-to-3D baselines in terms of geometric and textural details.
Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image
[ "Kailu Wu", "Fangfu Liu", "Zhihan Cai", "Runjie Yan", "Hanyang Wang", "Yating Hu", "Yueqi Duan", "Kaisheng Ma" ]
NeurIPS.cc/2024/Conference
2405.20343
[ "https://github.com/AiuniAI/Unique3D" ]
https://huggingface.co/papers/2405.20343
1
2
0
8
[ "Luffuly/unique3d-mvimage-diffuser", "Luffuly/unique3d-normal-diffuser" ]
[]
[ "Wuvin/Unique3D", "neil-ni/Unique3D", "cavargas10/Unico3D", "abreza/Unique3D", "Gyufyjk/Unique3D", "CrazyEric/Unique3D", "meangkim/Unique3D", "tabulasd/Unique3D", "charbel-malo/3D-Genesis" ]
[ "Luffuly/unique3d-mvimage-diffuser", "Luffuly/unique3d-normal-diffuser" ]
[]
[ "Wuvin/Unique3D", "neil-ni/Unique3D", "cavargas10/Unico3D", "abreza/Unique3D", "Gyufyjk/Unique3D", "CrazyEric/Unique3D", "meangkim/Unique3D", "tabulasd/Unique3D", "charbel-malo/3D-Genesis" ]
1
poster
null
https://openreview.net/forum?id=UN7nXLeh9D
@inproceedings{ potfer2024improved, title={Improved learning rates in multi-unit uniform price auctions}, author={Marius Potfer and Dorian Baudry and Hugo Richard and Vianney Perchet and Cheng Wan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UN7nXLeh9D} }
Motivated by the strategic participation of electricity producers in the electricity day-ahead market, we study the problem of online learning in repeated multi-unit uniform price auctions, focusing on the adversarial opposing bid setting. The main contribution of this paper is the introduction of a new modeling of the bid space. Indeed, we prove that a learning algorithm leveraging the structure of this problem achieves a regret of $\tilde{O}(K^{4/3}T^{2/3})$ under bandit feedback, improving over the bound of $\tilde{O}(K^{7/4}T^{3/4})$ previously obtained in the literature. This improved regret rate is tight up to logarithmic terms. Inspired by electricity reserve markets, we further introduce a different feedback model under which all winning bids are revealed. This feedback interpolates between the full-information and bandit scenarios depending on the auctions' results. We prove that, under this feedback, the algorithm that we propose achieves regret $\tilde{O}(K^{5/2}\sqrt{T})$.
Improved learning rates in multi-unit uniform price auctions
[ "Marius Potfer", "Dorian Baudry", "Hugo Richard", "Vianney Perchet", "Cheng Wan" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UMPedMhKWm
@inproceedings{ wu2024rapid, title={Rapid Plug-in Defenders}, author={Kai Wu and Yujian Betterest Li and Jian Lou and Xiaoyu Zhang and Handing Wang and Jing Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UMPedMhKWm} }
In the realm of daily services, the deployment of deep neural networks underscores the paramount importance of their reliability. However, the vulnerability of these networks to adversarial attacks, primarily evasion-based, poses a concerning threat to their functionality. Common methods for enhancing robustness involve heavy adversarial training or leveraging learned knowledge from clean data, both necessitating substantial computational resources. This inherent time-intensive nature severely limits the agility of large foundational models to swiftly counter adversarial perturbations. To address this challenge, this paper focuses on the \textbf{Ra}pid \textbf{P}lug-\textbf{i}n \textbf{D}efender (\textbf{RaPiD}) problem, aiming to rapidly counter adversarial perturbations without altering the deployed model. Drawing inspiration from the generalization and the universal computation ability of pre-trained transformer models, we propose a novel method termed \textbf{CeTaD} (\textbf{C}onsidering Pr\textbf{e}-trained \textbf{T}ransformers \textbf{a}s \textbf{D}efenders) for RaPiD, optimized for efficient computation. \textbf{CeTaD} strategically fine-tunes the normalization layer parameters within the defender using a limited set of clean and adversarial examples. Our evaluation centers on assessing \textbf{CeTaD}'s effectiveness, transferability, and the impact of different components in scenarios involving one-shot adversarial examples. The proposed method is capable of rapidly adapting to various attacks and different application scenarios without altering the target model and clean training data. We also explore the influence of varying training data conditions on \textbf{CeTaD}'s performance. Notably, \textbf{CeTaD} exhibits adaptability across differentiable service models and proves the potential of continuous learning.
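A minimal sketch of the parameter-selection step described above (fine-tuning only the normalization-layer parameters of a pre-trained transformer used as the defender); the defender wiring around the target model is omitted, and the stand-in TransformerEncoder model is an assumption for illustration.

```python
# Hedged sketch: freeze the defender and leave only LayerNorm parameters trainable.
import torch.nn as nn

def enable_only_layernorm(defender: nn.Module):
    trainable = []
    for name, module in defender.named_modules():
        is_norm = isinstance(module, nn.LayerNorm)
        for pname, p in module.named_parameters(recurse=False):
            p.requires_grad = is_norm
            if is_norm:
                trainable.append(f"{name}.{pname}")
    return trainable

# Toy usage with a small TransformerEncoder as a stand-in defender.
defender = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True), num_layers=2)
print(len(enable_only_layernorm(defender)), "normalization parameters left trainable")
```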
Rapid Plug-in Defenders
[ "Kai Wu", "Yujian Betterest Li", "Jian Lou", "Xiaoyu Zhang", "Handing Wang", "Jing Liu" ]
NeurIPS.cc/2024/Conference
2306.01762
[ "" ]
https://huggingface.co/papers/2306.01762
1
0
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=UJ9k3j93MD
@inproceedings{ wu2024separation, title={Separation and Bias of Deep Equilibrium Models on Expressivity and Learning Dynamics}, author={Zhoutong Wu and Yimu Zhang and Cong Fang and Zhouchen Lin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UJ9k3j93MD} }
The deep equilibrium model (DEQ) generalizes the conventional feedforward neural network by fixing the same weights for each layer block and extending the number of layers to infinity. This novel model directly finds the fixed points of such a forward process as features for prediction. Despite empirical evidence showcasing its efficacy compared to feedforward neural networks, a theoretical understanding of its separation and bias is still limited. In this paper, we take a step by proposing some separations and studying the bias of DEQ in its expressive power and learning dynamics. The results include: (1) A general separation is proposed, showing the existence of a width-$m$ DEQ that any fully connected neural networks (FNNs) with depth $O(m^{\alpha})$ for $\alpha \in (0,1)$ cannot approximate unless its width is sub-exponential in $m$; (2) DEQ with polynomially bounded size and magnitude can efficiently approximate certain steep functions (which have very large derivatives) in $L^{\infty}$ norm, whereas FNN with bounded depth and exponentially bounded width cannot unless its weight magnitudes are exponentially large; (3) The implicit regularization caused by gradient flow from a diagonal linear DEQ is characterized, with specific examples showing the benefits brought by such regularization. From the overall study, a high-level conjecture from our analysis and empirical validations is that DEQ has potential advantages in learning certain high-frequency components.
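For readers new to DEQs, a minimal sketch of the forward pass as a naive fixed-point iteration for z* = f(z*, x); practical DEQs use root solvers (e.g., Anderson or Broyden) and implicit differentiation for the backward pass, and the tiny layer below is an illustrative assumption.

```python
# Hedged sketch: DEQ-style forward pass via naive fixed-point iteration.
import torch

class TinyDEQ(torch.nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.W = torch.nn.Linear(dim, dim)
        self.U = torch.nn.Linear(dim, dim)

    def f(self, z, x):
        return torch.tanh(self.W(z) + self.U(x))   # shared layer block

    def forward(self, x, iters=50, tol=1e-4):
        z = torch.zeros_like(x)
        for _ in range(iters):                      # iterate until (approximate) fixed point
            z_next = self.f(z, x)
            if (z_next - z).norm() < tol:
                break
            z = z_next
        return z

z_star = TinyDEQ()(torch.randn(4, 16))
```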
Separation and Bias of Deep Equilibrium Models on Expressivity and Learning Dynamics
[ "Zhoutong Wu", "Yimu Zhang", "Cong Fang", "Zhouchen Lin" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UIOjGTKHQG
@inproceedings{ jiang2024dllm, title={D-{LLM}: A Token Adaptive Computing Resource Allocation Strategy for Large Language Models}, author={yikun jiang and Huanyu Wang and Lei Xie and Hanbin Zhao and Chao Zhang and Hui Qian and John C.S. Lui}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UIOjGTKHQG} }
Large language models have shown an impressive societal impact owing to their excellent understanding and logical reasoning skills. However, such strong ability relies on a huge amount of computing resources, which makes it difficult to deploy LLMs on computing resource-constrained platforms. Currently, LLMs process each token equivalently, but we argue that not every word is equally important. Some words should not be allocated excessive computing resources, particularly for dispensable terms in simple questions. In this paper, we propose a novel dynamic inference paradigm for LLMs, namely D-LLMs, which adaptively allocate computing resources in token processing. We design a dynamic decision module for each transformer layer that decides whether a network unit should be executed or skipped. Moreover, we tackle the issue of adapting D-LLMs to real-world applications, specifically concerning the missing KV-cache when layers are skipped. To overcome this, we propose a simple yet effective eviction policy to exclude the skipped layers from subsequent attention calculations. The eviction policy not only enables D-LLMs to be compatible with prevalent applications but also saves considerable storage resources. Experimentally, D-LLMs show superior performance in terms of computational cost and KV storage utilization. They can reduce computational cost and KV storage by up to 45\% on Q\&A, summarization, and math solving tasks, and by 50\% on commonsense reasoning tasks.
D-LLM: A Token Adaptive Computing Resource Allocation Strategy for Large Language Models
[ "yikun jiang", "Huanyu Wang", "Lei Xie", "Hanbin Zhao", "Chao Zhang", "Hui Qian", "John C.S. Lui" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UHDCbIrCFL
@inproceedings{ liu2024exocentrictoegocentric, title={Exocentric-to-Egocentric Video Generation}, author={Jia-Wei Liu and Weijia Mao and Zhongcong Xu and Jussi Keppo and Mike Zheng Shou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UHDCbIrCFL} }
We introduce Exo2Ego-V, a novel exocentric-to-egocentric diffusion-based video generation method for daily-life skilled human activities, where sparse 4-view exocentric viewpoints are configured 360° around the scene. This task is particularly challenging due to the significant variations between exocentric and egocentric viewpoints and the high complexity of dynamic motions and real-world daily-life environments. To address these challenges, we first propose a new diffusion-based multi-view exocentric encoder to extract dense multi-scale features from multi-view exocentric videos as the appearance conditions for egocentric video generation. Then, we design an exocentric-to-egocentric view translation prior to provide spatially aligned egocentric features as concatenation guidance for the input of the egocentric video diffusion model. Finally, we introduce temporal attention layers into our egocentric video diffusion pipeline to improve the temporal consistency across egocentric frames. Extensive experiments demonstrate that Exo2Ego-V significantly outperforms SOTA approaches on 5 categories from the Ego-Exo4D dataset with an average improvement of 35% in terms of LPIPS. Our code and model will be made available on https://github.com/showlab/Exo2Ego-V.
Exocentric-to-Egocentric Video Generation
[ "Jia-Wei Liu", "Weijia Mao", "Zhongcong Xu", "Jussi Keppo", "Mike Zheng Shou" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UGlDVc0GTU
@inproceedings{ kim2024llmbased, title={{LLM}-based Skill Diffusion for Zero-shot Policy Adaptation}, author={Woo Kyung Kim and Youngseok Lee and Jooyoung Kim and Honguk Woo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UGlDVc0GTU} }
Recent advances in data-driven imitation learning and offline reinforcement learning have highlighted the use of expert data for skill acquisition and the development of hierarchical policies based on these skills. However, these approaches have not significantly advanced in adapting these skills to unseen contexts, which may involve changing environmental conditions or different user requirements. In this paper, we present a novel LLM-based policy adaptation framework, LDuS, which leverages an LLM to guide the generation process of a skill diffusion model upon contexts specified in language, facilitating zero-shot skill-based policy adaptation to different contexts. To implement the skill diffusion model, we adapt loss-guided diffusion with a sequential in-painting technique, where target trajectories are conditioned by masking them with past state-action sequences, thereby enabling the robust and controlled generation of skill trajectories at test time. To obtain a loss function for a given context, we employ LLM-based code generation with iterative refinement, by which the code and the controlled trajectory are validated to align with the context in a closed-loop manner. Through experiments, we demonstrate the zero-shot adaptability of LDuS to various context types including different specification levels, multi-modality, and varied temporal conditions for several robotic manipulation tasks, outperforming other language-conditioned imitation and planning methods.
LLM-based Skill Diffusion for Zero-shot Policy Adaptation
[ "Woo Kyung Kim", "Youngseok Lee", "Jooyoung Kim", "Honguk Woo" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UGUkPYSdg4
@inproceedings{ haoweiz2024distributionaware, title={Distribution-Aware Data Expansion with Diffusion Models}, author={haoweiz and Ling Yang and Jun-Hai Yong and Hongzhi Yin and Jiawei Jiang and Meng Xiao and Wentao Zhang and Bin Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UGUkPYSdg4} }
The scale and quality of a dataset significantly impact the performance of deep models. However, acquiring large-scale annotated datasets is both a costly and time-consuming endeavor. To address this challenge, dataset expansion technologies aim to automatically augment datasets, unlocking the full potential of deep models. Current data expansion techniques include image transformation and image synthesis methods. Transformation-based methods introduce only local variations, leading to limited diversity. In contrast, synthesis-based methods generate entirely new content, greatly enhancing informativeness. However, existing synthesis methods carry the risk of distribution deviations, potentially degrading model performance with out-of-distribution samples. In this paper, we propose DistDiff, a training-free data expansion framework based on the distribution-aware diffusion model. DistDiff constructs hierarchical prototypes to approximate the real data distribution, optimizing latent data points within diffusion models with hierarchical energy guidance. We demonstrate its capability to generate distribution-consistent samples, significantly improving data expansion tasks. DistDiff consistently enhances accuracy across a diverse range of datasets compared to models trained solely on original data. Furthermore, our approach consistently outperforms existing synthesis-based techniques and demonstrates compatibility with widely adopted transformation-based augmentation methods. Additionally, the expanded dataset exhibits robustness across various architectural frameworks.
Distribution-Aware Data Expansion with Diffusion Models
[ "haoweiz", "Ling Yang", "Jun-Hai Yong", "Hongzhi Yin", "Jiawei Jiang", "Meng Xiao", "Wentao Zhang", "Bin Wang" ]
NeurIPS.cc/2024/Conference
2403.06741
[ "https://github.com/haoweiz23/distdiff" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UFRZHFYW8e
@inproceedings{ varma2024ravl, title={Ra{VL}: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models}, author={Maya Varma and Jean-Benoit Delbrouck and Zhihong Chen and Akshay S Chaudhari and Curtis Langlotz}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UFRZHFYW8e} }
Fine-tuned vision-language models (VLMs) often capture spurious correlations between image features and textual attributes, resulting in degraded zero-shot performance at test time. Existing approaches for addressing spurious correlations (i) primarily operate at the global image-level rather than intervening directly on fine-grained image features and (ii) are predominantly designed for unimodal settings. In this work, we present RaVL, which takes a fine-grained perspective on VLM robustness by discovering and mitigating spurious correlations using local image features rather than operating at the global image level. Given a fine-tuned VLM, RaVL first discovers spurious correlations by leveraging a region-level clustering approach to identify precise image features contributing to zero-shot classification errors. Then, RaVL mitigates the identified spurious correlation with a novel region-aware loss function that enables the VLM to focus on relevant regions and ignore spurious relationships during fine-tuning. We evaluate RaVL on 654 VLMs with various model architectures, data domains, and learned spurious correlations. Our results show that RaVL accurately discovers (191% improvement over the closest baseline) and mitigates (8.2% improvement on worst-group image classification accuracy) spurious correlations. Qualitative evaluations on general-domain and medical-domain VLMs confirm our findings.
RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models
[ "Maya Varma", "Jean-Benoit Delbrouck", "Zhihong Chen", "Akshay S Chaudhari", "Curtis Langlotz" ]
NeurIPS.cc/2024/Conference
2411.04097
[ "https://github.com/stanford-aimi/ravl" ]
https://huggingface.co/papers/2411.04097
4
5
2
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=UE6CeRMnq3
@inproceedings{ yang2024frequencyaware, title={Frequency-aware Generative Models for Multivariate Time Series Imputation}, author={Xinyu Yang and Yu Sun and Xiaojie Yuan and Xinyang Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UE6CeRMnq3} }
Missing data in multivariate time series are a common issue that can affect analysis and downstream applications. Although multivariate time series data generally consist of trend, seasonal, and residual terms, existing works mainly focus on optimizing the modeling of the first two. However, we find that the residual term is more crucial for obtaining accurate fillings, since it is more related to the diverse changes in the data and is the biggest component of imputation errors. Therefore, in this study, we introduce frequency-domain information and design Frequency-aware Generative Models for Multivariate Time Series Imputation (FGTI). Specifically, FGTI employs a high-frequency filter to boost the residual term imputation, supplemented by a dominant-frequency filter for the trend and seasonal imputation. A cross-domain representation learning module then fuses frequency-domain insights with deep representations. Experiments over various datasets with real-world missing values show that FGTI achieves superiority in both data imputation and downstream applications.
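For intuition about the high-frequency filter mentioned above, a minimal sketch on a univariate series; the FFT-based filter and the cutoff ratio are assumptions for illustration, while FGTI's filters operate inside a generative imputation model.

```python
# Hedged sketch: isolate the residual-like high-frequency content of a series.
import numpy as np

def high_frequency_component(x, keep_ratio=0.2):
    """Zero out the lowest-frequency coefficients and invert the FFT."""
    X = np.fft.rfft(x)
    cutoff = int(len(X) * (1.0 - keep_ratio))
    X[:cutoff] = 0.0                         # drop trend/seasonal (low-frequency) content
    return np.fft.irfft(X, n=len(x))

t = np.linspace(0, 10, 500)
series = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
residual_like = high_frequency_component(series)
```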
Frequency-aware Generative Models for Multivariate Time Series Imputation
[ "Xinyu Yang", "Yu Sun", "Xiaojie Yuan", "Xinyang Chen" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UDi51I8K1p
@inproceedings{ cubillos2024exploring, title={Exploring the trade-off between deep-learning and explainable models for brain-machine interfaces}, author={Luis Hernan Cubillos and Guy Revach and Matthew Mender and Joseph T Costello and Hisham Temmar and Aren Hite and Diksha Anoop Kumar Zutshi and Dylan Michael Wallace and Xiaoyong Ni and Madison M. Kelberman and Matt Willsey and Ruud Van Sloun and Nir Shlezinger and Parag Ganapati Patil and Anne Draelos and Cynthia Chestek}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UDi51I8K1p} }
People with brain or spinal cord-related paralysis often need to rely on others for basic tasks, limiting their independence. A potential solution is brain-machine interfaces (BMIs), which could allow them to voluntarily control external devices (e.g., robotic arm) by decoding brain activity to movement commands. In the past decade, deep-learning decoders have achieved state-of-the-art results in most BMI applications, ranging from speech production to finger control. However, the 'black-box' nature of deep-learning decoders could lead to unexpected behaviors, resulting in major safety concerns in real-world physical control scenarios. In these applications, explainable but lower-performing decoders, such as the Kalman filter (KF), remain the norm. In this study, we designed a BMI decoder based on KalmanNet, an extension of the KF that augments its operation with recurrent neural networks to compute the Kalman gain. This results in a varying “trust” that shifts between inputs and dynamics. We used this algorithm to predict finger movements from the brain activity of two monkeys. We compared KalmanNet results offline (pre-recorded data, $n=13$ days) and online (real-time predictions, $n=5$ days) with a simple KF and two recent deep-learning algorithms: tcFNN (non-ReFIT version) and LSTM. KalmanNet achieved comparable or better results than other deep learning models in offline and online modes, relying on the dynamical model for stopping while depending more on neural inputs for initiating movements. We further validated this mechanism by implementing a heteroscedastic KF that used the same strategy, and it also approached state-of-the-art performance while remaining in the explainable domain of standard KFs. However, we also see two downsides to KalmanNet. KalmanNet shares the limited generalization ability of existing deep-learning decoders, and its usage of the KF as an inductive bias limits its performance in the presence of unseen noise distributions. Despite this trade-off, our analysis successfully integrates traditional controls and modern deep-learning approaches to motivate high-performing yet still explainable BMI designs.
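For reference, the standard linear Kalman filter predict/update step (the explainable baseline in this comparison) is sketched below; KalmanNet replaces the gain computation with a recurrent network, which is not shown, and the toy matrices are assumptions.

```python
# Hedged sketch: one Kalman filter step for x' = Ax + w, y = Hx + v.
import numpy as np

def kf_step(x, P, y, A, H, Q, R):
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain (computed by an RNN in KalmanNet)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
A, H = np.array([[1.0, 1.0], [0.0, 1.0]]), np.array([[1.0, 0.0]])
x, P = kf_step(x, P, y=np.array([0.5]), A=A, H=H, Q=0.01 * np.eye(2), R=np.array([[0.1]]))
```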
Exploring the trade-off between deep-learning and explainable models for brain-machine interfaces
[ "Luis Hernan Cubillos", "Guy Revach", "Matthew Mender", "Joseph T Costello", "Hisham Temmar", "Aren Hite", "Diksha Anoop Kumar Zutshi", "Dylan Michael Wallace", "Xiaoyong Ni", "Madison M. Kelberman", "Matt Willsey", "Ruud Van Sloun", "Nir Shlezinger", "Parag Ganapati Patil", "Anne Draelos", "Cynthia Chestek" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UCSt4gk6iX
@inproceedings{ kheradmand2024d, title={3D Gaussian Splatting as Markov Chain Monte Carlo}, author={Shakiba Kheradmand and Daniel Rebain and Gopal Sharma and Weiwei Sun and Yang-Che Tseng and Hossam Isack and Abhishek Kar and Andrea Tagliasacchi and Kwang Moo Yi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UCSt4gk6iX} }
While 3D Gaussian Splatting has recently become popular for neural rendering, current methods rely on carefully engineered cloning and splitting strategies for placing Gaussians, which do not always generalize and may lead to poor-quality renderings. For many real-world scenes this leads to their heavy dependence on good initializations. In this work, we rethink the set of 3D Gaussians as a random sample drawn from an underlying probability distribution describing the physical representation of the scene, in other words, Markov Chain Monte Carlo (MCMC) samples. Under this view, we show that the 3D Gaussian updates can be converted into a Stochastic Gradient Langevin Dynamics (SGLD) update simply by introducing noise. We then rewrite the densification and pruning strategies in 3D Gaussian Splatting as simply a deterministic state transition of MCMC samples, removing these heuristics from the framework. To do so, we revise the ‘cloning’ of Gaussians into a relocalization scheme that approximately preserves sample probability. To encourage efficient use of Gaussians, we introduce an L1-regularizer on the Gaussians. On various standard evaluation scenes, we show that our method provides improved rendering quality, easy control over the number of Gaussians, and robustness to initialization. The project website is available at https://3dgs-mcmc.github.io/.
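As a generic illustration of the update the abstract refers to (not the paper's training loop), an SGLD step is an ordinary gradient step plus Gaussian noise whose variance is tied to the step size; the learning rate and toy objective below are placeholders.

```python
import numpy as np

def sgld_step(theta, grad, lr, rng):
    """Stochastic Gradient Langevin Dynamics:
    theta_{t+1} = theta_t - lr * grad U(theta_t) + N(0, 2 * lr * I)."""
    noise = rng.normal(scale=np.sqrt(2.0 * lr), size=theta.shape)
    return theta - lr * grad + noise

rng = np.random.default_rng(0)
theta = np.zeros(3)
for _ in range(100):
    grad = theta  # gradient of the toy potential U(theta) = ||theta||^2 / 2
    theta = sgld_step(theta, grad, lr=1e-2, rng=rng)
```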
3D Gaussian Splatting as Markov Chain Monte Carlo
[ "Shakiba Kheradmand", "Daniel Rebain", "Gopal Sharma", "Weiwei Sun", "Yang-Che Tseng", "Hossam Isack", "Abhishek Kar", "Andrea Tagliasacchi", "Kwang Moo Yi" ]
NeurIPS.cc/2024/Conference
2404.09591
[ "" ]
https://huggingface.co/papers/2404.09591
0
0
0
9
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=UBpPOqrBKE
@inproceedings{ yang2024federated, title={Federated Graph Learning for Cross-Domain Recommendation}, author={Ziqi Yang and Zhaopeng Peng and Zihui Wang and Jianzhong Qi and Chaochao Chen and Weike Pan and Chenglu Wen and Cheng Wang and Xiaoliang Fan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UBpPOqrBKE} }
Cross-domain recommendation (CDR) offers a promising solution to the data sparsity problem by enabling knowledge transfer across source and target domains. However, many recent CDR models overlook crucial issues such as privacy as well as the risk of negative transfer (which negatively impact model performance), especially in multi-domain settings. To address these challenges, we propose FedGCDR, a novel federated graph learning framework that securely and effectively leverages positive knowledge from multiple source domains. First, we design a positive knowledge transfer module that ensures privacy during inter-domain knowledge transmission. This module employs differential privacy-based knowledge extraction combined with a feature mapping mechanism, transforming source domain embeddings from federated graph attention networks into reliable domain knowledge. Second, we design a knowledge activation module to filter out potential harmful or conflicting knowledge from source domains, addressing the issues of negative transfer. This module enhances target domain training by expanding the graph of the target domain to generate reliable domain attentions and fine-tunes the target model for improved negative knowledge filtering and more accurate predictions. We conduct extensive experiments on 16 popular domains of the Amazon dataset, demonstrating that FedGCDR significantly outperforms state-of-the-art methods.
Federated Graph Learning for Cross-Domain Recommendation
[ "Ziqi Yang", "Zhaopeng Peng", "Zihui Wang", "Jianzhong Qi", "Chaochao Chen", "Weike Pan", "Chenglu Wen", "Cheng Wang", "Xiaoliang Fan" ]
NeurIPS.cc/2024/Conference
2410.08249
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UARTFgkTqW
@inproceedings{ zhang2024magr, title={MagR: Weight Magnitude Reduction for Enhancing Post-Training Quantization}, author={Aozhong Zhang and Naigang Wang and Yanxia Deng and Xin Li and Zi Yang and Penghang Yin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=UARTFgkTqW} }
In this paper, we present a simple optimization-based preprocessing technique called Weight Magnitude Reduction (MagR) to improve the performance of post-training quantization. For each linear layer, we adjust the pre-trained floating-point weights by solving an $\ell_\infty$-regularized optimization problem. This process greatly diminishes the maximum magnitude of the weights and smooths out outliers, while preserving the layer's output. The preprocessed weights are centered more towards zero, which facilitates the subsequent quantization process. To implement MagR, we address the $\ell_\infty$-regularization by employing an efficient proximal gradient descent algorithm. Unlike existing preprocessing methods that involve linear transformations and subsequent post-processing steps, which can introduce significant overhead at inference time, MagR functions as a non-linear transformation, eliminating the need for any additional post-processing. This ensures that MagR introduces no overhead whatsoever during inference. Our experiments demonstrate that MagR achieves state-of-the-art performance on the Llama family of models. For example, we achieve a Wikitext2 perplexity of 6.7 on the LLaMA2-70B model for per-channel INT2 weight quantization without incurring any inference overhead.
MagR: Weight Magnitude Reduction for Enhancing Post-Training Quantization
[ "Aozhong Zhang", "Naigang Wang", "Yanxia Deng", "Xin Li", "Zi Yang", "Penghang Yin" ]
NeurIPS.cc/2024/Conference
2406.00800
[ "https://github.com/aozhongzhang/magr" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=U9e1d2xOc8
@inproceedings{ meunier2024optimal, title={Optimal Rates for Vector-Valued Spectral Regularization Learning Algorithms}, author={Dimitri Meunier and Zikai Shen and Mattes Mollenhauer and Arthur Gretton and Zhu Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=U9e1d2xOc8} }
We study theoretical properties of a broad class of regularized algorithms with vector-valued output. These spectral algorithms include kernel ridge regression, kernel principal component regression and various implementations of gradient descent. Our contributions are twofold. First, we rigorously confirm the so-called saturation effect for ridge regression with vector-valued output by deriving a novel lower bound on learning rates; this bound is shown to be suboptimal when the smoothness of the regression function exceeds a certain level. Second, we present an upper bound on the finite sample risk for general vector-valued spectral algorithms, applicable to both well-specified and misspecified scenarios (where the true regression function lies outside of the hypothesis space), and show that this bound is minimax optimal in various regimes. All of our results explicitly allow the case of infinite-dimensional output variables, proving consistency of recent practical applications.
Optimal Rates for Vector-Valued Spectral Regularization Learning Algorithms
[ "Dimitri Meunier", "Zikai Shen", "Mattes Mollenhauer", "Arthur Gretton", "Zhu Li" ]
NeurIPS.cc/2024/Conference
2405.14778
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=U9MzoDOKZu
@inproceedings{ wang2024metadt, title={Meta-{DT}: Offline Meta-{RL} as Conditional Sequence Modeling with World Model Disentanglement}, author={Zhi Wang and Li Zhang and Wenhao Wu and Yuanheng Zhu and Dongbin Zhao and Chunlin Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=U9MzoDOKZu} }
A longstanding goal of artificial general intelligence is highly capable generalists that can learn from diverse experiences and generalize to unseen tasks. The language and vision communities have seen remarkable progress toward this trend by scaling up transformer-based models trained on massive datasets, while reinforcement learning (RL) agents still suffer from poor generalization capacity under such paradigms. To tackle this challenge, we propose Meta Decision Transformer (Meta-DT), which leverages the sequential modeling ability of the transformer architecture and robust task representation learning via world model disentanglement to achieve efficient generalization in offline meta-RL. We pretrain a context-aware world model to learn a compact task representation, and inject it as a contextual condition to the causal transformer to guide task-oriented sequence generation. Then, we subtly utilize history trajectories generated by the meta-policy as a self-guided prompt to exploit the architectural inductive bias. We select the trajectory segment that yields the largest prediction error on the pretrained world model to construct the prompt, aiming to encode task-specific information complementary to the world model maximally. Notably, the proposed framework eliminates the requirement of any expert demonstration or domain knowledge at test time. Experimental results on MuJoCo and Meta-World benchmarks across various dataset types show that Meta-DT exhibits superior few and zero-shot generalization capacity compared to strong baselines while being more practical with fewer prerequisites. Our code is available at https://github.com/NJU-RL/Meta-DT.
Meta-DT: Offline Meta-RL as Conditional Sequence Modeling with World Model Disentanglement
[ "Zhi Wang", "Li Zhang", "Wenhao Wu", "Yuanheng Zhu", "Dongbin Zhao", "Chunlin Chen" ]
NeurIPS.cc/2024/Conference
2410.11448
[ "https://github.com/nju-rl/meta-dt" ]
https://huggingface.co/papers/2410.11448
0
0
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=U6oQEzSp8z
@inproceedings{ malard2024an, title={An eye for an ear: zero-shot audio description leveraging an image captioner with audio-visual token distribution matching}, author={Hugo Malard and Michel Olvera and St{\'e}phane Lathuili{\`e}re and Slim Essid}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=U6oQEzSp8z} }
Multimodal large language models have fueled progress in image captioning. These models, fine-tuned on vast image datasets, exhibit a deep understanding of semantic concepts. In this work, we show that this ability can be re-purposed for audio captioning, where the joint image-language decoder can be leveraged to describe auditory content associated with image sequences within videos featuring audiovisual content. This can be achieved via multimodal alignment. Yet, this multimodal alignment task is non-trivial due to the inherent disparity between audible and visible elements in real-world videos. Moreover, multimodal representation learning often relies on contrastive learning, facing the challenge of the so-called modality gap which hinders smooth integration between modalities. In this work, we introduce a novel methodology for bridging the audiovisual modality gap by matching the distributions of tokens produced by an audio backbone and those of an image captioner. Our approach aligns the audio token distribution with that of the image tokens, enabling the model to perform zero-shot audio captioning in an unsupervised fashion. This alignment allows for the use of either audio or audiovisual input by combining or substituting the image encoder with the aligned audio encoder. Our method achieves significantly improved performances in zero-shot audio captioning, compared to existing approaches.
An eye for an ear: zero-shot audio description leveraging an image captioner with audio-visual token distribution matching
[ "Hugo Malard", "Michel Olvera", "Stéphane Lathuilière", "Slim Essid" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=U4WeoyRHPd
@inproceedings{ pan2024mambasci, title={Mamba{SCI}: Efficient Mamba-{UN}et for Quad-Bayer Patterned Video Snapshot Compressive Imaging}, author={Zhenghao Pan and Haijin Zeng and Jiezhang Cao and Yongyong Chen and Kai Zhang and Yong Xu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=U4WeoyRHPd} }
Color video snapshot compressive imaging (SCI) employs computational imaging techniques to capture multiple sequential video frames in a single Bayer-patterned measurement. With the increasing popularity of quad-Bayer pattern in mainstream smartphone cameras for capturing high-resolution videos, mobile photography has become more accessible to a wider audience. However, existing color video SCI reconstruction algorithms are designed based on the traditional Bayer pattern. When applied to videos captured by quad-Bayer cameras, these algorithms often result in color distortion and ineffective demosaicing, rendering them impractical for primary equipment. To address this challenge, we propose the MambaSCI method, which leverages the Mamba and UNet architectures for efficient reconstruction of quad-Bayer patterned color video SCI. To the best of our knowledge, our work presents the first algorithm for quad-Bayer patterned SCI reconstruction, and also the initial application of the Mamba model to this task. Specifically, we customize Residual-Mamba-Blocks, which residually connect the Spatial-Temporal Mamba (STMamba), Edge-Detail-Reconstruction (EDR) module, and Channel Attention (CA) module. Respectively, STMamba is used to model long-range spatial-temporal dependencies with linear complexity, EDR is for better edge-detail reconstruction, and CA is used to compensate for the missing channel information interaction in Mamba model. Experiments demonstrate that MambaSCI surpasses state-of-the-art methods with lower computational and memory costs. PyTorch style pseudo-code for the core modules is provided in the supplementary materials. Code is at https://github.com/PAN083/MambaSCI.
MambaSCI: Efficient Mamba-UNet for Quad-Bayer Patterned Video Snapshot Compressive Imaging
[ "Zhenghao Pan", "Haijin Zeng", "Jiezhang Cao", "Yongyong Chen", "Kai Zhang", "Yong Xu" ]
NeurIPS.cc/2024/Conference
2410.14214
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=U4KldRgoph
@inproceedings{ luo2024enhancing, title={Enhancing Graph Transformers with Hierarchical Distance Structural Encoding}, author={Yuankai Luo and Hongkang Li and Lei Shi and Xiao-Ming Wu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=U4KldRgoph} }
Graph transformers need strong inductive biases to derive meaningful attention scores. Yet, current methods often fall short in capturing longer ranges, hierarchical structures, or community structures, which are common in various graphs such as molecules, social networks, and citation networks. This paper presents a Hierarchical Distance Structural Encoding (HDSE) method to model node distances in a graph, focusing on its multi-level, hierarchical nature. We introduce a novel framework to seamlessly integrate HDSE into the attention mechanism of existing graph transformers, allowing for simultaneous application with other positional encodings. To apply graph transformers with HDSE to large-scale graphs, we further propose a high-level HDSE that effectively biases the linear transformers towards graph hierarchies. We theoretically prove the superiority of HDSE in terms of expressivity and generalization. Empirically, we demonstrate that graph transformers with HDSE excel in graph classification, regression on 7 graph-level datasets, and node classification on 11 large-scale graphs.
Enhancing Graph Transformers with Hierarchical Distance Structural Encoding
[ "Yuankai Luo", "Hongkang Li", "Lei Shi", "Xiao-Ming Wu" ]
NeurIPS.cc/2024/Conference
2308.11129
[ "https://github.com/luoyk1999/hdse" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=U4BC0GrFAz
@inproceedings{ nastl2024do, title={Do causal predictors generalize better to new domains?}, author={Vivian Yvonne Nastl and Moritz Hardt}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=U4BC0GrFAz} }
We study how well machine learning models trained on causal features generalize across domains. We consider 16 prediction tasks on tabular datasets covering applications in health, employment, education, social benefits, and politics. Each dataset comes with multiple domains, allowing us to test how well a model trained in one domain performs in another. For each prediction task, we select features that have a causal influence on the target of prediction. Our goal is to test the hypothesis that models trained on causal features generalize better across domains. Without exception, we find that predictors using all available features, regardless of causality, have better in-domain and out-of-domain accuracy than predictors using causal features. Moreover, even the absolute drop in accuracy from one domain to the other is no better for causal predictors than for models that use all features. In addition, we show that recent causal machine learning methods for domain generalization do not perform better in our evaluation than standard predictors trained on the set of causal features. Likewise, causal discovery algorithms either fail to run or select causal variables that perform no better than our selection. Extensive robustness checks confirm that our findings are stable under variable misclassification.
Do causal predictors generalize better to new domains?
[ "Vivian Yvonne Nastl", "Moritz Hardt" ]
NeurIPS.cc/2024/Conference
2402.09891
[ "https://github.com/socialfoundations/causal-features" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=U3hQoqgQDJ
@inproceedings{ zou2024interfacing, title={Interfacing Foundation Models' Embeddings}, author={Xueyan Zou and Linjie Li and Jianfeng Wang and Jianwei Yang and Mingyu Ding and Junyi Wei and Zhengyuan Yang and Feng Li and Hao Zhang and Shilong Liu and Arul Aravinthan and Yong Jae Lee and Lijuan Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=U3hQoqgQDJ} }
Foundation models possess strong capabilities in reasoning and memorizing across modalities. To further unleash the power of foundation models, we present FIND, a generalized interface for aligning foundation models' embeddings with unified image and dataset-level understanding spanning modality and granularity. As shown in Fig.1, a lightweight transformer interface without tuning any foundation model weights is enough for segmentation, grounding, and retrieval in an interleaved manner. The proposed interface has the following favorable attributes: (1) Generalizable. It applies to various tasks spanning retrieval, segmentation, etc., under the same architecture and weights. (2) Interleavable. With the benefit of multi-task multi-modal training, the proposed interface creates an interleaved shared embedding space. (3) Extendable. The proposed interface is adaptive to new tasks and new models. In light of the interleaved embedding space, we introduce FIND-Bench, which introduces new training and evaluation annotations to the COCO dataset for interleaved segmentation and retrieval. We are the first work to align foundation models' embeddings for interleaved understanding. Meanwhile, our approach achieves state-of-the-art performance on FIND-Bench and competitive performance on standard retrieval and segmentation settings.
Interfacing Foundation Models' Embeddings
[ "Xueyan Zou", "Linjie Li", "Jianfeng Wang", "Jianwei Yang", "Mingyu Ding", "Junyi Wei", "Zhengyuan Yang", "Feng Li", "Hao Zhang", "Shilong Liu", "Arul Aravinthan", "Yong Jae Lee", "Lijuan Wang" ]
NeurIPS.cc/2024/Conference
2312.07532
[ "https://github.com/ux-decoder/find" ]
https://huggingface.co/papers/2312.07532
9
10
0
12
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=U3Rgdb4li9
@inproceedings{ ailer2024targeted, title={Targeted Sequential Indirect Experiment Design}, author={Elisabeth Ailer and Niclas Dern and Jason Hartford and Niki Kilbertus}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=U3Rgdb4li9} }
Scientific hypotheses typically concern specific aspects of complex, imperfectly understood or entirely unknown mechanisms, such as the effect of gene expression levels on phenotypes or how microbial communities influence environmental health. Such queries are inherently causal (rather than purely associational), but in many settings, experiments cannot be conducted directly on the target variables of interest and are instead indirect: they perturb the target variable but do not remove potential confounding factors. If, additionally, the resulting experimental measurements are high-dimensional and the studied mechanisms nonlinear, the query of interest is generally not identified. We develop an adaptive strategy to design indirect experiments that optimally inform a targeted query about the ground truth mechanism by sequentially narrowing the gap between an upper and lower bound on the query. While the general formulation consists of a bi-level optimization procedure, we derive an efficiently estimable analytical kernel-based estimator of the bounds for the causal effect, a query of key interest, and demonstrate the efficacy of our approach in confounded, multivariate, nonlinear synthetic settings.
Targeted Sequential Indirect Experiment Design
[ "Elisabeth Ailer", "Niclas Dern", "Jason Hartford", "Niki Kilbertus" ]
NeurIPS.cc/2024/Conference
2405.19985
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=U2Mx0hSRwA
@inproceedings{ shi2024ordered, title={Ordered Momentum for Asynchronous {SGD}}, author={Chang-Wei Shi and Yi-Rui Yang and Wu-Jun Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=U2Mx0hSRwA} }
Distributed learning is essential for training large-scale deep models. Asynchronous SGD (ASGD) and its variants are commonly used distributed learning methods, particularly in scenarios where the computing capabilities of workers in the cluster are heterogeneous. Momentum has been acknowledged for its benefits in both optimization and generalization in deep model training. However, existing works have found that naively incorporating momentum into ASGD can impede the convergence. In this paper, we propose a novel method called ordered momentum (OrMo) for ASGD. In OrMo, momentum is incorporated into ASGD by organizing the gradients in order based on their iteration indexes. We theoretically prove the convergence of OrMo with both constant and delay-adaptive learning rates for non-convex problems. To the best of our knowledge, this is the first work to establish the convergence analysis of ASGD with momentum without dependence on the maximum delay. Empirical results demonstrate that OrMo can achieve better convergence performance compared with ASGD and other asynchronous methods with momentum.
Ordered Momentum for Asynchronous SGD
[ "Chang-Wei Shi", "Yi-Rui Yang", "Wu-Jun Li" ]
NeurIPS.cc/2024/Conference
2407.19234
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TzzZ5KAEE2
@inproceedings{ chahine2024neural, title={Neural Cover Selection for Image Steganography}, author={Karl Chahine and Hyeji Kim}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TzzZ5KAEE2} }
In steganography, selecting an optimal cover image—referred to as cover selection—is pivotal for effective message concealment. Traditional methods have typically employed exhaustive searches to identify images that conform to specific perceptual or complexity metrics. However, the relationship between these metrics and the actual message hiding efficacy of an image is unclear, often yielding less-than-ideal steganographic outcomes. Inspired by recent advancements in generative models, we introduce a novel cover selection framework, which involves optimizing within the latent space of pretrained generative models to identify the most suitable cover images, distinguishing itself from traditional exhaustive search methods. Our method shows significant advantages in message recovery and image quality. We also conduct an information-theoretic analysis of the generated cover images, revealing that message hiding predominantly occurs in low-variance pixels, reflecting the waterfilling algorithm's principles in parallel Gaussian channels.
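The waterfilling principle mentioned at the end of the abstract has a standard form for parallel Gaussian channels; a minimal bisection-based sketch is shown below (a generic illustration, not the paper's code), with the noise levels and power budget as hypothetical inputs.

```python
import numpy as np

def waterfilling(noise, total_power, iters=100):
    """Classic waterfilling: allocate power p_i = max(0, mu - noise_i),
    with the water level mu chosen so the allocations sum to total_power."""
    lo, hi = 0.0, np.max(noise) + total_power
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        used = np.sum(np.maximum(mu - noise, 0.0))
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - noise, 0.0)

# Low-noise (low-variance) channels receive more power, mirroring the
# observation that hiding concentrates in low-variance pixels.
print(waterfilling(np.array([0.1, 0.5, 1.0, 2.0]), total_power=2.0))
```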
Neural Cover Selection for Image Steganography
[ "Karl Chahine", "Hyeji Kim" ]
NeurIPS.cc/2024/Conference
2410.18216
[ "https://github.com/karlchahine/neural-cover-selection-for-image-steganography" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TzxSrNJE0T
@inproceedings{ surendran2024nonasymptotic, title={Non-asymptotic Analysis of Biased Adaptive Stochastic Approximation}, author={Sobihan Surendran and Adeline Fermanian and Antoine Godichon-Baggioni and Sylvain Le Corff}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TzxSrNJE0T} }
Stochastic Gradient Descent (SGD) with adaptive steps is widely used to train deep neural networks and generative models. Most theoretical results assume that it is possible to obtain unbiased gradient estimators, which is not the case in several recent deep learning and reinforcement learning applications that use Monte Carlo methods. This paper provides a comprehensive non-asymptotic analysis of SGD with biased gradients and adaptive steps for non-convex smooth functions. Our study incorporates time-dependent bias and emphasizes the importance of controlling the bias of the gradient estimator. In particular, we establish that Adagrad, RMSProp, and AMSGrad, an exponential moving average variant of Adam, with biased gradients, converge to critical points for smooth non-convex functions at a rate similar to existing results in the literature for the unbiased case. Finally, we provide experimental results using Variational Autoencoders (VAEs) and applications to several learning frameworks that illustrate our convergence results and show how the effect of bias can be reduced by appropriate hyperparameter tuning.
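For reference, a plain Adagrad loop, the simplest of the adaptive schemes analyzed, is sketched below; `grad_fn` stands in for a stochastic and possibly biased gradient oracle (e.g., a Monte Carlo estimate), and all constants are hypothetical.

```python
import numpy as np

def adagrad(grad_fn, theta0, lr=0.1, eps=1e-8, steps=1000):
    """Adagrad: per-coordinate steps scaled by the root of accumulated
    squared gradients. `grad_fn` may return a biased stochastic gradient."""
    theta = theta0.astype(float).copy()
    acc = np.zeros_like(theta)
    for _ in range(steps):
        g = grad_fn(theta)
        acc += g * g
        theta -= lr * g / (np.sqrt(acc) + eps)
    return theta

# Toy usage: noisy, slightly biased gradients of f(theta) = ||theta||^2 / 2.
rng = np.random.default_rng(0)
noisy_grad = lambda th: th + 0.01 + 0.1 * rng.normal(size=th.shape)  # +0.01 is the bias
print(adagrad(noisy_grad, np.ones(3)))
```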
Non-asymptotic Analysis of Biased Adaptive Stochastic Approximation
[ "Sobihan Surendran", "Adeline Fermanian", "Antoine Godichon-Baggioni", "Sylvain Le Corff" ]
NeurIPS.cc/2024/Conference
2402.02857
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Ty25oVKTqj
@inproceedings{ wang2024unisdf, title={Uni{SDF}: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections}, author={Fangjinhua Wang and Marie-Julie Rakotosaona and Michael Niemeyer and Richard Szeliski and Marc Pollefeys and Federico Tombari}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Ty25oVKTqj} }
Neural 3D scene representations have shown great potential for 3D reconstruction from 2D images. However, reconstructing real-world captures of complex scenes still remains a challenge. Existing generic 3D reconstruction methods often struggle to represent fine geometric details and do not adequately model reflective surfaces of large-scale scenes. Techniques that explicitly focus on reflective surfaces can model complex and detailed reflections by exploiting better reflection parameterizations. However, we observe that these methods are often not robust in real scenarios where non-reflective as well as reflective components are present. In this work, we propose UniSDF, a general purpose 3D reconstruction method that can reconstruct large complex scenes with reflections. We investigate both camera view as well as reflected view-based color parameterization techniques and find that explicitly blending these representations in 3D space enables reconstruction of surfaces that are more geometrically accurate, especially for reflective surfaces. We further combine this representation with a multi-resolution grid backbone that is trained in a coarse-to-fine manner, enabling faster reconstructions than prior methods. Extensive experiments on object-level datasets DTU, Shiny Blender as well as unbounded datasets Mip-NeRF 360 and Ref-NeRF real demonstrate that our method is able to robustly reconstruct complex large-scale scenes with fine details and reflective surfaces, leading to the best overall performance. Project page: https://fangjinhuawang.github.io/UniSDF.
UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections
[ "Fangjinhua Wang", "Marie-Julie Rakotosaona", "Michael Niemeyer", "Richard Szeliski", "Marc Pollefeys", "Federico Tombari" ]
NeurIPS.cc/2024/Conference
2312.13285
[ "" ]
https://huggingface.co/papers/2312.13285
2
5
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=TxffvJMnBy
@inproceedings{ sinha2024optimal, title={Optimal Algorithms for Online Convex Optimization with Adversarial Constraints}, author={Abhishek Sinha and Rahul Vaze}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TxffvJMnBy} }
A well-studied generalization of the standard online convex optimization (OCO) framework is constrained online convex optimization (COCO). In COCO, on every round, a convex cost function and a convex constraint function are revealed to the learner after it chooses the action for that round. The objective is to design an online learning policy that simultaneously achieves a small regret while ensuring a small cumulative constraint violation (CCV) against an adaptive adversary interacting over a horizon of length $T$. A long-standing open question in COCO is whether an online policy can simultaneously achieve $O(\sqrt{T})$ regret and $\tilde{O}(\sqrt{T})$ CCV without any restrictive assumptions. For the first time, we answer this in the affirmative and show that a simple first-order policy can simultaneously achieve these bounds. Furthermore, in the case of strongly convex cost and convex constraint functions, the regret guarantee can be improved to $O(\log T)$ while keeping the CCV bound the same as above. We establish these results by effectively combining adaptive OCO policies as a blackbox with Lyapunov optimization - a classic tool from control theory. Surprisingly, the analysis is short and elegant.
Optimal Algorithms for Online Convex Optimization with Adversarial Constraints
[ "Abhishek Sinha", "Rahul Vaze" ]
NeurIPS.cc/2024/Conference
2310.18955
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=TwrnhZfD6a
@inproceedings{ pranger2024test, title={Test Where Decisions Matter: Importance-driven Testing for Deep Reinforcement Learning}, author={Stefan Pranger and Hana Chockler and Martin Tappler and Bettina K{\"o}nighofer}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TwrnhZfD6a} }
In many Deep Reinforcement Learning (RL) problems, decisions in a trained policy vary in significance for the expected safety and performance of the policy. Since RL policies are very complex, testing efforts should concentrate on states in which the agent's decisions have the highest impact on the expected outcome. In this paper, we propose a novel model-based method to rigorously compute a ranking of state importance across the entire state space. We then focus our testing efforts on the highest-ranked states. In this paper, we focus on testing for safety. However, the proposed methods can be easily adapted to test for performance. In each iteration, our testing framework computes optimistic and pessimistic safety estimates. These estimates provide lower and upper bounds on the expected outcomes of the policy execution across all modeled states in the state space. Our approach divides the state space into safe and unsafe regions upon convergence, providing clear insights into the policy's weaknesses. Two important properties characterize our approach. (1) Optimal Test-Case Selection: At any time in the testing process, our approach evaluates the policy in the states that are most critical for safety. (2) Guaranteed Safety: Our approach can provide formal verification guarantees over the entire state space by sampling only a fraction of the policy. Any safety properties assured by the pessimistic estimate are formally proven to hold for the policy. We provide a detailed evaluation of our framework on several examples, showing that our method discovers unsafe policy behavior with low testing effort.
Test Where Decisions Matter: Importance-driven Testing for Deep Reinforcement Learning
[ "Stefan Pranger", "Hana Chockler", "Martin Tappler", "Bettina Könighofer" ]
NeurIPS.cc/2024/Conference
2411.07700
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Twqa0GFMGX
@inproceedings{ chen2024idiographic, title={Idiographic Personality Gaussian Process for Psychological Assessment}, author={Yehu Chen and Muchen Xi and Joshua J. Jackson and Jacob Montgomery and Roman Garnett}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Twqa0GFMGX} }
We develop a novel measurement framework based on a Gaussian process coregionalization model to address a long-standing debate in psychometrics: whether psychological features like personality share a common structure across the population or vary uniquely for individuals. We propose the idiographic personality Gaussian process (IPGP), an intermediate model that accommodates both shared trait structure across individuals and "idiographic" deviations. IPGP leverages the Gaussian process coregionalization model to represent responses from grouped survey batteries, adjusted for non-Gaussian ordinal data, and exploits stochastic variational inference for latent factor estimation. Using both synthetic data and a novel survey, we show that IPGP improves both prediction of actual responses and estimation of intrapersonal response patterns compared to existing benchmarks. In the survey study, IPGP also identifies unique clusters of personality taxonomies, displaying great potential for advancing individualized approaches to psychological diagnosis.
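As a sketch of the coregionalization idea underlying IPGP (not the paper's model, which further handles ordinal responses and variational inference), the intrinsic coregionalization model builds a joint covariance as the Kronecker product of a trait-structure matrix B and an input kernel K; the RBF choice and the toy sizes below are illustrative assumptions.

```python
import numpy as np

def icm_covariance(X, B, lengthscale=1.0):
    """Intrinsic coregionalization model: the joint covariance over (trait, input)
    pairs is kron(B, K), where B encodes shared trait structure and K is an
    RBF kernel over inputs."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-0.5 * d2 / lengthscale ** 2)
    return np.kron(B, K)

# Toy usage: 3 latent traits with shared structure B, 5 inputs in 2-D.
B = np.array([[1.0, 0.6, 0.2], [0.6, 1.0, 0.4], [0.2, 0.4, 1.0]])
X = np.random.randn(5, 2)
print(icm_covariance(X, B).shape)  # (15, 15)
```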
Idiographic Personality Gaussian Process for Psychological Assessment
[ "Yehu Chen", "Muchen Xi", "Joshua J. Jackson", "Jacob Montgomery", "Roman Garnett" ]
NeurIPS.cc/2024/Conference
2407.04970
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TwdX1W3M6S
@inproceedings{ ye2024online, title={Online Iterative Reinforcement Learning from Human Feedback with General Preference Model}, author={Chenlu Ye and Wei Xiong and Yuheng Zhang and Hanze Dong and Nan Jiang and Tong Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TwdX1W3M6S} }
We investigate Reinforcement Learning from Human Feedback (RLHF) in the context of a general preference oracle. In particular, we do not assume the existence of a reward function and an oracle preference signal drawn from the Bradley-Terry model as most of the prior works do. We consider a standard mathematical formulation, the reverse-KL regularized minimax game between two LLMs for RLHF under general preference oracle. The learning objective of this formulation is to find a policy so that it is consistently preferred by the KL-regularized preference oracle over any competing LLMs. We show that this framework is strictly more general than the reward-based one, and propose sample-efficient algorithms for both the offline learning from a pre-collected preference dataset and online learning where we can query the preference oracle along the way of training. Empirical studies verify the effectiveness of the proposed framework.
Online Iterative Reinforcement Learning from Human Feedback with General Preference Model
[ "Chenlu Ye", "Wei Xiong", "Yuheng Zhang", "Hanze Dong", "Nan Jiang", "Tong Zhang" ]
NeurIPS.cc/2024/Conference
2402.07314
[ "https://github.com/weixiongust/rlhf-reward-modeling" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Tw9nfNyOMy
@inproceedings{ gao2024vista, title={Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability}, author={Shenyuan Gao and Jiazhi Yang and Li Chen and Kashyap Chitta and Yihang Qiu and Andreas Geiger and Jun Zhang and Hongyang Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Tw9nfNyOMy} }
World models can foresee the outcomes of different actions, which is of paramount importance for autonomous driving. Nevertheless, existing driving world models still have limitations in generalization to unseen environments, prediction fidelity of critical details, and action controllability for flexible application. In this paper, we present Vista, a generalizable driving world model with high fidelity and versatile controllability. Based on a systematic diagnosis of existing methods, we introduce several key ingredients to address these limitations. To accurately predict real-world dynamics at high resolution, we propose two novel losses to promote the learning of moving instances and structural information. We also devise an effective latent replacement approach to inject historical frames as priors for coherent long-horizon rollouts. For action controllability, we incorporate a versatile set of controls from high-level intentions (command, goal point) to low-level maneuvers (trajectory, angle, and speed) through an efficient learning strategy. After large-scale training, the capabilities of Vista can seamlessly generalize to different scenarios. Extensive experiments on multiple datasets show that Vista outperforms the most advanced general-purpose video generator in over 70% of comparisons and surpasses the best-performing driving world model by 55% in FID and 27% in FVD. Moreover, for the first time, we utilize the capacity of Vista itself to establish a generalizable reward for real-world action evaluation without accessing the ground truth actions.
Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability
[ "Shenyuan Gao", "Jiazhi Yang", "Li Chen", "Kashyap Chitta", "Yihang Qiu", "Andreas Geiger", "Jun Zhang", "Hongyang Li" ]
NeurIPS.cc/2024/Conference
2405.17398
[ "https://github.com/opendrivelab/vista" ]
https://huggingface.co/papers/2405.17398
2
1
1
8
[ "OpenDriveLab/Vista", "yanis9351/vivid01" ]
[]
[ "rerun/Vista" ]
[ "OpenDriveLab/Vista", "yanis9351/vivid01" ]
[]
[ "rerun/Vista" ]
1
poster
null
https://openreview.net/forum?id=Tw032H2onS
@inproceedings{ xie2024boosted, title={Boosted Conformal Prediction Intervals}, author={Ran Xie and Rina Foygel Barber and Emmanuel Candes}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Tw032H2onS} }
This paper introduces a boosted conformal procedure designed to tailor conformalized prediction intervals toward specific desired properties, such as enhanced conditional coverage or reduced interval length. We employ machine learning techniques, notably gradient boosting, to systematically improve upon a predefined conformity score function. This process is guided by carefully constructed loss functions that measure the deviation of prediction intervals from the targeted properties. The procedure operates post-training, relying solely on model predictions and without modifying the trained model (e.g., the deep network). Systematic experiments demonstrate that starting from conventional conformal methods, our boosted procedure achieves substantial improvements in reducing interval length and decreasing deviation from target conditional coverage.
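The boosted procedure starts from a conventional conformal method; for orientation, a minimal split conformal baseline with the absolute-residual score is sketched below. The model, data, and miscoverage level are placeholders, and the boosting of the score function itself is not shown.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction with the absolute-residual conformity score:
    calibrate a quantile of |y - model(x)| and pad test predictions with it."""
    scores = np.abs(y_cal - model(X_cal))
    n = len(scores)
    # finite-sample corrected quantile level
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, q_level, method="higher")
    preds = model(X_test)
    return preds - q, preds + q

# Hypothetical usage with a fitted regression model `f`:
# lo, hi = split_conformal_interval(f, X_cal, y_cal, X_test, alpha=0.1)
```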
Boosted Conformal Prediction Intervals
[ "Ran Xie", "Rina Foygel Barber", "Emmanuel Candes" ]
NeurIPS.cc/2024/Conference
2406.07449
[ "https://github.com/ran-xie/boosted-conformal" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TutGINeJzZ
@inproceedings{ zhao2024a, title={A Huber Loss Minimization Approach to Mean Estimation under User-level Differential Privacy}, author={Puning Zhao and Lifeng Lai and Li Shen and Qingming Li and Jiafei Wu and Zhe Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TutGINeJzZ} }
Protecting the privacy of a user's entire contribution of samples is important in distributed systems. The most effective approach is the two-stage scheme, which first finds a small interval and then obtains a refined estimate by clipping samples into that interval. However, the clipping operation induces bias, which is serious if the sample distribution is heavy-tailed. In addition, users with large local sample sizes can make the sensitivity much larger, so the method is not suitable for imbalanced users. Motivated by these challenges, we propose a Huber loss minimization approach to mean estimation under user-level differential privacy. The connecting points of the Huber loss can be adaptively adjusted to deal with imbalanced users. Moreover, it avoids the clipping operation, thus significantly reducing the bias compared with the two-stage approach. We provide a theoretical analysis of our approach, which gives the noise strength needed for privacy protection as well as a bound on the mean squared error. The result shows that the new method is much less sensitive to imbalanced user-wise sample sizes and heavy-tailed sample distributions. Finally, we perform numerical experiments to validate our theoretical analysis.
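For intuition only (omitting the differential-privacy noise and the adaptive choice of connecting points described above), a minimal Huber-loss mean estimator can be written as a small gradient descent; delta, the step size, and the iteration count are hypothetical.

```python
import numpy as np

def huber_grad(r, delta):
    """Gradient of the Huber loss w.r.t. the estimate, for residuals r = x - mu."""
    return np.where(np.abs(r) <= delta, -r, -delta * np.sign(r))

def huber_mean(x, delta=1.0, lr=0.1, steps=500):
    """Estimate a robust mean by minimizing the average Huber loss."""
    mu = np.median(x)  # robust initialization
    for _ in range(steps):
        mu -= lr * np.mean(huber_grad(x - mu, delta))
    return mu

# 5% of samples are far-away outliers: the Huber estimate stays near 0,
# while the plain sample mean is pulled to about 2.5.
x = np.concatenate([np.random.normal(0, 1, 950), np.random.normal(50, 1, 50)])
print(huber_mean(x), np.mean(x))
```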
A Huber Loss Minimization Approach to Mean Estimation under User-level Differential Privacy
[ "Puning Zhao", "Lifeng Lai", "Li Shen", "Qingming Li", "Jiafei Wu", "Zhe Liu" ]
NeurIPS.cc/2024/Conference
2405.13453
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TusuJSbRxm
@inproceedings{ tkachuk2024trajectory, title={Trajectory Data Suffices for Statistically Efficient Learning in Offline {RL} with Linear \$q{\textasciicircum}{\textbackslash}pi\$-Realizability and Concentrability}, author={Volodymyr Tkachuk and Gell{\'e}rt Weisz and Csaba Szepesvari}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TusuJSbRxm} }
We consider offline reinforcement learning (RL) in $H$-horizon Markov decision processes (MDPs) under the linear $q^\pi$-realizability assumption, where the action-value function of every policy is linear with respect to a given $d$-dimensional feature function. The hope in this setting is that learning a good policy will be possible without requiring a sample size that scales with the number of states in the MDP. Foster et al. [2021] have shown this to be impossible even under $\text{\textit{concentrability}}$, a data coverage assumption where a coefficient $C_\text{conc}$ bounds the extent to which the state-action distribution of any policy can veer off the data distribution. However, the data in this previous work was in the form of a sequence of individual transitions. This leaves open the question of whether the negative result mentioned could be overcome if the data was composed of sequences of full trajectories. In this work we answer this question positively by proving that with trajectory data, a dataset of size $\text{poly}(d,H,C_\text{conc})/\epsilon^2$ is sufficient for deriving an $\epsilon$-optimal policy, regardless of the size of the state space. The main tool that makes this result possible is due to Weisz et al. [2023], who demonstrate that linear MDPs can be used to approximate linearly $q^\pi$-realizable MDPs. The connection to trajectory data is that the linear MDP approximation relies on "skipping" over certain states. The associated estimation problems are thus easy when working with trajectory data, while they remain nontrivial when working with individual transitions. The question of computational efficiency under our assumptions remains open.
Trajectory Data Suffices for Statistically Efficient Learning in Offline RL with Linear q^π-Realizability and Concentrability
[ "Volodymyr Tkachuk", "Gellért Weisz", "Csaba Szepesvari" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TuspoNzIdB
@inproceedings{ levy2024mixture, title={Mixture of neural fields for heterogeneous reconstruction in cryo-{EM}}, author={Axel Levy and Rishwanth Raghu and David Shustin and Adele Rui-Yang Peng and Huan Li and Oliver Biggs Clarke and Gordon Wetzstein and Ellen D Zhong}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TuspoNzIdB} }
Cryo-electron microscopy (cryo-EM) is an experimental technique for protein structure determination that images an ensemble of macromolecules in near-physiological contexts. While recent advances enable the reconstruction of dynamic conformations of a single biomolecular complex, current methods do not adequately model samples with mixed conformational and compositional heterogeneity. In particular, datasets containing mixtures of multiple proteins require the joint inference of structure, pose, compositional class, and conformational states for 3D reconstruction. Here, we present Hydra, an approach that models both conformational and compositional heterogeneity fully ab initio by parameterizing structures as arising from one of K neural fields. We employ a hybrid optimization strategy and demonstrate the effectiveness of our approach on synthetic datasets composed of mixtures of proteins with large degrees of conformational variability. We additionally demonstrate Hydra on an experimental dataset imaged from a cellular lysate containing a mixture of different protein complexes. Hydra expands the expressivity of heterogeneous reconstruction methods and thus broadens the scope of cryo-EM to increasingly complex samples.
Mixture of neural fields for heterogeneous reconstruction in cryo-EM
[ "Axel Levy", "Rishwanth Raghu", "David Shustin", "Adele Rui-Yang Peng", "Huan Li", "Oliver Biggs Clarke", "Gordon Wetzstein", "Ellen D Zhong" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TuCQdBo4NC
@inproceedings{ xu2024feelsnn, title={{FEEL}-{SNN}: Robust Spiking Neural Networks with Frequency Encoding and Evolutionary Leak Factor}, author={Mengting Xu and De Ma and Huajin Tang and Qian Zheng and Gang Pan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TuCQdBo4NC} }
Researchers currently attribute the inherent robustness of spiking neural networks (SNNs) to their biologically plausible spiking neurons, and are dedicated to developing more bio-inspired models to defend against attacks. However, most work relies solely on experimental analysis and lacks theoretical support, and the direct-encoding method and fixed membrane potential leak factor used in spiking neurons are simplified simulations of those in the biological nervous system, which makes it difficult to ensure generalizability across all datasets and networks. In contrast, the biological nervous system stays reliable even in highly complex noise environments; among the reasons are selective visual attention and non-fixed membrane potential leaks in biological neurons. This biological finding has inspired us to design a highly robust SNN model that closely mimics the biological nervous system. In our study, we first present a unified theoretical framework for the SNN robustness constraint, which suggests that improving the encoding method and the evolution of the membrane potential leak factor in spiking neurons can improve SNN robustness. Subsequently, we propose a robust SNN (FEEL-SNN) with Frequency Encoding (FE) and Evolutionary Leak factor (EL) to defend against different noises, mimicking the selective visual attention mechanism and non-fixed leak observed in biological systems. Experimental results confirm the efficacy of our FE, EL, and FEEL methods, either in isolation or in conjunction with established robustness enhancement algorithms, in enhancing the robustness of SNNs.
FEEL-SNN: Robust Spiking Neural Networks with Frequency Encoding and Evolutionary Leak Factor
[ "Mengting Xu", "De Ma", "Huajin Tang", "Qian Zheng", "Gang Pan" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TtcwVuBZu1
@inproceedings{ xie2024quadmamba, title={QuadMamba: Learning Quadtree-based Selective Scan for Visual State Space Model}, author={Fei Xie and Weijia Zhang and Zhongdao Wang and Chao Ma}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TtcwVuBZu1} }
Recent advancements in State Space Models, notably Mamba, have demonstrated superior performance over the dominant Transformer models, particularly in reducing the computational complexity from quadratic to linear. Yet, difficulties in adapting Mamba from language to vision tasks arise due to the distinct characteristics of visual data, such as the spatial locality and adjacency within images and large variations in information granularity across visual tokens. Existing vision Mamba approaches either flatten tokens into sequences in a raster scan fashion, which breaks the local adjacency of images, or manually partition tokens into windows, which limits their long-range modeling and generalization capabilities. To address these limitations, we present a new vision Mamba model, coined QuadMamba, that effectively captures local dependencies of varying granularities via quadtree-based image partition and scan. Concretely, our lightweight quadtree-based scan module learns to preserve the 2D locality of spatial regions within learned window quadrants. The module estimates the locality score of each token from their features, before adaptively partitioning tokens into window quadrants. An omnidirectional window shifting scheme is also introduced to capture more intact and informative features across different local regions. To make the discretized quadtree partition end-to-end trainable, we further devise a sequence masking strategy based on Gumbel-Softmax and its straight-through gradient estimator. Extensive experiments demonstrate that QuadMamba achieves state-of-the-art performance in various vision tasks, including image classification, object detection, instance segmentation, and semantic segmentation. Our code and models will be released.
QuadMamba: Learning Quadtree-based Selective Scan for Visual State Space Model
[ "Fei Xie", "Weijia Zhang", "Zhongdao Wang", "Chao Ma" ]
NeurIPS.cc/2024/Conference
2410.06806
[ "https://github.com/vision-sjtu/quadmamba" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Tt2xJaxDc4
@inproceedings{ aggarwal2024randomized, title={Randomized Truthful Auctions with Learning Agents}, author={Gagan Aggarwal and Anupam Gupta and Andres Perlroth and Grigoris Velegkas}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Tt2xJaxDc4} }
We study a setting where agents use no-regret learning algorithms to participate in repeated auctions. Recently, Kolumbus and Nisan [2022a] showed, rather surprisingly, that when bidders participate in second-price auctions using no-regret bidding algorithms, no matter how large the number of interactions $T$ is, the runner-up bidder may not converge to bidding truthfully. Our first result shows that this holds for all deterministic truthful auctions. We also show that the ratio of the learning rates of different bidders can qualitatively affect the convergence of the bidders. Next, we consider the problem of revenue maximization in this environment. In the setting with fully rational bidders, the seminal result of Myerson [1981] showed that revenue can be maximized by using a second-price auction with reserves. We show that, in stark contrast, in our setting with learning bidders, randomized auctions can have strictly better revenue guarantees than second-price auctions with reserves, when $T$ is large enough. To do this, we provide a black-box transformation from any truthful auction $A$ to an auction $A'$ such that: i) all mean-based no-regret learners that participate in $A'$ converge to bidding truthfully, ii) the distance between the allocation and payment rules of $A$ and $A'$ is negligible. Finally, we study revenue maximization in the non-asymptotic regime. We define a notion of auctioneer regret that compares the revenue generated to the revenue of a second-price auction with truthful bids. When the auctioneer has to use the same auction throughout the interaction, we show an (almost) tight regret bound of $\tilde{\Theta}(T^{3/4})$. Then, we consider the case where the auctioneer can use different auctions throughout the interaction, but in a way that is oblivious to the bids. For this setting, we show an (almost) tight bound of $\tilde{\Theta}(\sqrt{T})$.
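For readers unfamiliar with the benchmark mechanism in the abstract, a second-price auction with a reserve can be sketched as below; this is the textbook rule, not the paper's randomized transformation, and the bids and reserve are hypothetical.

```python
def second_price_with_reserve(bids, reserve):
    """Second-price auction with a reserve: the highest bid above the reserve wins
    and pays max(second-highest bid, reserve); otherwise there is no sale."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner = order[0]
    if bids[winner] < reserve:
        return None, 0.0
    runner_up = bids[order[1]] if len(bids) > 1 else 0.0
    return winner, max(runner_up, reserve)

print(second_price_with_reserve([3.0, 5.0, 4.0], reserve=4.5))  # (1, 4.5)
```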
Randomized Truthful Auctions with Learning Agents
[ "Gagan Aggarwal", "Anupam Gupta", "Andres Perlroth", "Grigoris Velegkas" ]
NeurIPS.cc/2024/Conference
2411.09517
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Tsb4dVtCHx
@inproceedings{ xie2024highdimensional, title={High-dimensional (Group) Adversarial Training in Linear Regression}, author={Yiling Xie and Xiaoming Huo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Tsb4dVtCHx} }
Adversarial training can achieve robustness against adversarial perturbations and has been widely used in machine-learning models. This paper delivers a non-asymptotic consistency analysis of the adversarial training procedure under $\ell_\infty$-perturbation in high-dimensional linear regression. It will be shown that, under the restricted eigenvalue condition, the associated convergence rate of prediction error can achieve the minimax rate up to a logarithmic factor in the high-dimensional linear regression on the class of sparse parameters. Additionally, the group adversarial training procedure is analyzed. Compared with classic adversarial training, it will be proved that the group adversarial training procedure enjoys a better prediction error upper bound under certain group-sparsity patterns.
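As a small illustration of the objective being analyzed (a generic sketch, not the paper's estimator), the inner maximization for $\ell_\infty$-bounded perturbations in linear regression has the well-known closed form $(|y - x^\top w| + \epsilon \|w\|_1)^2$, so the adversarial training loss can be evaluated directly; $\epsilon$ and the toy data below are placeholders.

```python
import numpy as np

def adversarial_sq_loss(w, X, y, eps):
    """Adversarial squared loss under l_inf-bounded feature perturbations:
    max_{||d||_inf <= eps} (y - (x + d)^T w)^2 = (|y - x^T w| + eps * ||w||_1)^2."""
    return np.mean((np.abs(y - X @ w) + eps * np.sum(np.abs(w))) ** 2)

# Toy check: the adversarial loss upper-bounds the clean squared loss.
rng = np.random.default_rng(0)
X, w = rng.normal(size=(50, 10)), rng.normal(size=10)
y = X @ w + 0.1 * rng.normal(size=50)
print(adversarial_sq_loss(w, X, y, eps=0.0) <= adversarial_sq_loss(w, X, y, eps=0.1))
```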
High-dimensional (Group) Adversarial Training in Linear Regression
[ "Yiling Xie", "Xiaoming Huo" ]
NeurIPS.cc/2024/Conference
2405.13940
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TrXV4dMDcG
@inproceedings{ dmitriev2024robust, title={Robust Mixture Learning when Outliers Overwhelm Small Groups}, author={Daniil Dmitriev and Rares-Darius Buhai and Stefan Tiegel and Alexander Wolters and Gleb Novikov and Amartya Sanyal and David Steurer and Fanny Yang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TrXV4dMDcG} }
We study the problem of estimating the means of well-separated mixtures when an adversary may add arbitrary outliers. While strong guarantees are available when the outlier fraction is significantly smaller than the minimum mixing weight, much less is known when outliers may crowd out low-weight clusters – a setting we refer to as list-decodable mixture learning (LD-ML). In this case, adversarial outliers can simulate additional spurious mixture components. Hence, if all means of the mixture must be recovered up to a small error in the output list, the list size needs to be larger than the number of (true) components. We propose an algorithm that obtains order-optimal error guarantees for each mixture mean with a minimal list-size overhead, significantly improving upon list-decodable mean estimation, the only existing method that is applicable for LD-ML. Although improvements are observed even when the mixture is non-separated, our algorithm achieves particularly strong guarantees when the mixture is separated: it can leverage the mixture structure to partially cluster the samples before carefully iterating a base learner for list-decodable mean estimation at different scales.
Robust Mixture Learning when Outliers Overwhelm Small Groups
[ "Daniil Dmitriev", "Rares-Darius Buhai", "Stefan Tiegel", "Alexander Wolters", "Gleb Novikov", "Amartya Sanyal", "David Steurer", "Fanny Yang" ]
NeurIPS.cc/2024/Conference
2407.15792
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TrN5TcWY87
@inproceedings{ chu2024inversionbased, title={Inversion-based Latent Bayesian Optimization}, author={Jaewon Chu and Jinyoung Park and Seunghun Lee and Hyunwoo J. Kim}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TrN5TcWY87} }
Latent Bayesian optimization (LBO) approaches have successfully adopted Bayesian optimization over a continuous latent space by employing an encoder-decoder architecture to address the challenge of optimization in a high dimensional or discrete input space. LBO learns a surrogate model to approximate the black-box objective function in the latent space. However, we observed that most LBO methods suffer from the `misalignment problem', which is induced by the reconstruction error of the encoder-decoder architecture. It hinders learning an accurate surrogate model and generating high-quality solutions. In addition, several trust region-based LBO methods select the anchor, the center of the trust region, based solely on the objective function value without considering the trust region's potential to enhance the optimization process. To address these issues, we propose $\textbf{Inv}$ersion-based Latent $\textbf{B}$ayesian $\textbf{O}$ptimization (InvBO), a plug-and-play module for LBO. InvBO consists of two components: an inversion method and a potential-aware trust region anchor selection. The inversion method searches the latent code that completely reconstructs the given target data. The potential-aware trust region anchor selection considers the potential capability of the trust region for better local optimization. Experimental results demonstrate the effectiveness of InvBO on nine real-world benchmarks, such as molecule design and arithmetic expression fitting tasks. Code is available at https://github.com/mlvlab/InvBO.
Inversion-based Latent Bayesian Optimization
[ "Jaewon Chu", "Jinyoung Park", "Seunghun Lee", "Hyunwoo J. Kim" ]
NeurIPS.cc/2024/Conference
2411.05330
[ "https://github.com/mlvlab/invbo" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Tpx9gcZVBf
@inproceedings{ sastry2024diffaug, title={DiffAug: A Diffuse-and-Denoise Augmentation for Training Robust Classifiers}, author={Chandramouli Shama Sastry and Sri Harsha Dumpala and Sageev Oore}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Tpx9gcZVBf} }
We introduce DiffAug, a simple and efficient diffusion-based augmentation technique to train image classifiers for the crucial yet challenging goal of improved classifier robustness. Applying DiffAug to a given example consists of one forward-diffusion step followed by one reverse-diffusion step. Using both ResNet-50 and Vision Transformer architectures, we comprehensively evaluate classifiers trained with DiffAug and demonstrate the surprising effectiveness of single-step reverse diffusion in improving robustness to covariate shifts, certified adversarial accuracy and out of distribution detection. When we combine DiffAug with other augmentations such as AugMix and DeepAugment we demonstrate further improved robustness. Finally, building on this approach, we also improve classifier-guided diffusion wherein we observe improvements in: (i) classifier-generalization, (ii) gradient quality (i.e., improved perceptual alignment) and (iii) image generation performance. We thus introduce a computationally efficient technique for training with improved robustness that does not require any additional data, and effectively complements existing augmentation approaches.
DiffAug: A Diffuse-and-Denoise Augmentation for Training Robust Classifiers
[ "Chandramouli Shama Sastry", "Sri Harsha Dumpala", "Sageev Oore" ]
NeurIPS.cc/2024/Conference
2306.09192
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Tnl2K6Iz9j
@inproceedings{ ai2024dynamic, title={Dynamic Service Fee Pricing under Strategic Behavior: Actions as Instruments and Phase Transition}, author={Rui Ai and David Simchi-Levi and Feng Zhu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Tnl2K6Iz9j} }
We study a dynamic pricing problem for third-party platform service fees under strategic, far-sighted customers. In each time period, the platform sets a service fee based on historical data, observes the resulting transaction quantities, and collects revenue. The platform also monitors equilibrium prices influenced by both demand and supply. The objective is to maximize total revenue over a time horizon $T$. Our problem incorporates three practical challenges: (a) initially, the platform lacks knowledge of the demand side beforehand, necessitating a balance between exploring (learning the demand curve) and exploiting (maximizing revenue) simultaneously; (b) since only equilibrium prices and quantities are observable, traditional Ordinary Least Squares (OLS) estimators would be biased and inconsistent; (c) buyers are rational and strategic, seeking to maximize their consumer surplus and potentially misrepresenting their preferences. To address these challenges, we propose novel algorithmic solutions. Our approach involves: (i) a carefully designed active randomness injection to balance exploration and exploitation effectively; (ii) using non-i.i.d. actions as instrumental variables (IV) to consistently estimate demand; (iii) a low-switching cost design that promotes nearly truthful buyer behavior. We show an expected regret bound of $\tilde{\mathcal{O}} (\sqrt{T}\wedge\sigma_S^{-2})$ and demonstrate its optimality, up to logarithmic factors, with respect to both the time horizon $T$ and the randomness in supply $\sigma_S$. Despite its simplicity, our model offers valuable insights into the use of actions as estimation instruments, the benefits of low-switching pricing policies in mitigating strategic buyer behavior, and the role of supply randomness in facilitating exploration which leads to a phase transition of policy performance.
Dynamic Service Fee Pricing under Strategic Behavior: Actions as Instruments and Phase Transition
[ "Rui Ai", "David Simchi-Levi", "Feng Zhu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Tj5wJslj0R
@inproceedings{ nori2024task, title={Task Confusion and Catastrophic Forgetting in Class-Incremental Learning: A Mathematical Framework for Discriminative and Generative Modelings}, author={Milad Khademi Nori and IL MIN KIM}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Tj5wJslj0R} }
In class-incremental learning (class-IL), models must classify all previously seen classes at test time without task-IDs, leading to task confusion. Despite being a key challenge, task confusion lacks a theoretical understanding. We present a novel mathematical framework for class-IL and prove the Infeasibility Theorem, showing optimal class-IL is impossible with discriminative modeling due to task confusion. However, we establish the Feasibility Theorem, demonstrating that generative modeling can achieve optimal class-IL by overcoming task confusion. We then assess popular class-IL strategies, including regularization, bias-correction, replay, and generative classifier, using our framework. Our analysis suggests that adopting generative modeling, either for generative replay or direct classification (generative classifier), is essential for optimal class-IL.
Task Confusion and Catastrophic Forgetting in Class-Incremental Learning: A Mathematical Framework for Discriminative and Generative Modelings
[ "Milad Khademi Nori", "IL MIN KIM" ]
NeurIPS.cc/2024/Conference
2410.20768
[ "https://github.com/miladkhademinori/class-incremental-learning" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Ti3ciyqlS3
@inproceedings{ lu2024improving, title={Improving Temporal Link Prediction via Temporal Walk Matrix Projection}, author={Xiaodong Lu and Leilei Sun and Tongyu Zhu and Weifeng Lv}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Ti3ciyqlS3} }
Temporal link prediction, aiming at predicting future interactions among entities based on historical interactions, is crucial for a series of real-world applications. Although previous methods have demonstrated the importance of relative encodings for effective temporal link prediction, computational efficiency remains a major concern in constructing these encodings. Moreover, existing relative encodings are usually constructed based on structural connectivity, where temporal information is seldom considered. To address the aforementioned issues, we first analyze existing relative encodings and unify them as a function of temporal walk matrices. This unification establishes a connection between relative encodings and temporal walk matrices, providing a more principled way for analyzing and designing relative encodings. Based on this analysis, we propose a new temporal graph neural network called TPNet, which introduces a temporal walk matrix that incorporates the time decay effect to simultaneously consider both temporal and structural information. Moreover, TPNet designs a random feature propagation mechanism with theoretical guarantees to implicitly maintain the temporal walk matrices, which improves the computation and storage efficiency. Experimental results on 13 benchmark datasets verify the effectiveness and efficiency of TPNet, where TPNet outperforms other baselines on most datasets and achieves a maximum speedup of $33.3 \times$ compared to the SOTA baseline.
Improving Temporal Link Prediction via Temporal Walk Matrix Projection
[ "Xiaodong Lu", "Leilei Sun", "Tongyu Zhu", "Weifeng Lv" ]
NeurIPS.cc/2024/Conference
2410.04013
[ "https://github.com/lxd99/tpnet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Thou1rKdpZ
@inproceedings{ zhang2024incontext, title={In-Context Learning of a Linear Transformer Block: Benefits of the {MLP} Component and One-Step {GD} Initialization}, author={Ruiqi Zhang and Jingfeng Wu and Peter Bartlett}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Thou1rKdpZ} }
We study the \emph{in-context learning} (ICL) ability of a \emph{Linear Transformer Block} (LTB) that combines a linear attention component and a linear multi-layer perceptron (MLP) component. For ICL of linear regression with a Gaussian prior and a \emph{non-zero mean}, we show that LTB can achieve nearly Bayes optimal ICL risk. In contrast, using only linear attention must incur an irreducible additive approximation error. Furthermore, we establish a correspondence between LTB and one-step gradient descent estimators with learnable initialization ($\mathsf{GD}-\beta$), in the sense that every $\mathsf{GD}-\beta$ estimator can be implemented by an LTB estimator and every optimal LTB estimator that minimizes the in-class ICL risk is effectively a $\mathsf{GD}-\beta$ estimator. Finally, we show that $\mathsf{GD}-\beta$ estimators can be efficiently optimized with gradient flow, despite a non-convex training objective. Our results reveal that LTB achieves ICL by implementing $\mathsf{GD}-\beta$, and they highlight the role of MLP layers in reducing approximation error.
In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization
[ "Ruiqi Zhang", "Jingfeng Wu", "Peter Bartlett" ]
NeurIPS.cc/2024/Conference
2402.14951
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Tg2EVad7VF
@inproceedings{ tan2024diffnorm, title={DiffNorm: Self-Supervised Normalization for Non-autoregressive Speech-to-speech Translation}, author={Weiting Tan and Jingyu Zhang and Lingfeng Shen and Daniel Khashabi and Philipp Koehn}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Tg2EVad7VF} }
Non-autoregressive Transformers (NATs) have recently been applied in direct speech-to-speech translation systems, which convert speech across different languages without intermediate text data. Although NATs generate high-quality outputs and offer faster inference than autoregressive models, they tend to produce incoherent and repetitive results due to the complex data distribution (e.g., acoustic and linguistic variations in speech). In this work, we introduce DiffNorm, a diffusion-based normalization strategy that simplifies data distributions for training NAT models. After training with a self-supervised noise estimation objective, DiffNorm constructs normalized target data by denoising synthetically corrupted speech features. Additionally, we propose to regularize NATs with classifier-free guidance, improving model robustness and translation quality by randomly dropping out source information during training. Our strategies result in a notable improvement of about $+7$ ASR-BLEU for English-Spanish (En-Es) translation and $+2$ ASR-BLEU for English-French (En-Fr) on the CVSS benchmark, while attaining over $14\times$ speedup for En-Es and $5 \times$ speedup for En-Fr translations compared to autoregressive baselines.
DiffNorm: Self-Supervised Normalization for Non-autoregressive Speech-to-speech Translation
[ "Weiting Tan", "Jingyu Zhang", "Lingfeng Shen", "Daniel Khashabi", "Philipp Koehn" ]
NeurIPS.cc/2024/Conference
2405.13274
[ "https://github.com/steventan0110/diffnorm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TeQvz5AlI8
@inproceedings{ li2024dat, title={{DAT}: Improving Adversarial Robustness via Generative Amplitude Mix-up in Frequency Domain}, author={Fengpeng Li and Kemou Li and Haiwei Wu and Jinyu Tian and Jiantao Zhou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TeQvz5AlI8} }
To protect deep neural networks (DNNs) from adversarial attacks, adversarial training (AT) is developed by incorporating adversarial examples (AEs) into model training. Recent studies show that adversarial attacks disproportionately impact the patterns within the phase of the sample's frequency spectrum---typically containing crucial semantic information---more than those in the amplitude, resulting in the model's erroneous categorization of AEs. We find that, by mixing the amplitude of training samples' frequency spectrum with those of distractor images for AT, the model can be guided to focus on phase patterns unaffected by adversarial perturbations. As a result, the model's robustness can be improved. Unfortunately, it is still challenging to select appropriate distractor images, which should mix the amplitude without affecting the phase patterns. To this end, in this paper, we propose an optimized **Adversarial Amplitude Generator (AAG)** to achieve a better tradeoff between improving the model's robustness and retaining phase patterns. Based on this generator, together with an efficient AE production procedure, we design a new **Dual Adversarial Training (DAT)** strategy. Experiments on various datasets show that our proposed DAT leads to significantly improved robustness against diverse adversarial attacks. The source code is available at https://github.com/Feng-peng-Li/DAT.
DAT: Improving Adversarial Robustness via Generative Amplitude Mix-up in Frequency Domain
[ "Fengpeng Li", "Kemou Li", "Haiwei Wu", "Jinyu Tian", "Jiantao Zhou" ]
NeurIPS.cc/2024/Conference
2410.12307
[ "https://github.com/Feng-peng-Li/DAT" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TeBKVfhP2M
@inproceedings{ nagle2024fundamental, title={Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models}, author={Alliot Nagle and Adway Girish and Marco Bondaschi and Michael Gastpar and Ashok Vardhan Makkuva and Hyeji Kim}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TeBKVfhP2M} }
We formalize the problem of prompt compression for large language models (LLMs) and present a framework to unify token-level prompt compression methods which create hard prompts for black-box models. We derive the distortion-rate function for this setup as a linear program, and provide an efficient algorithm to compute this fundamental limit via the dual of the linear program. Using the distortion-rate function as the baseline, we study the performance of existing compression schemes on a synthetic dataset consisting of prompts generated from a Markov chain, natural language queries, and their respective answers. Our empirical analysis demonstrates the criticality of query-aware prompt compression, where the compressor has knowledge of the downstream task/query for the black-box LLM. We show that there is a large gap between the performance of current prompt compression methods and the optimal strategy, and propose Adaptive QuerySelect, a query-aware, variable-rate adaptation of a prior work to close the gap. We extend our experiments to a small natural language dataset to further confirm our findings on our synthetic dataset.
Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models
[ "Alliot Nagle", "Adway Girish", "Marco Bondaschi", "Michael Gastpar", "Ashok Vardhan Makkuva", "Hyeji Kim" ]
NeurIPS.cc/2024/Conference
2407.15504
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Te8vI2wGTh
@inproceedings{ qu2024hyperopinion, title={Hyper-opinion Evidential Deep Learning for Out-of-Distribution Detection}, author={Jingen Qu and Yufei Chen and Xiaodong Yue and Wei Fu and Qiguang Huang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Te8vI2wGTh} }
Evidential Deep Learning (EDL), grounded in Evidence Theory and Subjective Logic (SL), provides a robust framework to estimate uncertainty for out-of-distribution (OOD) detection alongside traditional classification probabilities. However, the EDL framework is constrained by its focus on evidence that supports only single categories, neglecting the other collective evidences that could corroborate multiple in-distribution categories. This limitation leads to a diminished estimation of uncertainty and a subsequent decline in OOD detection performance. Additionally, EDL encounters the vanishing gradient problem within its fully-connected layers, further degrading classification accuracy. To address these issues, we introduce hyper-domain and propose Hyper-opinion Evidential Deep Learning (HEDL). HEDL extends the evidence modeling paradigm by explicitly integrating sharp evidence, which supports a singular category, with vague evidence that accommodates multiple potential categories. Additionally, we propose a novel opinion projection mechanism that translates hyper-opinion into multinomial-opinion, which is then optimized within the EDL framework to ensure precise classification and refined uncertainty estimation. HEDL integrates evidences across various categories to yield a holistic evidentiary foundation for achieving superior OOD detection. Furthermore, our proposed opinion projection method effectively mitigates the vanishing gradient issue, ensuring classification accuracy without additional model complexity. Extensive experiments over many datasets demonstrate our proposed method outperforms existing OOD detection methods.
Hyper-opinion Evidential Deep Learning for Out-of-Distribution Detection
[ "Jingen Qu", "Yufei Chen", "Xiaodong Yue", "Wei Fu", "Qiguang Huang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Tck41RANGK
@inproceedings{ modoranu2024microadam, title={MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence}, author={Ionut-Vlad Modoranu and Mher Safaryan and Grigory Malinovsky and Eldar Kurtic and Thomas Robert and Peter Richt{\'a}rik and Dan Alistarh}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Tck41RANGK} }
We propose a new variant of the Adam optimizer called MicroAdam that specifically minimizes memory overheads, while maintaining theoretical convergence guarantees. We achieve this by compressing the gradient information before it is fed into the optimizer state, thereby reducing its memory footprint significantly. We control the resulting compression error via a novel instance of the classical *error feedback* mechanism from distributed optimization in which *the error correction information is itself compressed* to allow for practical memory gains. We prove that the resulting approach maintains theoretical convergence guarantees competitive to those of AMSGrad, while providing good practical performance. Specifically, we show that MicroAdam can be implemented efficiently on GPUs: on both million-scale (BERT) and billion-scale (LLaMA) models, MicroAdam provides practical convergence competitive to that of the uncompressed Adam baseline, with lower memory usage and similar running time. Our code is available at https://github.com/IST-DASLab/MicroAdam.
MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence
[ "Ionut-Vlad Modoranu", "Mher Safaryan", "Grigory Malinovsky", "Eldar Kurtic", "Thomas Robert", "Peter Richtárik", "Dan Alistarh" ]
NeurIPS.cc/2024/Conference
2405.15593
[ "https://github.com/ist-daslab/microadam" ]
https://huggingface.co/papers/2405.15593
1
1
0
7
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=Tcft2V63Vd
@inproceedings{ chen2024unveiling, title={Unveiling The Matthew Effect Across Channels: Assessing Layer Width Sufficiency via Weight Norm Variance}, author={Yiting Chen and Jiazi Bu and Junchi Yan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Tcft2V63Vd} }
The trade-off between cost and performance has been a longstanding and critical issue for deep neural networks. One key factor affecting the computational cost is the width of each layer. However, in practice, the width of layers in a neural network is mostly empirically determined. In this paper, we show that a pattern regarding the variance of weight norm corresponding to different channels can indicate whether the layer is sufficiently wide and may help us better allocate computational resources across the layers. Starting from the simple intuition that channels with larger weights would have larger gradients, and that the difference in weight norm between channels with similar weights enlarges, we empirically validate, through experiments across different data modalities and network architectures, that wide and narrow layers show two different patterns. Based on the two different patterns, we identify three stages during training and explain each stage with corresponding evidence. We further propose to adjust the width based on the identified pattern and show that conventional layer width settings for CNNs could be adjusted to reduce the number of parameters while boosting the performance.
Unveiling The Matthew Effect Across Channels: Assessing Layer Width Sufficiency via Weight Norm Variance
[ "Yiting Chen", "Jiazi Bu", "Junchi Yan" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TcCorXxNJQ
@inproceedings{ wang2024flora, title={{FL}o{RA}: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations}, author={Ziyao Wang and Zheyu Shen and Yexiao He and Guoheng Sun and Hongyi Wang and Lingjuan Lyu and Ang Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=TcCorXxNJQ} }
The rapid development of Large Language Models (LLMs) has been pivotal in advancing AI, with pre-trained LLMs being adaptable to diverse downstream tasks through fine-tuning. Federated learning (FL) further enhances fine-tuning in a privacy-aware manner by utilizing clients' local data through in-situ computation, eliminating the need for data movement. However, fine-tuning LLMs, given their massive scale of parameters, poses challenges for clients with constrained and heterogeneous resources in FL. Previous methods employed low-rank adaptation (LoRA) for efficient federated fine-tuning but utilized traditional FL aggregation strategies on LoRA adapters. This approach led to mathematically inaccurate aggregation noise, reducing fine-tuning effectiveness and failing to address heterogeneous LoRAs. In this work, we first highlight the mathematical incorrectness of LoRA aggregation in existing federated fine-tuning methods. We introduce a new approach called FLoRA that enables federated fine-tuning on heterogeneous LoRA adapters across clients through a novel stacking-based aggregation method. Our approach is noise-free and seamlessly supports heterogeneous LoRAs. Extensive experiments demonstrate FLoRA's superior performance in both homogeneous and heterogeneous settings, surpassing state-of-the-art methods. We envision this work as a milestone for efficient, privacy-preserving, and accurate federated fine-tuning of LLMs.
FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations
[ "Ziyao Wang", "Zheyu Shen", "Yexiao He", "Guoheng Sun", "Hongyi Wang", "Lingjuan Lyu", "Ang Li" ]
NeurIPS.cc/2024/Conference
2409.05976
[ "https://github.com/atp-1010/federatedllm" ]
https://huggingface.co/papers/2409.05976
1
0
0
7
[]
[]
[]
[]
[]
[]
1
poster