bibtex_url
null
proceedings
stringlengths
42
42
bibtext
stringlengths
197
848
abstract
stringlengths
303
3.45k
title
stringlengths
10
159
authors
sequencelengths
1
34
id
stringclasses
44 values
arxiv_id
stringlengths
0
10
GitHub
sequencelengths
1
1
paper_page
stringclasses
899 values
n_linked_authors
int64
-1
13
upvotes
int64
-1
109
num_comments
int64
-1
13
n_authors
int64
-1
92
Models
sequencelengths
0
100
Datasets
sequencelengths
0
19
Spaces
sequencelengths
0
100
old_Models
sequencelengths
0
100
old_Datasets
sequencelengths
0
19
old_Spaces
sequencelengths
0
100
paper_page_exists_pre_conf
int64
0
1
type
stringclasses
2 values
null
https://openreview.net/forum?id=3hcn0UxP72
@inproceedings{ nurisso2024topological, title={Topological obstruction to the training of shallow Re{LU} neural networks}, author={Marco Nurisso and Pierrick Leroy and Francesco Vaccarino}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3hcn0UxP72} }
Studying the interplay between the geometry of the loss landscape and the optimization trajectories of simple neural networks is a fundamental step for understanding their behavior in more complex settings. This paper reveals the presence of a topological obstruction in the loss landscape of shallow ReLU neural networks trained using gradient flow. We discuss how the homogeneous nature of the ReLU activation function constrains the training trajectories to lie on a product of quadric hypersurfaces whose shape depends on the particular initialization of the network's parameters. When the neural network's output is a single scalar, we prove that these quadrics can have multiple connected components, limiting the set of reachable parameters during training. We analytically compute the number of these components and discuss the possibility of mapping one to the other through neuron rescaling and permutation. In this simple setting, we find that the non-connectedness results in a topological obstruction, which, depending on the initialization, can make the global optimum unreachable. We validate this result with numerical experiments.
Topological obstruction to the training of shallow ReLU neural networks
[ "Marco Nurisso", "Pierrick Leroy", "Francesco Vaccarino" ]
NeurIPS.cc/2024/Conference
2410.14837
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3gvGZhkkVt
@inproceedings{ esmati2024sea, title={{SEA}: State-Exchange Attention for High-Fidelity Physics Based Transformers}, author={Parsa Esmati and Amirhossein Dadashzadeh and Vahid Goodarzi Ardakani and Nicolas Larrosa and Nicol{\`o} Grilli}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3gvGZhkkVt} }
Current approaches using sequential networks have shown promise in estimating field variables for dynamical systems, but they are often limited by high rollout errors. The unresolved issue of rollout error accumulation results in unreliable estimations as the network predicts further into the future, with each step's error compounding and leading to an increase in inaccuracy. Here, we introduce the State-Exchange Attention (SEA) module, a novel transformer-based module enabling information exchange between encoded fields through multi-head cross-attention. The cross-field multidirectional information exchange design enables all state variables in the system to exchange information with one another, capturing physical relationships and symmetries between fields. Additionally, we introduce an efficient ViT-like mesh autoencoder to generate spatially coherent mesh embeddings for a large number of meshing cells. The SEA-integrated transformer achieves state-of-the-art rollout error compared to other competitive baselines. Specifically, we outperform PbGMR-GMUS Transformer-RealNVP and GMR-GMUS Transformer, with a reduction in error of 88% and 91%, respectively. Furthermore, we demonstrate that the SEA module alone can reduce errors by 97% for state variables that are highly dependent on other states of the system. The repository for this work is available at: https://github.com/ParsaEsmati/SEA
SEA: State-Exchange Attention for High-Fidelity Physics Based Transformers
[ "Parsa Esmati", "Amirhossein Dadashzadeh", "Vahid Goodarzi Ardakani", "Nicolas Larrosa", "Nicolò Grilli" ]
NeurIPS.cc/2024/Conference
2410.15495
[ "https://github.com/parsaesmati/sea" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3gKsKFeuMA
@inproceedings{ zhou2024improving, title={Improving the Learning Capability of Small-size Image Restoration Network by Deep Fourier Shifting}, author={Man Zhou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3gKsKFeuMA} }
State-of-the-art image restoration methods currently face challenges in terms of computational requirements and performance, making them impractical for deployment on edge devices such as phones and resource-limited devices. As a result, there is a need to develop alternative solutions with efficient designs that can achieve comparable performance to transformer or large-kernel methods. This motivates our research to explore techniques for improving the capability of small-size image restoration networks, building on the success of large receptive fields. Targeting an expanded receptive field, the spatial-shift operator is tailored for efficient spatial communication and has achieved remarkable advances in high-level image classification tasks, like $S^2$-MLP and ShiftVit. However, its potential has rarely been explored in low-level image restoration tasks. The underlying reason behind this obstacle is that image restoration is sensitive to spatial shifts due to severe region-aware information loss, which exhibits a different behavior from high-level tasks. To address this challenge and unleash the potential of spatial shift for image restoration, we propose an information-lossless shifting operator, i.e., Deep Fourier Shifting, that is customized for image restoration. To develop our proposed operator, we first revisit the principle of the shift operator and apply it to the Fourier domain, where the shift operator can be modeled in an information-lossless Fourier cycling manner. Inspired by Fourier cycling, we design two variants of Deep Fourier Shifting, namely the amplitude-phase variant and the real-imaginary variant. These variants are generic operators that can be directly plugged into existing image restoration networks as a drop-in replacement for the standard convolution unit, consuming fewer parameters. Extensive experiments across multiple low-level tasks including image denoising, low-light image enhancement, guided image super-resolution, and image deblurring demonstrate consistent performance gains obtained by our Deep Fourier Shifting while reducing the computation burden. Additionally, ablation studies verify the robustness of the shift displacement with stable performance improvement.
Improving the Learning Capability of Small-size Image Restoration Network by Deep Fourier Shifting
[ "Man Zhou" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3f8i9GlBzu
@inproceedings{ taleb2024can, title={Can Transformers Smell Like Humans?}, author={Farzaneh Taleb and Miguel Vasco and Antonio H. Ribeiro and M{\r{a}}rten Bj{\"o}rkman and Danica Kragic}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3f8i9GlBzu} }
The human brain encodes stimuli from the environment into representations that form a sensory perception of the world. Despite recent advances in understanding visual and auditory perception, olfactory perception remains an under-explored topic in the machine learning community due to the lack of large-scale datasets annotated with labels of human olfactory perception. In this work, we ask the question of whether pre-trained transformer models of chemical structures encode representations that are aligned with human olfactory perception, i.e., can transformers smell like humans? We demonstrate that representations encoded from transformers pre-trained on general chemical structures are highly aligned with human olfactory perception. We use multiple datasets and different types of perceptual representations to show that the representations encoded by transformer models are able to predict: (i) labels associated with odorants provided by experts; (ii) continuous ratings provided by human participants with respect to pre-defined descriptors; and (iii) similarity ratings between odorants provided by human participants. Finally, we evaluate the extent to which this alignment is associated with physicochemical features of odorants known to be relevant for olfactory decoding.
Can Transformers Smell Like Humans?
[ "Farzaneh Taleb", "Miguel Vasco", "Antonio H. Ribeiro", "Mårten Björkman", "Danica Kragic" ]
NeurIPS.cc/2024/Conference
2411.03038
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=3dn1hINA6o
@inproceedings{ sims2024the, title={The Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning}, author={Anya Sims and Cong Lu and Jakob Nicolaus Foerster and Yee Whye Teh}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3dn1hINA6o} }
Offline reinforcement learning (RL) aims to train agents from pre-collected datasets. However, this comes with the added challenge of estimating the value of behaviors not covered in the dataset. Model-based methods offer a potential solution by training an approximate dynamics model, which then allows collection of additional synthetic data via rollouts in this model. The prevailing theory treats this approach as online RL in an approximate dynamics model, and any remaining performance gap is therefore understood as being due to dynamics model errors. In this paper, we analyze this assumption and investigate how popular algorithms perform as the learned dynamics model is improved. In contrast to both intuition and theory, if the learned dynamics model is replaced by the true error-free dynamics, existing model-based methods completely fail. This reveals a key oversight: The theoretical foundations assume sampling of full horizon rollouts in the learned dynamics model; however, in practice, the number of model-rollout steps is aggressively reduced to prevent accumulating errors. We show that this truncation of rollouts results in a set of edge-of-reach states at which we are effectively "bootstrapping from the void." This triggers pathological value overestimation and complete performance collapse. We term this the edge-of-reach problem. Based on this new insight, we fill important gaps in existing theory, and reveal how prior model-based methods are primarily addressing the edge-of-reach problem, rather than model-inaccuracy as claimed. Finally, we propose Reach-Aware Value Learning (RAVL), a simple and robust method that directly addresses the edge-of-reach problem and hence - unlike existing methods - does not fail as the dynamics model is improved. Since world models will inevitably improve, we believe this is a key step towards future-proofing offline RL.
The Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning
[ "Anya Sims", "Cong Lu", "Jakob Nicolaus Foerster", "Yee Whye Teh" ]
NeurIPS.cc/2024/Conference
2402.12527
[ "https://github.com/anyasims/edge-of-reach-ravl" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3csuL7TVpV
@inproceedings{ shi2024decodingtime, title={Decoding-Time Language Model Alignment with Multiple Objectives}, author={Ruizhe Shi and Yifang Chen and Yushi Hu and Alisa Liu and Hannaneh Hajishirzi and Noah A. Smith and Simon Shaolei Du}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3csuL7TVpV} }
Aligning language models (LMs) to human preferences has emerged as a critical pursuit, enabling these models to better serve diverse user needs. Existing methods primarily focus on optimizing LMs for a single reward function, limiting their adaptability to varied objectives. Here, we propose $\textbf{multi-objective decoding~(MOD)}$, a decoding-time algorithm that outputs the next token from a linear combination of predictions of all base models, for any given weighting over different objectives. We exploit a common form among a family of $f$-divergence regularized alignment approaches (such as PPO, DPO, and their variants) to identify a closed-form solution by Legendre transform, and derive an efficient decoding strategy. Theoretically, we show why existing approaches can be sub-optimal even in natural settings and obtain optimality guarantees for our method. Empirical results demonstrate the effectiveness of the algorithm. For example, compared to a parameter-merging baseline, MOD achieves 12.8\% overall reward improvement when equally optimizing towards $3$ objectives. Moreover, we experiment with MOD on combining three fully-finetuned LMs of different model sizes, each aimed at different objectives such as safety, coding, and general user preference. Unlike traditional methods that require careful curation of a mixture of datasets to achieve comprehensive improvement, we can quickly experiment with preference weightings using MOD to find the best combination of models. Our best combination reduces toxicity on Toxigen to nearly 0\% and achieves 7.9--33.3\% improvement across three other metrics ($\textit{i.e.}$, Codex@1, GSM-COT, BBH-COT).
Decoding-Time Language Model Alignment with Multiple Objectives
[ "Ruizhe Shi", "Yifang Chen", "Yushi Hu", "Alisa Liu", "Hannaneh Hajishirzi", "Noah A. Smith", "Simon Shaolei Du" ]
NeurIPS.cc/2024/Conference
2406.18853
[ "https://github.com/srzer/mod" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3cb6pF3Tvf
@inproceedings{ zhao2024learningaugmented, title={Learning-Augmented Algorithms for the Bahncard Problem}, author={Hailiang Zhao and Xueyan Tang and Peng Chen and Shuiguang Deng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3cb6pF3Tvf} }
In this paper, we study learning-augmented algorithms for the Bahncard problem. The Bahncard problem is a generalization of the ski-rental problem, where a traveler needs to irrevocably and repeatedly decide between a cheap short-term solution and an expensive long-term one with an unknown future. Even though the problem is canonical, only a primal-dual-based learning-augmented algorithm was explicitly designed for it. We develop a new learning-augmented algorithm, named PFSUM, that incorporates both history and short-term future to improve online decision making. We derive the competitive ratio of PFSUM as a function of the prediction error and conduct extensive experiments to show that PFSUM outperforms the primal-dual-based algorithm.
Learning-Augmented Algorithms for the Bahncard Problem
[ "Hailiang Zhao", "Xueyan Tang", "Peng Chen", "Shuiguang Deng" ]
NeurIPS.cc/2024/Conference
2410.15257
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3cL2XDyaEB
@inproceedings{ zhang2024egonc, title={{EG}onc : Energy-based Open-Set Node Classification with substitute Unknowns}, author={Qin Zhang and Zelin Shi and Shirui Pan and Junyang Chen and Huisi Wu and Xiaojun Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3cL2XDyaEB} }
Open-set Classification (OSC) is a critical requirement for safely deploying machine learning models in the open world, which aims to classify samples from known classes and reject out-of-distribution (OOD) samples. Existing methods exploit the feature space of the trained network and attempt to estimate the uncertainty in the predictions. However, softmax-based neural networks are found to be overly confident in their predictions even on data they have never seen before, and the immense diversity of OOD examples also makes such methods fragile. To this end, we follow the idea of estimating the underlying density of the training data to decide whether a given input is close to the in-distribution (IND) data, and adopt energy-based models (EBMs) as density estimators. A novel energy-based generative open-set node classification method, \textit{EGonc}, is proposed to achieve open-set graph learning. Specifically, we first generate substitute unknowns to mimic the distribution of real open-set samples, based on the information of graph structures. Then, an additional energy logit representing the virtual OOD class is learned from the residual of the feature against the principal space, and matched with the original logits by a constant scaling. This virtual logit serves as the indicator of OOD-ness. EGonc has nice theoretical properties that guarantee an overall distinguishable margin between the detection scores for IND and OOD samples. Comprehensive experimental evaluations of EGonc also demonstrate its superiority.
EGonc : Energy-based Open-Set Node Classification with substitute Unknowns
[ "Qin Zhang", "Zelin Shi", "Shirui Pan", "Junyang Chen", "Huisi Wu", "Xiaojun Chen" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3apt5AJ5QN
@inproceedings{ raman2024global, title={Global Rewards in Restless Multi-Armed Bandits}, author={Naveen Janaki Raman and Zheyuan Ryan Shi and Fei Fang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3apt5AJ5QN} }
Restless multi-armed bandits (RMAB) extend multi-armed bandits so arm pulls impact future arm states. Despite the success of RMABs, a key limiting assumption is the separability of rewards into a sum across arms. We address this deficiency by proposing restless-multi-armed bandit with global rewards (RMAB-G), a generalization of RMABs to global non-separable rewards. To solve RMAB-G, we develop the Linear-Whittle and Shapley-Whittle indices, which extend Whittle indices from RMABs to RMAB-Gs. We prove approximation bounds which demonstrate how Linear and Shapley-Whittle indices fail for non-linear rewards. To overcome this limitation, we propose two sets of adaptive policies: the first computes indices iteratively and the second combines indices with Monte-Carlo Tree Search (MCTS). Empirically, we demonstrate that adaptive policies outperform both pre-computed index policies and baselines in synthetic and real-world food rescue datasets.
Global Rewards in Restless Multi-Armed Bandits
[ "Naveen Janaki Raman", "Zheyuan Ryan Shi", "Fei Fang" ]
NeurIPS.cc/2024/Conference
2406.00738
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3ZAfFoAcUI
@inproceedings{ saunshi2024on, title={On the Inductive Bias of Stacking Towards Improving Reasoning}, author={Nikunj Saunshi and Stefani Karp and Shankar Krishnan and Sobhan Miryoosefi and Sashank J. Reddi and Sanjiv Kumar}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3ZAfFoAcUI} }
Given the increasing scale of model sizes, efficient training strategies like gradual stacking have garnered interest. Stacking enables efficient training by gradually growing the depth of a model in stages and using layers from a smaller model in an earlier stage to initialize the next stage. Although efficient for training, the model biases induced by such growing approaches are largely unexplored. In this work, we examine this fundamental aspect of gradual stacking, going beyond its efficiency benefits. We propose a variant of gradual stacking called MIDAS that can speed up language model training by up to 40\%. Furthermore we discover an intriguing phenomenon: MIDAS is not only training-efficient but surprisingly also has an inductive bias towards improving downstream tasks, especially tasks that require reasoning abilities like reading comprehension and math problems, despite having similar or slightly worse perplexity compared to baseline training. To further analyze this inductive bias, we construct {\em reasoning primitives} – simple synthetic tasks that are building blocks for reasoning – and find that a model pretrained with stacking is significantly better than standard pretraining on these primitives, with and without fine-tuning. This provides stronger and more robust evidence for this inductive bias towards reasoning. These findings of training efficiency and inductive bias towards reasoning are verified at 1B, 2B and 8B parameter language models. Finally, we conjecture the underlying reason for this inductive bias by exploring the connection of stacking to looped models and provide strong supporting empirical analysis.
On the Inductive Bias of Stacking Towards Improving Reasoning
[ "Nikunj Saunshi", "Stefani Karp", "Shankar Krishnan", "Sobhan Miryoosefi", "Sashank J. Reddi", "Sanjiv Kumar" ]
NeurIPS.cc/2024/Conference
2409.19044
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3Z0LTDjIM0
@inproceedings{ bai2024faster, title={Faster Local Solvers for Graph Diffusion Equations}, author={Jiahe Bai and Baojian Zhou and Deqing Yang and Yanghua Xiao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3Z0LTDjIM0} }
Efficient computation of graph diffusion equations (GDEs), such as Personalized PageRank, Katz centrality, and the Heat kernel, is crucial for clustering, training neural networks, and many other graph-related problems. Standard iterative methods require accessing the whole graph per iteration, making them time-consuming for large-scale graphs. While existing local solvers approximate diffusion vectors through heuristic local updates, they often operate sequentially and are typically designed for specific diffusion types, limiting their applicability. Given that diffusion vectors are highly localizable, as measured by the participation ratio, this paper introduces a novel framework for approximately solving GDEs using a local diffusion process. This framework reveals the suboptimality of existing local solvers. Furthermore, our approach effectively localizes standard iterative solvers by designing simple and provably sublinear time algorithms. These new local solvers are highly parallelizable, making them well-suited for implementation on GPUs. We demonstrate the effectiveness of our framework in quickly obtaining approximate diffusion vectors, achieving up to a hundred-fold speed improvement, and its applicability to large-scale dynamic graphs. Our framework could also facilitate more efficient local message-passing mechanisms for GNNs.
Faster Local Solvers for Graph Diffusion Equations
[ "Jiahe Bai", "Baojian Zhou", "Deqing Yang", "Yanghua Xiao" ]
NeurIPS.cc/2024/Conference
2410.21634
[ "https://github.com/JiaheBai/Faster-Local-Solver-for-GDEs" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3YkeHuT1o6
@inproceedings{ liao2024a, title={A Swiss Army Knife for Heterogeneous Federated Learning: Flexible Coupling via Trace Norm}, author={Tianchi Liao and Lele Fu and Jialong Chen and Zhen WANG and Zibin Zheng and Chuan Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3YkeHuT1o6} }
The heterogeneity issue in federated learning (FL) has attracted increasing attention, and most existing methods attempt to address it. Currently, due to system and objective heterogeneity, enabling clients to hold models of different architectures and tasks of different demands has become an important direction in FL. Most existing FL methods are based on the homogeneity assumption, namely, that different clients have architecturally identical models with the same tasks, and are thus unable to handle complex and multivariate data and tasks. To flexibly address these heterogeneity limitations, we propose FedSAK, a novel federated multi-task learning framework built on the tensor trace norm. Specifically, it treats each client as a task and splits the local model into a feature extractor and a prediction head. Clients can flexibly choose shared structures based on their heterogeneous situations and upload them to the server, which learns correlations among client models by mining low-rank model structures through the tensor trace norm. Furthermore, we derive convergence and generalization bounds under non-convex settings. Evaluated on 6 real-world datasets against 13 advanced FL models, FedSAK demonstrates superior performance.
A Swiss Army Knife for Heterogeneous Federated Learning: Flexible Coupling via Trace Norm
[ "Tianchi Liao", "Lele Fu", "Jialong Chen", "Zhen WANG", "Zibin Zheng", "Chuan Chen" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3YIyB82rjX
@inproceedings{ tan2024handling, title={Handling Learnwares from Heterogeneous Feature Spaces with Explicit Label Exploitation}, author={Peng Tan and Hai-Tian Liu and Zhi-Hao Tan and Zhi-Hua Zhou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3YIyB82rjX} }
The learnware paradigm aims to help users leverage numerous existing high-performing models instead of starting from scratch, where a learnware consists of a well-trained model and the specification describing its capability. Numerous learnwares are accommodated by a learnware dock system. When users solve tasks with the system, models that fully match the task feature space are often rare or even unavailable. However, models with heterogeneous feature space can still be helpful. This paper finds that label information, particularly model outputs, is helpful yet previously less exploited in the accommodation of heterogeneous learnwares. We extend the specification to better leverage model pseudo-labels and subsequently enrich the unified embedding space for better specification evolvement. With label information, the learnware identification can also be improved by additionally comparing conditional distributions. Experiments demonstrate that, even without a model explicitly tailored to user tasks, the system can effectively handle tasks by leveraging models from diverse feature spaces.
Handling Learnwares from Heterogeneous Feature Spaces with Explicit Label Exploitation
[ "Peng Tan", "Hai-Tian Liu", "Zhi-Hao Tan", "Zhi-Hua Zhou" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3XnBVK9sD6
@inproceedings{ miao2024inform, title={Info{RM}: Mitigating Reward Hacking in {RLHF} via Information-Theoretic Reward Modeling}, author={Yuchun Miao and Sen Zhang and Liang Ding and Rong Bao and Lefei Zhang and Dacheng Tao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3XnBVK9sD6} }
Despite the success of reinforcement learning from human feedback (RLHF) in aligning language models with human values, reward hacking, also termed reward overoptimization, remains a critical challenge. This issue primarily arises from reward misgeneralization, where reward models (RMs) compute reward using spurious features that are irrelevant to human preferences. In this work, we tackle this problem from an information-theoretic perspective and propose a framework for reward modeling, namely InfoRM, by introducing a variational information bottleneck objective to filter out irrelevant information. Notably, we further identify a correlation between overoptimization and outliers in the IB latent space of InfoRM, establishing it as a promising tool for detecting reward overoptimization. Inspired by this finding, we propose the Cluster Separation Index (CSI), which quantifies deviations in the IB latent space, as an indicator of reward overoptimization to facilitate the development of online mitigation strategies. Extensive experiments on a wide range of settings and RM scales (70M, 440M, 1.4B, and 7B) demonstrate the effectiveness of InfoRM. Further analyses reveal that InfoRM's overoptimization detection mechanism is not only effective but also robust across a broad range of datasets, signifying a notable advancement in the field of RLHF. The code will be released upon acceptance.
InfoRM: Mitigating Reward Hacking in RLHF via Information-Theoretic Reward Modeling
[ "Yuchun Miao", "Sen Zhang", "Liang Ding", "Rong Bao", "Lefei Zhang", "Dacheng Tao" ]
NeurIPS.cc/2024/Conference
2402.09345
[ "https://github.com/miaoyuchun/inform" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3XLQp2Xx3J
@inproceedings{ zhang2024gshider, title={{GS}-Hider: Hiding Messages into 3D Gaussian Splatting}, author={Xuanyu Zhang and Jiarui Meng and Runyi Li and Zhipei Xu and Yongbing Zhang and Jian Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3XLQp2Xx3J} }
3D Gaussian Splatting (3DGS) has become an emerging research focus in the fields of 3D scene reconstruction and novel view synthesis. Given that training a 3DGS requires a significant amount of time and computational cost, it is crucial to protect the copyright, integrity, and privacy of such 3D assets. Steganography, as a crucial technique for encrypted transmission and copyright protection, has been extensively studied. However, it still lacks profound exploration targeted at 3DGS. Unlike its predecessor NeRF, 3DGS possesses two distinct features: 1) explicit 3D representation; and 2) real-time rendering speed. These characteristics result in the 3DGS point cloud files being public and transparent, with each Gaussian point having a clear physical significance. Therefore, ensuring the security and fidelity of the original 3D scene while embedding information into the 3DGS point cloud files is an extremely challenging task. To solve the above-mentioned issue, we propose a steganography framework for 3DGS, dubbed GS-Hider, which can embed 3D scenes and images into original GS point clouds in an invisible manner and accurately extract the hidden messages. Specifically, we design a coupled secured feature attribute to replace the original 3DGS's spherical harmonics coefficients and then use a scene decoder and a message decoder to disentangle the original RGB scene and the hidden message. Extensive experiments demonstrate that the proposed GS-Hider can effectively conceal multimodal messages without compromising rendering quality and possesses exceptional security, robustness, capacity, and flexibility. Our project is available at: https://xuanyuzhang21.github.io/project/gshider.
GS-Hider: Hiding Messages into 3D Gaussian Splatting
[ "Xuanyu Zhang", "Jiarui Meng", "Runyi Li", "Zhipei Xu", "Yongbing Zhang", "Jian Zhang" ]
NeurIPS.cc/2024/Conference
2405.15118
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3Tzcot1LKb
@inproceedings{ meng2024simpo, title={Sim{PO}: Simple Preference Optimization with a Reference-Free Reward}, author={Yu Meng and Mengzhou Xia and Danqi Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3Tzcot1LKb} }
Direct Preference Optimization (DPO) is a widely used offline preference optimization algorithm that reparameterizes reward functions in reinforcement learning from human feedback (RLHF) to enhance simplicity and training stability. In this work, we propose SimPO, a simpler yet more effective approach. The effectiveness of SimPO is attributed to a key design: using the _average_ log probability of a sequence as the implicit reward. This reward formulation better aligns with model generation and eliminates the need for a reference model, making it more compute and memory efficient. Additionally, we introduce a target reward margin to the Bradley-Terry objective to encourage a larger margin between the winning and losing responses, further improving the algorithm's performance. We compare SimPO to DPO and its latest variants across various state-of-the-art training setups, including both base and instruction-tuned models such as Mistral, Llama 3, and Gemma 2. We evaluate on extensive chat-based evaluation benchmarks, including AlpacaEval 2, MT-Bench, and Arena-Hard. Our results demonstrate that SimPO consistently and significantly outperforms existing approaches without substantially increasing response length. Specifically, SimPO outperforms DPO by up to 6.4 points on AlpacaEval 2 and by up to 7.5 points on Arena-Hard. Our top-performing model, built on Gemma-2-9B-it, achieves a 72.4\% length-controlled win rate on AlpacaEval 2, a 59.1\% win rate on Arena-Hard, and ranks 1st on Chatbot Arena among $<$10B models with real user votes.
SimPO: Simple Preference Optimization with a Reference-Free Reward
[ "Yu Meng", "Mengzhou Xia", "Danqi Chen" ]
NeurIPS.cc/2024/Conference
2405.14734
[ "https://github.com/princeton-nlp/simpo" ]
https://huggingface.co/papers/2405.14734
1
11
1
3
[ "princeton-nlp/gemma-2-9b-it-SimPO", "princeton-nlp/Llama-3-Instruct-8B-SimPO", "AALF/gemma-2-27b-it-SimPO-37K", "Magpie-Align/Llama-3-8B-Magpie-Align-v0.1", "AALF/gemma-2-27b-it-SimPO-37K-100steps", "princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2", "princeton-nlp/gemma-2-9b-it-DPO", "Magpie-Align/Llama-3-8B-Magpie-Align-v0.3", "QuantFactory/gemma-2-9b-it-SimPO-GGUF-v2", "QuantFactory/gemma-2-9b-it-DPO-GGUF", "grimjim/Kitsunebi-v1-Gemma2-8k-9B", "QuantFactory/gemma-2-9b-it-SimPO-GGUF", "Magpie-Align/Llama-3.1-8B-Magpie-Align-v0.1", "princeton-nlp/Llama-3-Instruct-8B-RDPO-v0.2", "Magpie-Align/Llama-3-8B-Magpie-Align-v0.2", "princeton-nlp/Llama-3-Base-8B-SFT", "QuantFactory/Mistral-7B-Instruct-SLiC-HF-GGUF", "QuantFactory/Mistral-7B-Instruct-RDPO-GGUF", "QuantFactory/Mistral-7B-Base-SFT-RDPO-GGUF", "QuantFactory/Mistral-7B-Base-SFT-SimPO-GGUF", "QuantFactory/Mistral-7B-Instruct-DPO-GGUF", "grimjim/Kitsunebi-v1-Gemma2-8k-9B-GGUF", "RichardErkhov/princeton-nlp_-_Llama-3-Base-8B-SFT-gguf", "princeton-nlp/Mistral-7B-Base-SFT-CPO", "princeton-nlp/Mistral-7B-Base-SFT-RRHF", "princeton-nlp/Mistral-7B-Base-SFT-SLiC-HF", "princeton-nlp/Mistral-7B-Instruct-CPO", "princeton-nlp/Mistral-7B-Instruct-RRHF", "princeton-nlp/Mistral-7B-Instruct-SLiC-HF", "princeton-nlp/Llama-3-Base-8B-SFT-CPO", "princeton-nlp/Llama-3-Base-8B-SFT-RRHF", "princeton-nlp/Llama-3-Base-8B-SFT-SLiC-HF", "princeton-nlp/Llama-3-Instruct-8B-CPO", "princeton-nlp/Llama-3-Instruct-8B-RRHF", "princeton-nlp/Llama-3-Instruct-8B-SLiC-HF", "princeton-nlp/Llama-3-Instruct-8B-RRHF-v0.2", "princeton-nlp/Llama-3-Instruct-8B-SLiC-HF-v0.2", "princeton-nlp/Llama-3-Instruct-8B-DPO-v0.2", "princeton-nlp/Llama-3-Instruct-8B-IPO-v0.2", "princeton-nlp/Llama-3-Instruct-8B-CPO-v0.2", "princeton-nlp/Llama-3-Instruct-8B-KTO-v0.2", "princeton-nlp/Llama-3-Instruct-8B-ORPO-v0.2", "sunatte/txt2sql", "princeton-nlp/Llama-3-Instruct-8B-DPO", "princeton-nlp/Llama-3-Instruct-8B-IPO", "princeton-nlp/Llama-3-Instruct-8B-KTO", 
"princeton-nlp/Llama-3-Instruct-8B-ORPO", "princeton-nlp/Llama-3-Instruct-8B-RDPO", "princeton-nlp/Llama-3-Base-8B-SFT-DPO", "princeton-nlp/Llama-3-Base-8B-SFT-IPO", "princeton-nlp/Llama-3-Base-8B-SFT-KTO", "princeton-nlp/Llama-3-Base-8B-SFT-ORPO", "princeton-nlp/Llama-3-Base-8B-SFT-RDPO", "princeton-nlp/Mistral-7B-Instruct-DPO", "princeton-nlp/Mistral-7B-Instruct-IPO", "princeton-nlp/Mistral-7B-Instruct-KTO", "princeton-nlp/Mistral-7B-Instruct-ORPO", "princeton-nlp/Mistral-7B-Instruct-RDPO", "princeton-nlp/Mistral-7B-Base-SFT-DPO", "princeton-nlp/Mistral-7B-Base-SFT-IPO", "princeton-nlp/Mistral-7B-Base-SFT-KTO", "princeton-nlp/Mistral-7B-Base-SFT-RDPO", "princeton-nlp/Mistral-7B-Base-SFT-SimPO", "QuantFactory/Llama-3-Instruct-8B-SimPO-GGUF", "QuantFactory/Llama-3-Instruct-8B-DPO-GGUF", "QuantFactory/Llama-3-Instruct-8B-RDPO-GGUF", "postitive666/Llama3-Instruct-8B-SimPO", "RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf", "RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf", "gohsyi/Llama-3-8B-SFT", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-SimPO-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Base-8B-SFT-DPO-gguf", "RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-SimPO-v0.2-gguf", "RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.3-gguf", "RichardErkhov/grimjim_-_Kitsunebi-v1-Gemma2-8k-9B-gguf", "fakezeta/gemma-2-9b-it-SimPO-ov-int8", "fakezeta/gemma-2-9b-it-SimPO-ov-int4", "RichardErkhov/AALF_-_gemma-2-27b-it-SimPO-37K-gguf", "RichardErkhov/princeton-nlp_-_Mistral-7B-Instruct-DPO-gguf", "RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-DPO-gguf", "RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-KTO-gguf", "RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-SimPO-gguf", "RichardErkhov/AALF_-_gemma-2-27b-it-SimPO-37K-100steps-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-RDPO-v0.2-gguf", 
"RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-v0.2-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-ORPO-v0.2-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-KTO-v0.2-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-DPO-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Base-8B-SFT-ORPO-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Base-8B-SFT-RDPO-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Base-8B-SFT-CPO-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-RDPO-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-RRHF-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-RRHF-v0.2-gguf", "RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-4bits", "mav23/gemma-2-9b-it-SimPO-GGUF", "RichardErkhov/princeton-nlp_-_Mistral-7B-Instruct-SLiC-HF-gguf", "RichardErkhov/princeton-nlp_-_Mistral-7B-Instruct-ORPO-gguf" ]
[]
[ "allenai/WildBench", "featherless-ai/try-this-model", "eduagarcia/open_pt_llm_leaderboard", "allenai/ZebraLogic", "logikon/open_cot_leaderboard", "KwabsHug/GameConfigIdea", "Justinrune/LLaMA-Factory", "Granther/try-this-model", "cot-leaderboard/open-cot-dashboard", "Darok/Featherless-Feud", "flydust/Chat-with-Magpie", "Veronika1101/Rewrites", "emekaboris/try-this-model", "StevenChen16/LLama3-Compliance-Review", "SC999/NV_Nemotron", "AFischer1985/Frag-dein-PDF", "John6666/votepurchase-crash", "meldynamics/gemma-2-9b-it-SimPO", "smarttang/blingsec" ]
[ "princeton-nlp/gemma-2-9b-it-SimPO", "princeton-nlp/Llama-3-Instruct-8B-SimPO", "AALF/gemma-2-27b-it-SimPO-37K", "Magpie-Align/Llama-3-8B-Magpie-Align-v0.1", "AALF/gemma-2-27b-it-SimPO-37K-100steps", "princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2", "princeton-nlp/gemma-2-9b-it-DPO", "Magpie-Align/Llama-3-8B-Magpie-Align-v0.3", "QuantFactory/gemma-2-9b-it-SimPO-GGUF-v2", "QuantFactory/gemma-2-9b-it-DPO-GGUF", "grimjim/Kitsunebi-v1-Gemma2-8k-9B", "QuantFactory/gemma-2-9b-it-SimPO-GGUF", "Magpie-Align/Llama-3.1-8B-Magpie-Align-v0.1", "princeton-nlp/Llama-3-Instruct-8B-RDPO-v0.2", "Magpie-Align/Llama-3-8B-Magpie-Align-v0.2", "princeton-nlp/Llama-3-Base-8B-SFT", "QuantFactory/Mistral-7B-Instruct-SLiC-HF-GGUF", "QuantFactory/Mistral-7B-Instruct-RDPO-GGUF", "QuantFactory/Mistral-7B-Base-SFT-RDPO-GGUF", "QuantFactory/Mistral-7B-Base-SFT-SimPO-GGUF", "QuantFactory/Mistral-7B-Instruct-DPO-GGUF", "grimjim/Kitsunebi-v1-Gemma2-8k-9B-GGUF", "RichardErkhov/princeton-nlp_-_Llama-3-Base-8B-SFT-gguf", "princeton-nlp/Mistral-7B-Base-SFT-CPO", "princeton-nlp/Mistral-7B-Base-SFT-RRHF", "princeton-nlp/Mistral-7B-Base-SFT-SLiC-HF", "princeton-nlp/Mistral-7B-Instruct-CPO", "princeton-nlp/Mistral-7B-Instruct-RRHF", "princeton-nlp/Mistral-7B-Instruct-SLiC-HF", "princeton-nlp/Llama-3-Base-8B-SFT-CPO", "princeton-nlp/Llama-3-Base-8B-SFT-RRHF", "princeton-nlp/Llama-3-Base-8B-SFT-SLiC-HF", "princeton-nlp/Llama-3-Instruct-8B-CPO", "princeton-nlp/Llama-3-Instruct-8B-RRHF", "princeton-nlp/Llama-3-Instruct-8B-SLiC-HF", "princeton-nlp/Llama-3-Instruct-8B-RRHF-v0.2", "princeton-nlp/Llama-3-Instruct-8B-SLiC-HF-v0.2", "princeton-nlp/Llama-3-Instruct-8B-DPO-v0.2", "princeton-nlp/Llama-3-Instruct-8B-IPO-v0.2", "princeton-nlp/Llama-3-Instruct-8B-CPO-v0.2", "princeton-nlp/Llama-3-Instruct-8B-KTO-v0.2", "princeton-nlp/Llama-3-Instruct-8B-ORPO-v0.2", "sunatte/txt2sql", "princeton-nlp/Llama-3-Instruct-8B-DPO", "princeton-nlp/Llama-3-Instruct-8B-IPO", "princeton-nlp/Llama-3-Instruct-8B-KTO", 
"princeton-nlp/Llama-3-Instruct-8B-ORPO", "princeton-nlp/Llama-3-Instruct-8B-RDPO", "princeton-nlp/Llama-3-Base-8B-SFT-DPO", "princeton-nlp/Llama-3-Base-8B-SFT-IPO", "princeton-nlp/Llama-3-Base-8B-SFT-KTO", "princeton-nlp/Llama-3-Base-8B-SFT-ORPO", "princeton-nlp/Llama-3-Base-8B-SFT-RDPO", "princeton-nlp/Mistral-7B-Instruct-DPO", "princeton-nlp/Mistral-7B-Instruct-IPO", "princeton-nlp/Mistral-7B-Instruct-KTO", "princeton-nlp/Mistral-7B-Instruct-ORPO", "princeton-nlp/Mistral-7B-Instruct-RDPO", "princeton-nlp/Mistral-7B-Base-SFT-DPO", "princeton-nlp/Mistral-7B-Base-SFT-IPO", "princeton-nlp/Mistral-7B-Base-SFT-KTO", "princeton-nlp/Mistral-7B-Base-SFT-RDPO", "princeton-nlp/Mistral-7B-Base-SFT-SimPO", "QuantFactory/Llama-3-Instruct-8B-SimPO-GGUF", "QuantFactory/Llama-3-Instruct-8B-DPO-GGUF", "QuantFactory/Llama-3-Instruct-8B-RDPO-GGUF", "postitive666/Llama3-Instruct-8B-SimPO", "RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf", "RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf", "gohsyi/Llama-3-8B-SFT", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-SimPO-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Base-8B-SFT-DPO-gguf", "RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-SimPO-v0.2-gguf", "RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.3-gguf", "RichardErkhov/grimjim_-_Kitsunebi-v1-Gemma2-8k-9B-gguf", "fakezeta/gemma-2-9b-it-SimPO-ov-int8", "fakezeta/gemma-2-9b-it-SimPO-ov-int4", "RichardErkhov/AALF_-_gemma-2-27b-it-SimPO-37K-gguf", "RichardErkhov/princeton-nlp_-_Mistral-7B-Instruct-DPO-gguf", "RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-DPO-gguf", "RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-KTO-gguf", "RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-SimPO-gguf", "RichardErkhov/AALF_-_gemma-2-27b-it-SimPO-37K-100steps-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-RDPO-v0.2-gguf", 
"RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-v0.2-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-ORPO-v0.2-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-KTO-v0.2-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-DPO-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Base-8B-SFT-ORPO-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Base-8B-SFT-RDPO-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Base-8B-SFT-CPO-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-RDPO-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-RRHF-gguf", "RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-RRHF-v0.2-gguf", "RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-4bits", "mav23/gemma-2-9b-it-SimPO-GGUF", "RichardErkhov/princeton-nlp_-_Mistral-7B-Instruct-SLiC-HF-gguf", "RichardErkhov/princeton-nlp_-_Mistral-7B-Instruct-ORPO-gguf" ]
[]
[ "allenai/WildBench", "featherless-ai/try-this-model", "eduagarcia/open_pt_llm_leaderboard", "allenai/ZebraLogic", "logikon/open_cot_leaderboard", "KwabsHug/GameConfigIdea", "Justinrune/LLaMA-Factory", "Granther/try-this-model", "cot-leaderboard/open-cot-dashboard", "Darok/Featherless-Feud", "flydust/Chat-with-Magpie", "Veronika1101/Rewrites", "emekaboris/try-this-model", "StevenChen16/LLama3-Compliance-Review", "SC999/NV_Nemotron", "AFischer1985/Frag-dein-PDF", "John6666/votepurchase-crash", "meldynamics/gemma-2-9b-it-SimPO", "smarttang/blingsec" ]
1
poster
null
https://openreview.net/forum?id=3TxyhBZHT2
@inproceedings{ man2024lexicond, title={Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding}, author={Yunze Man and Shuhong Zheng and Zhipeng Bao and Martial Hebert and Liangyan Gui and Yu-Xiong Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3TxyhBZHT2} }
Complex 3D scene understanding has gained increasing attention, with scene encoding strategies built on top of visual foundation models playing a crucial role in this success. However, the optimal scene encoding strategies for various scenarios remain unclear, particularly compared to their image-based counterparts. To address this issue, we present the first comprehensive study that probes various visual encoding models for 3D scene understanding, identifying the strengths and limitations of each model across different scenarios. Our evaluation spans seven vision foundation encoders, including image, video, and 3D foundation models. We evaluate these models in four tasks: Vision-Language Scene Reasoning, Visual Grounding, Segmentation, and Registration, each focusing on different aspects of scene understanding. Our evaluation yields key intriguing findings: Unsupervised image foundation models demonstrate superior overall performance, video models excel in object-level tasks, diffusion models benefit geometric tasks, language-pretrained models show unexpected limitations in language-related tasks, and the mixture-of-vision-expert (MoVE) strategy leads to consistent performance improvement. These insights challenge some conventional understandings, provide novel perspectives on leveraging visual foundation models, and highlight the need for more flexible encoder selection in future vision-language and scene understanding tasks.
Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding
[ "Yunze Man", "Shuhong Zheng", "Zhipeng Bao", "Martial Hebert", "Liangyan Gui", "Yu-Xiong Wang" ]
NeurIPS.cc/2024/Conference
2409.03757
[ "" ]
https://huggingface.co/papers/2409.03757
1
2
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=3SzrqwupUx
@inproceedings{ cirone2024theoretical, title={Theoretical Foundations of Deep Selective State-Space Models}, author={Nicola Muca Cirone and Antonio Orvieto and Benjamin Walker and Cristopher Salvi and Terry Lyons}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3SzrqwupUx} }
Structured state-space models (SSMs) are gaining popularity as effective foundational architectures for sequential data, demonstrating outstanding performance across a diverse set of domains alongside desirable scalability properties. Recent developments show that if the linear recurrence powering SSMs allows for a selectivity mechanism leveraging multiplicative interactions between inputs and hidden states (e.g. Mamba, GLA, Hawk/Griffin, HGRN2), then the resulting architecture can surpass attention-powered foundation models trained on text in both accuracy and efficiency, at the scale of billions of parameters. In this paper, we give theoretical grounding to the selectivity mechanism, often linked to in-context learning, using tools from Rough Path Theory. We provide a framework for the theoretical analysis of generalized selective SSMs, fully characterizing their expressive power and identifying the gating mechanism as the crucial architectural choice. Our analysis provides a closed-form description of the expressive power of modern SSMs, such as Mamba, quantifying theoretically the drastic improvement in performance from the previous generation of models, such as S4. Our theory not only motivates the success of modern selective state-space models, but also provides a solid framework to understand the expressive power of future SSM variants. In particular, it suggests cross-channel interactions could play a vital role in future improvements.
Theoretical Foundations of Deep Selective State-Space Models
[ "Nicola Muca Cirone", "Antonio Orvieto", "Benjamin Walker", "Cristopher Salvi", "Terry Lyons" ]
NeurIPS.cc/2024/Conference
2402.19047
[ "https://github.com/benjamin-walker/selective-ssms-and-linear-cdes" ]
https://huggingface.co/papers/2402.19047
0
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=3RxcarQFRn
@inproceedings{ yao2024generative, title={Generative Adversarial Model-Based Optimization via Source Critic Regularization}, author={Michael S Yao and Yimeng Zeng and Hamsa Bastani and Jacob R. Gardner and James Gee and Osbert Bastani}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3RxcarQFRn} }
Offline model-based optimization seeks to optimize against a learned surrogate model without querying the true oracle objective function during optimization. Such tasks are commonly encountered in protein design, robotics, and clinical medicine where evaluating the oracle function is prohibitively expensive. However, inaccurate surrogate model predictions are frequently encountered along offline optimization trajectories. To address this limitation, we propose *generative adversarial model-based optimization* using **adaptive source critic regularization (aSCR)**—a task- and optimizer- agnostic framework for constraining the optimization trajectory to regions of the design space where the surrogate function is reliable. We propose a computationally tractable algorithm to dynamically adjust the strength of this constraint, and show how leveraging aSCR with standard Bayesian optimization outperforms existing methods on a suite of offline generative design tasks. Our code is available at https://github.com/michael-s-yao/gabo.
Generative Adversarial Model-Based Optimization via Source Critic Regularization
[ "Michael S Yao", "Yimeng Zeng", "Hamsa Bastani", "Jacob R. Gardner", "James Gee", "Osbert Bastani" ]
NeurIPS.cc/2024/Conference
2402.06532
[ "https://github.com/michael-s-yao/gabo" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3R7Go6WkDm
@inproceedings{ ranjan2024posthoc, title={Post-Hoc Reversal: Are We Selecting Models Prematurely?}, author={Rishabh Ranjan and Saurabh Garg and Mrigank Raman and Carlos Guestrin and Zachary Chase Lipton}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3R7Go6WkDm} }
Trained models are often composed with post-hoc transforms such as temperature scaling (TS), ensembling and stochastic weight averaging (SWA) to improve performance, robustness, uncertainty estimation, etc. However, such transforms are typically applied only after the base models have already been finalized by standard means. In this paper, we challenge this practice with an extensive empirical study. In particular, we demonstrate a phenomenon that we call post-hoc reversal, where performance trends are reversed after applying post-hoc transforms. This phenomenon is especially prominent in high-noise settings. For example, while base models overfit badly early in training, both ensembling and SWA favor base models trained for more epochs. Post-hoc reversal can also prevent the appearance of double descent and mitigate mismatches between test loss and test error seen in base models. Preliminary analyses suggest that these transforms induce reversal by suppressing the influence of mislabeled examples, exploiting differences in their learning dynamics from those of clean examples. Based on our findings, we propose post-hoc selection, a simple technique whereby post-hoc metrics inform model development decisions such as early stopping, checkpointing, and broader hyperparameter choices. Our experiments span real-world vision, language, tabular and graph datasets. On an LLM instruction tuning dataset, post-hoc selection results in >1.5x MMLU improvement compared to naive selection.
Post-Hoc Reversal: Are We Selecting Models Prematurely?
[ "Rishabh Ranjan", "Saurabh Garg", "Mrigank Raman", "Carlos Guestrin", "Zachary Chase Lipton" ]
NeurIPS.cc/2024/Conference
2404.07815
[ "https://github.com/rishabh-ranjan/post-hoc-reversal" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3PqhU96Vvv
@inproceedings{ govindarajan2024flexible, title={Flexible Context-Driven Sensory Processing in Dynamical Vision Models}, author={Lakshmi Narasimhan Govindarajan and Abhiram Iyer and Valmiki Kothare and Ila R Fiete}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3PqhU96Vvv} }
Visual representations become progressively more abstract along the cortical hierarchy. These abstract representations define notions like objects and shapes, but at the cost of spatial specificity. By contrast, low-level regions represent spatially local but simple input features. How do spatially non-specific representations of abstract concepts in high-level areas flexibly modulate the low-level sensory representations in appropriate ways to guide context-driven and goal-directed behaviors across a range of tasks? We build a biologically motivated and trainable neural network model of dynamics in the visual pathway, incorporating local, lateral, and feedforward synaptic connections, excitatory and inhibitory neurons, and long-range top-down inputs conceptualized as low-rank modulations of the input-driven sensory responses by high-level areas. We study this ${\bf D}$ynamical ${\bf C}$ortical ${\bf net}$work ($DCnet$) in a visual cue-delay-search task and show that the model uses its own cue representations to adaptively modulate its perceptual responses to solve the task, outperforming state-of-the-art DNN vision and LLM models. The model's population states over time shed light on the nature of contextual modulatory dynamics, generating predictions for experiments. We fine-tune the same model on classic psychophysics attention tasks, and find that the model closely replicates known reaction time results. This work represents a promising new foundation for understanding and making predictions about perturbations to visual processing in the brain.
Flexible Context-Driven Sensory Processing in Dynamical Vision Models
[ "Lakshmi Narasimhan Govindarajan", "Abhiram Iyer", "Valmiki Kothare", "Ila R Fiete" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3Odq2tGSpp
@inproceedings{ luo2024stylus, title={Stylus: Automatic Adapter Selection for Diffusion Models}, author={Michael Luo and Justin Wong and Brandon Trabucco and Yanping Huang and Joseph E. Gonzalez and Zhifeng Chen and Russ Salakhutdinov and Ion Stoica}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3Odq2tGSpp} }
Beyond scaling base models with more data or parameters, fine-tuned adapters provide an alternative way to generate high fidelity, custom images at reduced costs. As such, adapters have been widely adopted by open-source communities, accumulating a database of over 100K adapters—most of which are highly customized with insufficient descriptions. To generate high quality images, this paper explores the problem of matching the prompt to a Stylus of relevant adapters, built on recent work that highlight the performance gains of composing adapters. We introduce Stylus, which efficiently selects and automatically composes task-specific adapters based on a prompt's keywords. Stylus outlines a three-stage approach that first summarizes adapters with improved descriptions and embeddings, retrieves relevant adapters, and then further assembles adapters based on prompts' keywords by checking how well they fit the prompt. To evaluate Stylus, we developed StylusDocs, a curated dataset featuring 75K adapters with pre-computed adapter embeddings. In our evaluation on popular Stable Diffusion checkpoints, Stylus achieves greater CLIP/FID Pareto efficiency and is twice as preferred, with humans and multimodal models as evaluators, over the base model.
Stylus: Automatic Adapter Selection for Diffusion Models
[ "Michael Luo", "Justin Wong", "Brandon Trabucco", "Yanping Huang", "Joseph E. Gonzalez", "Zhifeng Chen", "Russ Salakhutdinov", "Ion Stoica" ]
NeurIPS.cc/2024/Conference
2404.18928
[ "" ]
https://huggingface.co/papers/2404.18928
5
14
1
8
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=3O5YCEWETq
@inproceedings{ ekambaram2024tiny, title={Tiny Time Mixers ({TTM}s): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series}, author={Vijay Ekambaram and Arindam Jati and Pankaj Dayama and Sumanta Mukherjee and Nam H Nguyen and Wesley M. Gifford and Chandra Reddy and Jayant Kalagnanam}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3O5YCEWETq} }
Large pre-trained models excel in zero/few-shot learning for language and vision tasks but face challenges in multivariate time series (TS) forecasting due to diverse data characteristics. Consequently, recent research efforts have focused on developing pre-trained TS forecasting models. These models, whether built from scratch or adapted from large language models (LLMs), excel in zero/few-shot forecasting tasks. However, they are limited by slow performance, high computational demands, and neglect of cross-channel and exogenous correlations. To address this, we introduce Tiny Time Mixers (TTM), a compact model (starting from 1M parameters) with effective transfer learning capabilities, trained exclusively on public TS datasets. TTM, based on the light-weight TSMixer architecture, incorporates innovations like adaptive patching, diverse resolution sampling, and resolution prefix tuning to handle pre-training on varied dataset resolutions with minimal model capacity. Additionally, it employs multi-level modeling to capture channel correlations and infuse exogenous signals during fine-tuning. TTM outperforms existing popular benchmarks in zero/few-shot forecasting by 4-40\%, while reducing computational requirements significantly. Moreover, TTMs are lightweight and can be executed even on CPU-only machines, enhancing usability and fostering wider adoption in resource-constrained environments. The model weights for reproducibility and research use are available at https://huggingface.co/ibm/ttm-research-r2/, while enterprise-use weights under the Apache license can be accessed as follows: the initial TTM-Q variant at https://huggingface.co/ibm-granite/granite-timeseries-ttm-r1, and the latest variants (TTM-B, TTM-E, TTM-A) weights are available at https://huggingface.co/ibm-granite/granite-timeseries-ttm-r2.
The source code for the TTM model along with the usage scripts are available at https://github.com/ibm-granite/granite-tsfm/tree/main/tsfm_public/models/tinytimemixer
Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series
[ "Vijay Ekambaram", "Arindam Jati", "Pankaj Dayama", "Sumanta Mukherjee", "Nam H Nguyen", "Wesley M. Gifford", "Chandra Reddy", "Jayant Kalagnanam" ]
NeurIPS.cc/2024/Conference
2401.03955
[ "https://github.com/ibm-granite/granite-tsfm" ]
https://huggingface.co/papers/2401.03955
1
6
0
7
[ "ibm-granite/granite-timeseries-ttm-r1", "ibm-granite/granite-timeseries-ttm-r2", "ibm/ttm-research-r2" ]
[]
[]
[ "ibm-granite/granite-timeseries-ttm-r1", "ibm-granite/granite-timeseries-ttm-r2", "ibm/ttm-research-r2" ]
[]
[]
1
poster
null
https://openreview.net/forum?id=3NaqGg92KZ
@inproceedings{ bae2024training, title={Training Data Attribution via Approximate Unrolling}, author={Juhan Bae and Wu Lin and Jonathan Lorraine and Roger Baker Grosse}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3NaqGg92KZ} }
Many training data attribution (TDA) methods aim to estimate how a model's behavior would change if one or more data points were removed from the training set. Methods based on implicit differentiation, such as influence functions, can be made computationally efficient, but fail to account for underspecification, the implicit bias of the optimization algorithm, or multi-stage training pipelines. By contrast, methods based on unrolling address these issues but face scalability challenges. In this work, we connect the implicit-differentiation-based and unrolling-based approaches and combine their benefits by introducing Source, an approximate unrolling-based TDA method that is computed using an influence-function-like formula. While being computationally efficient compared to unrolling-based approaches, Source is suitable in cases where implicit-differentiation-based approaches struggle, such as in non-converged models and multi-stage training pipelines. Empirically, Source outperforms existing TDA techniques in counterfactual prediction, especially in settings where implicit-differentiation-based approaches fall short.
Training Data Attribution via Approximate Unrolling
[ "Juhan Bae", "Wu Lin", "Jonathan Lorraine", "Roger Baker Grosse" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3NAEowLh7Q
@inproceedings{ wu2024opengaussian, title={OpenGaussian: Towards Point-Level 3D Gaussian-based Open Vocabulary Understanding}, author={Yanmin Wu and Jiarui Meng and Haijie LI and Chenming Wu and Yahao Shi and Xinhua Cheng and Chen Zhao and Haocheng Feng and Errui Ding and Jingdong Wang and Jian Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3NAEowLh7Q} }
This paper introduces OpenGaussian, a method based on 3D Gaussian Splatting (3DGS) that possesses the capability for 3D point-level open vocabulary understanding. Our primary motivation stems from observing that existing 3DGS-based open vocabulary methods mainly focus on 2D pixel-level parsing. These methods struggle with 3D point-level tasks due to weak feature expressiveness and inaccurate 2D-3D feature associations. To ensure robust feature presentation and 3D point-level understanding, we first employ SAM masks without cross-frame associations to train instance features with 3D consistency. These features exhibit both intra-object consistency and inter-object distinction. Then, we propose a two-stage codebook to discretize these features from coarse to fine levels. At the coarse level, we consider the positional information of 3D points to achieve location-based clustering, which is then refined at the fine level. Finally, we introduce an instance-level 3D-2D feature association method that links 3D points to 2D masks, which are further associated with 2D CLIP features. Extensive experiments, including open vocabulary-based 3D object selection, 3D point cloud understanding, click-based 3D object selection, and ablation studies, demonstrate the effectiveness of our proposed method. The source code is available at our project page https://3d-aigc.github.io/OpenGaussian.
OpenGaussian: Towards Point-Level 3D Gaussian-based Open Vocabulary Understanding
[ "Yanmin Wu", "Jiarui Meng", "Haijie LI", "Chenming Wu", "Yahao Shi", "Xinhua Cheng", "Chen Zhao", "Haocheng Feng", "Errui Ding", "Jingdong Wang", "Jian Zhang" ]
NeurIPS.cc/2024/Conference
2406.02058
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3MnXAcTBD3
@inproceedings{ you2024bary, title={B-ary Tree Push-Pull Method is Provably Efficient for Distributed Learning on Heterogeneous Data}, author={Runze You and Shi Pu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3MnXAcTBD3} }
This paper considers the distributed learning problem where a group of agents cooperatively minimizes the summation of their local cost functions based on peer-to-peer communication. Particularly, we propose a highly efficient algorithm, termed ``B-ary Tree Push-Pull'' (BTPP), that employs two B-ary spanning trees for distributing the information related to the parameters and stochastic gradients across the network. The simple method is efficient in communication since each agent interacts with at most $(B+1)$ neighbors per iteration. More importantly, BTPP achieves linear speedup for smooth nonconvex objective functions with only $\tilde{O}(n)$ transient iterations, significantly outperforming the state-of-the-art results to the best of our knowledge.
B-ary Tree Push-Pull Method is Provably Efficient for Distributed Learning on Heterogeneous Data
[ "Runze You", "Shi Pu" ]
NeurIPS.cc/2024/Conference
2404.05454
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3MW44iNdrD
@inproceedings{ teo2024fairqueue, title={FairQueue: Rethinking Prompt Learning for Fair Text-to-Image Generation}, author={Christopher T.H Teo and Milad Abdollahzadeh and Xinda Ma and Ngai-man Cheung}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3MW44iNdrD} }
Recently, prompt learning has emerged as the state-of-the-art (SOTA) for fair text-to-image (T2I) generation. Specifically, this approach leverages readily available reference images to learn inclusive prompts for each target Sensitive Attribute (tSA), allowing for fair image generation. In this work, we first reveal that this prompt learning-based approach results in degraded sample quality. Our analysis shows that the approach's training objective--which aims to align the embedding differences of learned prompts and reference images-- could be sub-optimal, resulting in distortion of the learned prompts and degraded generated images. To further substantiate this claim, **as our major contribution**, we deep dive into the denoising subnetwork of the T2I model to track down the effect of these learned prompts by analyzing the cross-attention maps. In our analysis, we propose a novel prompt switching analysis: I2H and H2I. Furthermore, we propose new quantitative characterization of cross-attention maps. Our analysis reveals abnormalities in the early denoising steps, perpetuating improper global structure that results in degradation in the generated samples. Building on insights from our analysis, we propose two ideas: (i) *Prompt Queuing* and (ii) *Attention Amplification* to address the quality issue. Extensive experimental results on a wide range of tSAs show that our proposed method outperforms SOTA approach's image generation quality, while achieving competitive fairness. More resources at FairQueue Project site: https://sutd-visual-computing-group.github.io/FairQueue
FairQueue: Rethinking Prompt Learning for Fair Text-to-Image Generation
[ "Christopher T.H Teo", "Milad Abdollahzadeh", "Xinda Ma", "Ngai-man Cheung" ]
NeurIPS.cc/2024/Conference
2410.18615
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3LZHatxUa9
@inproceedings{ zhu2024on, title={On the Impact of Feature Heterophily on Link Prediction with Graph Neural Networks}, author={Jiong Zhu and Gaotang Li and Yao-An Yang and Jing Zhu and Xuehao Cui and Danai Koutra}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3LZHatxUa9} }
Heterophily, or the tendency of connected nodes in networks to have different class labels or dissimilar features, has been identified as challenging for many Graph Neural Network (GNN) models. While the challenges of applying GNNs for node classification when class labels display strong heterophily are well understood, it is unclear how heterophily affects GNN performance in other important graph learning tasks where class labels are not available. In this work, we focus on the link prediction task and systematically analyze the impact of heterophily in node features on GNN performance. We first introduce formal definitions of homophilic and heterophilic link prediction tasks, and present a theoretical framework that highlights the different optimizations needed for the respective tasks. We then analyze how different link prediction encoders and decoders adapt to varying levels of feature homophily and introduce designs for improved performance. Based on our definitions, we identify and analyze six real-world benchmarks spanning from homophilic to heterophilic link prediction settings, with graphs containing up to 30M edges. Our empirical analysis on a variety of synthetic and real-world datasets confirms our theoretical insights and highlights the importance of adopting learnable decoders and GNN encoders with ego- and neighbor-embedding separation in message passing for link prediction tasks beyond homophily.
On the Impact of Feature Heterophily on Link Prediction with Graph Neural Networks
[ "Jiong Zhu", "Gaotang Li", "Yao-An Yang", "Jing Zhu", "Xuehao Cui", "Danai Koutra" ]
NeurIPS.cc/2024/Conference
2409.17475
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3LKuC8rbyV
@inproceedings{ chien2024langevin, title={Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning}, author={Eli Chien and Haoyu Peter Wang and Ziang Chen and Pan Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3LKuC8rbyV} }
Machine unlearning has raised significant interest with the adoption of laws ensuring the ``right to be forgotten''. Researchers have provided a probabilistic notion of approximate unlearning under a definition similar to Differential Privacy (DP), where privacy is defined as statistical indistinguishability to retraining from scratch. We propose Langevin unlearning, an unlearning framework based on noisy gradient descent with privacy guarantees for approximate unlearning problems. Langevin unlearning unifies the DP learning process and the privacy-certified unlearning process with many algorithmic benefits. These include approximate certified unlearning for non-convex problems, complexity savings compared to retraining, and sequential and batch unlearning for multiple unlearning requests.
Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning
[ "Eli Chien", "Haoyu Peter Wang", "Ziang Chen", "Pan Li" ]
NeurIPS.cc/2024/Conference
2401.10371
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=3JwMwL8i5f
@inproceedings{ liu2024alleviating, title={Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models and Time-Dependent Layer Normalization}, author={Qihao Liu and Zhanpeng Zeng and Ju He and Qihang Yu and Xiaohui Shen and Liang-Chieh Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3JwMwL8i5f} }
This paper presents innovative enhancements to diffusion models by integrating a novel multi-resolution network and time-dependent layer normalization. Diffusion models have gained prominence for their effectiveness in high-fidelity image generation. While conventional approaches rely on convolutional U-Net architectures, recent Transformer-based designs have demonstrated superior performance and scalability. However, Transformer architectures, which tokenize input data (via "patchification"), face a trade-off between visual fidelity and computational complexity due to the quadratic nature of self-attention operations concerning token length. While larger patch sizes enable attention computation efficiency, they struggle to capture fine-grained visual details, leading to image distortions. To address this challenge, we propose augmenting the **Di**ffusion model with the **M**ulti-**R**esolution network (DiMR), a framework that refines features across multiple resolutions, progressively enhancing detail from low to high resolution. Additionally, we introduce Time-Dependent Layer Normalization (TD-LN), a parameter-efficient approach that incorporates time-dependent parameters into layer normalization to inject time information and achieve superior performance. Our method's efficacy is demonstrated on the class-conditional ImageNet generation benchmark, where DiMR-XL variants surpass previous diffusion models, achieving FID scores of 1.70 on ImageNet $256 \times 256$ and 2.89 on ImageNet $512 \times 512$. Our best variant, DiMR-G, further establishes a state-of-the-art 1.63 FID on ImageNet $256 \times 256$.
Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models and Time-Dependent Layer Normalization
[ "Qihao Liu", "Zhanpeng Zeng", "Ju He", "Qihang Yu", "Xiaohui Shen", "Liang-Chieh Chen" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3J5hvO5UaW
@inproceedings{ cyffers2024optimal, title={Optimal Classification under Performative Distribution Shift}, author={Edwige Cyffers and Muni Sreenivas Pydi and Jamal Atif and Olivier Capp{\'e}}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3J5hvO5UaW} }
Performative learning addresses the increasingly pervasive situations in which algorithmic decisions may induce changes in the data distribution as a consequence of their public deployment. We propose a novel view in which these performative effects are modelled as push forward measures. This general framework encompasses existing models and enables novel performative gradient estimation methods, leading to more efficient and scalable learning strategies. For distribution shifts, unlike previous models which require full specification of the data distribution, we only assume knowledge of the shift operator that represents the performative changes. This approach can also be integrated into various change-of-variable-based models, such as VAEs or normalizing flows. Focusing on classification with a linear-in-parameters performative effect, we prove the convexity of the performative risk under a new set of assumptions. Notably, we do not limit the strength of performative effects but rather their direction, requiring only that classification becomes harder when deploying more accurate models. In this case, we also establish a connection with adversarially robust classification by reformulating the performative risk as a min-max variational problem. Finally, we illustrate our approach on synthetic and real datasets.
Optimal Classification under Performative Distribution Shift
[ "Edwige Cyffers", "Muni Sreenivas Pydi", "Jamal Atif", "Olivier Cappé" ]
NeurIPS.cc/2024/Conference
2411.02023
[ "https://github.com/totilas/PerfOpti" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3HpgVs22UJ
@inproceedings{ kim2024adaptive, title={Adaptive \$Q\$-Aid for Conditional Supervised Learning in Offline Reinforcement Learning}, author={Jeonghye Kim and Suyoung Lee and Woojun Kim and Youngchul Sung}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3HpgVs22UJ} }
Offline reinforcement learning (RL) has progressed with return-conditioned supervised learning (RCSL), but its lack of stitching ability remains a limitation. We introduce $Q$-Aided Conditional Supervised Learning (QCS), which effectively combines the stability of RCSL with the stitching capability of $Q$-functions. By analyzing $Q$-function over-generalization, which impairs stable stitching, QCS adaptively integrates $Q$-aid into RCSL's loss function based on trajectory return. Empirical results show that QCS significantly outperforms RCSL and value-based methods, consistently achieving or exceeding the highest trajectory returns across diverse offline RL benchmarks. QCS represents a breakthrough in offline RL, pushing the limits of what can be achieved and fostering further innovations.
Adaptive Q-Aid for Conditional Supervised Learning in Offline Reinforcement Learning
[ "Jeonghye Kim", "Suyoung Lee", "Woojun Kim", "Youngchul Sung" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3HpCVZV9it
@inproceedings{ furuta2024geometricaveraged, title={Geometric-Averaged Preference Optimization for Soft Preference Labels}, author={Hiroki Furuta and Kuang-Huei Lee and Shixiang Shane Gu and Yutaka Matsuo and Aleksandra Faust and Heiga Zen and Izzeddin Gur}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3HpCVZV9it} }
Many algorithms for aligning LLMs with human preferences assume that human preferences are binary and deterministic. However, human preferences can vary across individuals, and therefore should be represented distributionally. In this work, we introduce the distributional soft preference labels and improve Direct Preference Optimization (DPO) with a weighted geometric average of the LLM output likelihood in the loss function. This approach adjusts the scale of learning loss based on the soft labels such that the loss would approach zero when the responses are closer to equally preferred. This simple modification can be easily applied to any DPO-based methods and mitigate over-optimization and objective mismatch, which prior works suffer from. Our experiments simulate the soft preference labels with AI feedback from LLMs and demonstrate that geometric averaging consistently improves performance on standard benchmarks for alignment research. In particular, we observe more preferable responses than binary labels and significant improvements where modestly-confident labels are in the majority.
Geometric-Averaged Preference Optimization for Soft Preference Labels
[ "Hiroki Furuta", "Kuang-Huei Lee", "Shixiang Shane Gu", "Yutaka Matsuo", "Aleksandra Faust", "Heiga Zen", "Izzeddin Gur" ]
NeurIPS.cc/2024/Conference
2409.06691
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3H1wqEdK4z
@inproceedings{ zheng2024enhancing, title={Enhancing Large Language Models through Adaptive Tokenizers}, author={Mengyu Zheng and Hanting Chen and Tianyu Guo and Chong Zhu and Binfan Zheng and Chang Xu and Yunhe Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3H1wqEdK4z} }
Tokenizers serve as crucial interfaces between models and linguistic data, substantially influencing the efficacy and precision of large language models (LLMs). Traditional tokenization methods often rely on static frequency-based statistics and are not inherently synchronized with LLM architectures, which may limit model performance. In this study, we propose a simple but effective method to learn tokenizers specifically engineered for seamless integration with LLMs. Starting with a broad initial vocabulary, we refine our tokenizer by monitoring changes in the model’s perplexity during training, allowing for the selection of a tokenizer that is closely aligned with the model’s evolving dynamics. Through iterative refinement, we develop an optimized tokenizer. Our empirical evaluations demonstrate that this adaptive approach significantly enhances accuracy compared to conventional methods, maintaining comparable vocabulary sizes and affirming its potential to improve LLM functionality.
Enhancing Large Language Models through Adaptive Tokenizers
[ "Mengyu Zheng", "Hanting Chen", "Tianyu Guo", "Chong Zhu", "Binfan Zheng", "Chang Xu", "Yunhe Wang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3G8sjUZqO3
@inproceedings{ chen2024multidimensional, title={Multidimensional Fractional Programming for Normalized Cuts}, author={Yannan Chen and Beichen Huang and Licheng Zhao and Kaiming Shen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3G8sjUZqO3} }
The Normalized cut (NCut) problem is a fundamental and yet notoriously difficult one in the unsupervised clustering field. Because the NCut problem is fractionally structured, the fractional programming (FP) based approach has worked its way into a new frontier. However, the conventional FP techniques are insufficient: the classic Dinkelbach's transform can only deal with a single ratio and hence is limited to the two-class clustering, while the state-of-the-art quadratic transform accounts for multiple ratios but fails to convert the NCut problem to a tractable form. This work advocates a novel extension of the quadratic transform to the multidimensional ratio case, thereby recasting the fractional 0-1 NCut problem into a bipartite matching problem---which can be readily solved in an iterative manner. Furthermore, we explore the connection between the proposed multidimensional FP method and the minorization-maximization theory to verify the convergence.
Multidimensional Fractional Programming for Normalized Cuts
[ "Yannan Chen", "Beichen Huang", "Licheng Zhao", "Kaiming Shen" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3EREVfwALz
@inproceedings{ hanneke2024multiclass, title={Multiclass Transductive Online Learning}, author={Steve Hanneke and Vinod Raman and Amirreza Shaeiri and Unique Subedi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3EREVfwALz} }
We consider the problem of multiclass transductive online learning when the number of labels can be unbounded. Previous works by Ben-David et al. [1997] and Hanneke et al. [2024] only consider the case of binary and finite label spaces respectively. The latter work determined that their techniques fail to extend to the case of unbounded label spaces, and they pose the question of characterizing the optimal mistake bound for unbounded label spaces. We answer this question, by showing that a new dimension, termed the Level-constrained Littlestone dimension, characterizes online learnability in this setting. Along the way, we show that the trichotomy of possible minimax rates established by Hanneke et al. [2024] for finite label spaces in the realizable setting continues to hold even when the label space is unbounded. In particular, if the learner plays for $T \in \mathbb{N}$ rounds, its minimax expected number of mistakes can only grow like $\Theta(T)$, $\Theta(\log T)$, or $\Theta(1)$. To prove this result, we give another combinatorial dimension, termed the Level-constrained Branching dimension, and show that its finiteness characterizes constant minimax expected mistake-bounds. The trichotomy is then determined by a combination of the Level-constrained Littlestone and Branching dimensions. Quantitatively, our upper bounds improve upon existing multiclass upper bounds in Hanneke et al. [2024] by removing the dependence on the label set size. In doing so, we explicitly construct learning algorithms that can handle extremely large or unbounded label spaces. A key component of our algorithm is a new notion of shattering that exploits the sequential nature of transductive online learning. Finally, we complete our results by proving expected regret bounds in the agnostic setting, extending the result of Hanneke et al. [2024].
Multiclass Transductive Online Learning
[ "Steve Hanneke", "Vinod Raman", "Amirreza Shaeiri", "Unique Subedi" ]
NeurIPS.cc/2024/Conference
2411.01634
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=3Ds5vNudIE
@inproceedings{ tigges2024llm, title={{LLM} Circuit Analyses Are Consistent Across Training and Scale}, author={Curt Tigges and Michael Hanna and Qinan Yu and Stella Biderman}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3Ds5vNudIE} }
Most currently deployed LLMs undergo continuous training or additional finetuning. By contrast, most research into LLMs' internal mechanisms focuses on models at one snapshot in time (the end of pre-training), raising the question of whether their results generalize to real-world settings. Existing studies of mechanisms over time focus on encoder-only or toy models, which differ significantly from most deployed models. In this study, we track how model mechanisms, operationalized as circuits, emerge and evolve across 300 billion tokens of training in decoder-only LLMs, in models ranging from 70 million to 2.8 billion parameters. We find that task abilities and the functional components that support them emerge consistently at similar token counts across scale. Moreover, although such components may be implemented by different attention heads over time, the overarching algorithm that they implement remains. Surprisingly, both these algorithms and the types of components involved therein tend to replicate across model scale. Finally, we find that circuit size correlates with model size and can fluctuate considerably over time even when the same algorithm is implemented. These results suggest that circuit analyses conducted on small models at the end of pre-training can provide insights that still apply after additional training and over model scale.
LLM Circuit Analyses Are Consistent Across Training and Scale
[ "Curt Tigges", "Michael Hanna", "Qinan Yu", "Stella Biderman" ]
NeurIPS.cc/2024/Conference
2407.10827
[ "" ]
https://huggingface.co/papers/2407.10827
1
4
2
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=3CweLZFNyl
@inproceedings{ hu2024expressive, title={Expressive Gaussian Human Avatars from Monocular {RGB} Video}, author={Hezhen Hu and Zhiwen Fan and Tianhao Walter Wu and Yihan Xi and Seoyoung Lee and Georgios Pavlakos and Zhangyang Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3CweLZFNyl} }
Nuanced expressiveness, especially through detailed hand and facial expressions, is pivotal for enhancing the realism and vitality of digital human representations. In this work, we aim to learn expressive human avatars from a monocular RGB video, a setting that introduces new challenges in capturing and animating fine-grained details. To this end, we introduce EVA, a drivable human model that can recover fine details based on 3D Gaussians and an expressive parametric human model, SMPL-X. Focused on enhancing expressiveness, our work makes three key contributions. First, we highlight the importance of aligning the SMPL-X model with the video frames for effective avatar learning. Recognizing the limitations of current methods for estimating SMPL-X parameters from in-the-wild videos, we introduce a reconstruction module that significantly improves the image-model alignment. Second, we propose a context-aware adaptive density control strategy, which adaptively adjusts the gradient thresholds to accommodate the varied granularity across body parts. Third, we develop a feedback mechanism that predicts per-pixel confidence to better guide the optimization of 3D Gaussians. Extensive experiments on two benchmarks demonstrate the superiority of our approach both quantitatively and qualitatively, especially on the fine-grained hand and facial details. We make our code available at the project website: https://evahuman.github.io.
Expressive Gaussian Human Avatars from Monocular RGB Video
[ "Hezhen Hu", "Zhiwen Fan", "Tianhao Walter Wu", "Yihan Xi", "Seoyoung Lee", "Georgios Pavlakos", "Zhangyang Wang" ]
NeurIPS.cc/2024/Conference
2407.03204
[ "" ]
https://huggingface.co/papers/2407.03204
0
1
0
7
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=3CtTMF5zzM
@inproceedings{ cai2024on, title={On Tractable \${\textbackslash}Phi\$-Equilibria in Non-Concave Games}, author={Yang Cai and Constantinos Costis Daskalakis and Haipeng Luo and Chen-Yu Wei and Weiqiang Zheng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3CtTMF5zzM} }
While Online Gradient Descent and other no-regret learning procedures are known to efficiently converge to a coarse correlated equilibrium in games where each agent's utility is concave in their own strategy, this is not the case when utilities are non-concave -- a common scenario in machine learning applications involving strategies parameterized by deep neural networks, or when agents' utilities are computed by neural networks, or both. Non-concave games introduce significant game-theoretic and optimization challenges: (i) Nash equilibria may not exist; (ii) local Nash equilibria, though they exist, are intractable; and (iii) mixed Nash, correlated, and coarse correlated equilibria generally have infinite support and are intractable. To sidestep these challenges, we revisit the classical solution concept of $\Phi$-equilibria introduced by Greenwald and Jafari [GJ03], which is guaranteed to exist for an arbitrary set of strategy modifications $\Phi$ even in non-concave games [SL07]. However, the tractability of $\Phi$-equilibria in such games remains elusive. In this paper, we initiate the study of tractable $\Phi$-equilibria in non-concave games and examine several natural families of strategy modifications. We show that when $\Phi$ is finite, there exists an efficient uncoupled learning algorithm that approximates the corresponding $\Phi$-equilibria. Additionally, we explore cases where $\Phi$ is infinite but consists of local modifications, showing that Online Gradient Descent can efficiently approximate $\Phi$-equilibria in non-trivial regimes.
On Tractable Φ-Equilibria in Non-Concave Games
[ "Yang Cai", "Constantinos Costis Daskalakis", "Haipeng Luo", "Chen-Yu Wei", "Weiqiang Zheng" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3BNPUDvqMt
@inproceedings{ holzm{\"u}ller2024better, title={Better by default: Strong pre-tuned {MLP}s and boosted trees on tabular data}, author={David Holzm{\"u}ller and Leo Grinsztajn and Ingo Steinwart}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3BNPUDvqMt} }
For classification and regression on tabular data, the dominance of gradient-boosted decision trees (GBDTs) has recently been challenged by often much slower deep learning methods with extensive hyperparameter tuning. We address this discrepancy by introducing (a) RealMLP, an improved multilayer perceptron (MLP), and (b) strong meta-tuned default parameters for GBDTs and RealMLP. We tune RealMLP and the default parameters on a meta-train benchmark with 118 datasets and compare them to hyperparameter-optimized versions on a disjoint meta-test benchmark with 90 datasets, as well as the GBDT-friendly benchmark by Grinsztajn et al. (2022). Our benchmark results on medium-to-large tabular datasets (1K--500K samples) show that RealMLP offers a favorable time-accuracy tradeoff compared to other neural baselines and is competitive with GBDTs in terms of benchmark scores. Moreover, a combination of RealMLP and GBDTs with improved default parameters can achieve excellent results without hyperparameter tuning. Finally, we demonstrate that some of RealMLP's improvements can also considerably improve the performance of TabR with default parameters.
Better by default: Strong pre-tuned MLPs and boosted trees on tabular data
[ "David Holzmüller", "Leo Grinsztajn", "Ingo Steinwart" ]
NeurIPS.cc/2024/Conference
2407.04491
[ "https://github.com/dholzmueller/pytabkit" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3ADBiWNUBb
@inproceedings{ froehlich2024graph, title={Graph Structure Inference with {BAM}: Neural Dependency Processing via Bilinear Attention}, author={Philipp Froehlich and Heinz Koeppl}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3ADBiWNUBb} }
Detecting dependencies among variables is a fundamental task across scientific disciplines. We propose a novel neural network model for graph structure inference, which aims to learn a mapping from observational data to the corresponding underlying dependence structures. The model is trained with variably shaped and coupled simulated input data and requires only a single forward pass through the trained network for inference. Central to our approach is a novel bilinear attention mechanism (BAM) operating on covariance matrices of transformed data while respecting the geometry of the manifold of symmetric positive definite (SPD) matrices. Inspired by graphical lasso methods, our model optimizes over continuous graph representations in the SPD space, where inverse covariance matrices encode conditional independence relations. Empirical evaluations demonstrate the robustness of our method in detecting diverse dependencies, excelling in undirected graph estimation and showing competitive performance in completed partially directed acyclic graph estimation via a novel two-step approach. The trained model effectively detects causal relationships and generalizes well across different functional forms of nonlinear dependencies.
Graph Structure Inference with BAM: Neural Dependency Processing via Bilinear Attention
[ "Philipp Froehlich", "Heinz Koeppl" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3ACXaFxjTy
@inproceedings{ zhu2024unleashing, title={Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation}, author={Muzhi Zhu and Yang Liu and Zekai Luo and Chenchen Jing and Hao Chen and Guangkai Xu and Xinlong Wang and Chunhua Shen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3ACXaFxjTy} }
The Diffusion Model has not only garnered noteworthy achievements in the realm of image generation but has also demonstrated its potential as an effective pretraining method utilizing unlabeled data. Drawing from the extensive potential unveiled by the Diffusion Model in both semantic correspondence and open vocabulary segmentation, our work initiates an investigation into employing the Latent Diffusion Model for Few-shot Semantic Segmentation. Recently, inspired by the in-context learning ability of large language models, Few-shot Semantic Segmentation has evolved into In-context Segmentation tasks, morphing into a crucial element in assessing generalist segmentation models. In this context, we concentrate on Few-shot Semantic Segmentation, establishing a solid foundation for the future development of a Diffusion-based generalist model for segmentation. Our initial focus lies in understanding how to facilitate interaction between the query image and the support image, resulting in the proposal of a KV fusion method within the self-attention framework. Subsequently, we delve deeper into optimizing the infusion of information from the support mask and simultaneously re-evaluating how to provide reasonable supervision from the query mask. Based on our analysis, we establish a simple and effective framework named DiffewS, maximally retaining the original Latent Diffusion Model's generative framework and effectively utilizing the pre-training prior. Experimental results demonstrate that our method significantly outperforms the previous SOTA models in multiple settings.
Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation
[ "Muzhi Zhu", "Yang Liu", "Zekai Luo", "Chenchen Jing", "Hao Chen", "Guangkai Xu", "Xinlong Wang", "Chunhua Shen" ]
NeurIPS.cc/2024/Conference
2410.02369
[ "https://github.com/aim-uofa/diffews" ]
https://huggingface.co/papers/2410.02369
1
0
0
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=3A5VgiH5Pw
@inproceedings{ hu2024towards, title={Towards Multi-dimensional Explanation Alignment for Medical Classification}, author={Lijie Hu and Songning Lai and Wenshuo Chen and Hongru Xiao and Hongbin Lin and Lu Yu and Jingfeng Zhang and Di Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=3A5VgiH5Pw} }
The lack of interpretability in the field of medical image analysis has significant ethical and legal implications. Existing interpretable methods in this domain encounter several challenges, including dependency on specific models, difficulties in understanding and visualization, and issues related to efficiency. To address these limitations, we propose a novel framework called Med-MICN (Medical Multi-dimensional Interpretable Concept Network). Med-MICN provides interpretability alignment from multiple angles, including neural symbolic reasoning, concept semantics, and saliency maps, and is superior to current interpretable methods. Its advantages include high prediction accuracy, interpretability across multiple dimensions, and automation through an end-to-end concept labeling process that reduces the need for extensive human training effort when working with new datasets. To demonstrate the effectiveness and interpretability of Med-MICN, we apply it to four benchmark datasets and compare it with baselines. The results clearly demonstrate the superior performance and interpretability of our Med-MICN.
Towards Multi-dimensional Explanation Alignment for Medical Classification
[ "Lijie Hu", "Songning Lai", "Wenshuo Chen", "Hongru Xiao", "Hongbin Lin", "Lu Yu", "Jingfeng Zhang", "Di Wang" ]
NeurIPS.cc/2024/Conference
2410.21494
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=38UFpdt3Tr
@inproceedings{ szatkowski2024exploiting, title={Exploiting Activation Sparsity with Dense to Dynamic-k Mixture-of-Experts Conversion}, author={Filip Szatkowski and Bartosz W{\'o}jcik and Miko{\l}aj Pi{\'o}rczy{\'n}ski and Simone Scardapane}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=38UFpdt3Tr} }
Transformer models can face practical limitations due to their high computational requirements. At the same time, such models exhibit significant activation sparsity, which can be leveraged to reduce the inference cost by converting parts of the network into equivalent Mixture-of-Experts (MoE) layers. Despite the crucial role played by activation sparsity, its impact on this process remains unexplored. We demonstrate that the efficiency of the conversion can be significantly enhanced by a proper regularization of the activation sparsity of the base model. Moreover, motivated by the high variance of the number of activated neurons for different inputs, we introduce a more effective dynamic-$k$ expert selection rule that adjusts the number of executed experts on a per-token basis. To achieve further savings, we extend this approach to multi-head attention projections. Finally, we develop an efficient implementation that translates these computational savings into actual wall-clock speedup. The proposed method, Dense to Dynamic-$k$ Mixture-of-Experts (D2DMoE), outperforms existing approaches on common NLP and vision tasks, reducing inference cost by up to 60\% without significantly impacting performance.
Exploiting Activation Sparsity with Dense to Dynamic-k Mixture-of-Experts Conversion
[ "Filip Szatkowski", "Bartosz Wójcik", "Mikołaj Piórczyński", "Simone Scardapane" ]
NeurIPS.cc/2024/Conference
2310.04361
[ "https://github.com/bartwojcik/d2dmoe" ]
https://huggingface.co/papers/2310.04361
1
1
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=37CyA1K0vV
@inproceedings{ xu2024aggregating, title={Aggregating Quantitative Relative Judgments: From Social Choice to Ranking Prediction}, author={Yixuan Even Xu and Hanrui Zhang and Yu Cheng and Vincent Conitzer}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=37CyA1K0vV} }
Quantitative Relative Judgment Aggregation (QRJA) is a new research topic in (computational) social choice. In the QRJA model, agents provide judgments on the relative quality of different candidates, and the goal is to aggregate these judgments across all agents. In this work, our main conceptual contribution is to explore the interplay between QRJA in a social choice context and its application to ranking prediction. We observe that in QRJA, judges do not have to be people with subjective opinions; for example, a race can be viewed as a ``judgment'' on the contestants' relative abilities. This allows us to aggregate results from multiple races to evaluate the contestants' true qualities. At a technical level, we introduce new aggregation rules for QRJA and study their structural and computational properties. We evaluate the proposed methods on data from various real races and show that QRJA-based methods offer effective and interpretable ranking predictions.
Aggregating Quantitative Relative Judgments: From Social Choice to Ranking Prediction
[ "Yixuan Even Xu", "Hanrui Zhang", "Yu Cheng", "Vincent Conitzer" ]
NeurIPS.cc/2024/Conference
2410.05550
[ "https://github.com/YixuanEvenXu/quantitative-judgment-aggregation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=36tMV15dPO
@inproceedings{ hu2024xray, title={X-Ray: A Sequential 3D Representation For Generation}, author={Tao Hu and Wenhang Ge and Yuyang Zhao and Gim Hee Lee}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=36tMV15dPO} }
We introduce X-Ray, a novel 3D sequential representation inspired by the penetrability of x-ray scans. X-Ray transforms a 3D object into a series of surface frames at different layers, making it suitable for generating 3D models from images. Our method utilizes ray casting from the camera center to capture geometric and textured details, including depth, normal, and color, across all intersected surfaces. This process efficiently condenses the whole 3D object into a multi-frame video format, motivating the use of a network architecture similar to those in video diffusion models. This design ensures an efficient 3D representation by focusing solely on surface information. Also, we propose a two-stage pipeline that generates 3D objects via an X-Ray Diffusion Model and an Upsampler. We demonstrate the practicality and adaptability of our X-Ray representation by synthesizing the complete visible and hidden surfaces of a 3D object from a single input image. Experimental results reveal the state-of-the-art superiority of our representation in enhancing the accuracy of 3D generation, paving the way for new 3D representation research and practical applications. Our project page is at \url{https://tau-yihouxiang.github.io/projects/X-Ray/X-Ray.html}.
X-Ray: A Sequential 3D Representation For Generation
[ "Tao Hu", "Wenhang Ge", "Yuyang Zhao", "Gim Hee Lee" ]
NeurIPS.cc/2024/Conference
2404.14329
[ "https://github.com/tau-yihouxiang/X-Ray" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=35WwZhkush
@inproceedings{ zhang2024betterdepth, title={BetterDepth: Plug-and-Play Diffusion Refiner for Zero-Shot Monocular Depth Estimation}, author={Xiang Zhang and Bingxin Ke and Hayko Riemenschneider and Nando Metzger and Anton Obukhov and Markus Gross and Konrad Schindler and Christopher Schroers}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=35WwZhkush} }
By training over large-scale datasets, zero-shot monocular depth estimation (MDE) methods show robust performance in the wild but often suffer from insufficient detail. Although recent diffusion-based MDE approaches exhibit a superior ability to extract details, they struggle in geometrically complex scenes that challenge their geometry prior, trained on less diverse 3D data. To leverage the complementary merits of both worlds, we propose BetterDepth to achieve geometrically correct affine-invariant MDE while capturing fine details. Specifically, BetterDepth is a conditional diffusion-based refiner that takes the prediction from pre-trained MDE models as depth conditioning, in which the global depth layout is well-captured, and iteratively refines details based on the input image. For the training of such a refiner, we propose global pre-alignment and local patch masking methods to ensure BetterDepth remains faithful to the depth conditioning while learning to add fine-grained scene details. With efficient training on small-scale synthetic datasets, BetterDepth achieves state-of-the-art zero-shot MDE performance on diverse public datasets and on in-the-wild scenes. Moreover, BetterDepth can improve the performance of other MDE models in a plug-and-play manner without further re-training.
BetterDepth: Plug-and-Play Diffusion Refiner for Zero-Shot Monocular Depth Estimation
[ "Xiang Zhang", "Bingxin Ke", "Hayko Riemenschneider", "Nando Metzger", "Anton Obukhov", "Markus Gross", "Konrad Schindler", "Christopher Schroers" ]
NeurIPS.cc/2024/Conference
2407.17952
[ "" ]
https://huggingface.co/papers/2407.17952
5
29
4
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=35DAviqMFo
@inproceedings{ du2024understanding, title={Understanding Emergent Abilities of Language Models from the Loss Perspective}, author={Zhengxiao Du and Aohan Zeng and Yuxiao Dong and Jie Tang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=35DAviqMFo} }
Recent studies have put into question the belief that emergent abilities in language models are exclusive to large models. This skepticism arises from two observations: 1) smaller models can also exhibit high performance on emergent abilities and 2) there is doubt about the discontinuous metrics used to measure these abilities. In this paper, we propose to study emergent abilities through the lens of pre-training loss, instead of model size or training compute. We demonstrate that models with the same pre-training loss, but different model and data sizes, achieve the same performance on various downstream tasks. We also discover that a model exhibits emergent abilities on certain tasks---regardless of the continuity of metrics---when its pre-training loss falls below a specific threshold. Before reaching this threshold, its performance remains at the level of random guessing. This inspires us to redefine emergent abilities as those that manifest in models with lower pre-training losses, highlighting that these abilities cannot be predicted by merely extrapolating the performance trends of models with higher pre-training losses.
Understanding Emergent Abilities of Language Models from the Loss Perspective
[ "Zhengxiao Du", "Aohan Zeng", "Yuxiao Dong", "Jie Tang" ]
NeurIPS.cc/2024/Conference
2403.15796
[ "" ]
https://huggingface.co/papers/2403.15796
0
0
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=348hfcprUs
@inproceedings{ sun2024fast, title={Fast Best-of-N Decoding via Speculative Rejection}, author={Hanshi Sun and Momin Haider and Ruiqi Zhang and Huitao Yang and Jiahao Qiu and Ming Yin and Mengdi Wang and Peter Bartlett and Andrea Zanette}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=348hfcprUs} }
The safe and effective deployment of Large Language Models (LLMs) involves a critical step called alignment, which ensures that the model's responses are in accordance with human preferences. Prevalent alignment techniques, such as DPO, PPO and their variants, align LLMs by changing the pre-trained model weights during a phase called post-training. While predominant, these post-training methods add substantial complexity before LLMs can be deployed. Inference-time alignment methods avoid the complex post-training step and instead bias the generation towards responses that are aligned with human preferences. The best-known inference-time alignment method, called Best-of-N, is as effective as the state-of-the-art post-training procedures. Unfortunately, Best-of-N requires vastly more resources at inference time than standard decoding strategies, which makes it computationally not viable. In this work, we introduce Speculative Rejection, a computationally-viable inference-time alignment algorithm. It generates high-scoring responses according to a given reward model, like Best-of-N does, while being between 16 to 32 times more computationally efficient.
Fast Best-of-N Decoding via Speculative Rejection
[ "Hanshi Sun", "Momin Haider", "Ruiqi Zhang", "Huitao Yang", "Jiahao Qiu", "Ming Yin", "Mengdi Wang", "Peter Bartlett", "Andrea Zanette" ]
NeurIPS.cc/2024/Conference
2410.20290
[ "https://github.com/Zanette-Labs/SpeculativeRejection" ]
https://huggingface.co/papers/2410.20290
1
9
2
9
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=347aDObXEa
@inproceedings{ sun2024geometry, title={Geometry Awakening: Cross-Geometry Learning Exhibits Superiority over Individual Structures}, author={Yadong Sun and Xiaofeng Cao and Yu Wang and Wei Ye and Jingcai Guo and Qing Guo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=347aDObXEa} }
Recent research has underscored the efficacy of Graph Neural Networks (GNNs) in modeling diverse geometric structures within graph data. However, real-world graphs typically exhibit geometrically heterogeneous characteristics, rendering the confinement to a single geometric paradigm insufficient for capturing their intricate structural complexities. To address this limitation, we examine the performance of GNNs across various geometries through the lens of knowledge distillation (KD) and introduce a novel cross-geometric framework. This framework encodes graphs by integrating both Euclidean and hyperbolic geometries in a space-mixing fashion. Our approach employs multiple teacher models, each generating hint embeddings that encapsulate distinct geometric properties. We then implement a structure-wise knowledge transfer module that optimally leverages these embeddings within their respective geometric contexts, thereby enhancing the training efficacy of the student model. Additionally, our framework incorporates a geometric optimization network designed to bridge the distributional disparities among these embeddings. Experimental results demonstrate that our model-agnostic framework more effectively captures topological graph knowledge, resulting in superior performance of the student models when compared to traditional KD methodologies.
Geometry Awakening: Cross-Geometry Learning Exhibits Superiority over Individual Structures
[ "Yadong Sun", "Xiaofeng Cao", "Yu Wang", "Wei Ye", "Jingcai Guo", "Qing Guo" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=337dHOexCM
@inproceedings{ thomas2024retrieval, title={Retrieval \& Fine-Tuning for In-Context Tabular Models}, author={Valentin Thomas and Junwei Ma and Rasa Hosseinzadeh and Keyvan Golestan and Guangwei Yu and Maksims Volkovs and Anthony L. Caterini}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=337dHOexCM} }
Tabular data is a pervasive modality spanning a wide range of domains, and this inherent diversity poses a considerable challenge for deep learning. Recent advancements using transformer-based in-context learning have shown promise on smaller and less complex tabular datasets, but have struggled to scale to larger and more complex ones. To address this limitation, we propose a combination of retrieval and fine-tuning: we can adapt the transformer to a local subset of the data by collecting nearest neighbours, and then perform task-specific fine-tuning with this retrieved set of neighbours in context. Using TabPFN as the base model -- currently the best tabular in-context learner -- and applying our retrieval and fine-tuning scheme on top results in what we call a locally-calibrated PFN, or LoCalPFN. We conduct extensive evaluation on 95 datasets curated by TabZilla from OpenML, upon which we establish a new state-of-the-art with LoCalPFN -- even with respect to tuned tree-based models. Notably, we show a significant boost in performance compared to the base in-context model, demonstrating the efficacy of our approach and advancing the frontier of deep learning in tabular data.
Retrieval & Fine-Tuning for In-Context Tabular Models
[ "Valentin Thomas", "Junwei Ma", "Rasa Hosseinzadeh", "Keyvan Golestan", "Guangwei Yu", "Maksims Volkovs", "Anthony L. Caterini" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=32g9BWTndc
@inproceedings{ wang2024llms, title={{LLM}s as Zero-shot Graph Learners: Alignment of {GNN} Representations with {LLM} Token Embeddings}, author={Duo Wang and Yuan Zuo and Fengzhi Li and Junjie Wu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=32g9BWTndc} }
Zero-shot graph machine learning, especially with graph neural networks (GNNs), has garnered significant interest due to the challenge of scarce labeled data. While methods like self-supervised learning and graph prompt learning have been extensively explored, they often rely on fine-tuning with task-specific labels, limiting their effectiveness in zero-shot scenarios. Inspired by the zero-shot capabilities of instruction-fine-tuned large language models (LLMs), we introduce a novel framework named Token Embedding-Aligned Graph Language Model (TEA-GLM) that leverages LLMs as cross-dataset and cross-task zero-shot learners for graph machine learning. Concretely, we pretrain a GNN, aligning its representations with token embeddings of an LLM. We then train a linear projector that transforms the GNN's representations into a fixed number of graph token embeddings without tuning the LLM. A unified instruction is designed for various graph tasks at different levels, such as node classification (node-level) and link prediction (edge-level). These design choices collectively enhance our method's effectiveness in zero-shot learning, setting it apart from existing methods. Experiments show that our graph token embeddings help the LLM predictor achieve state-of-the-art performance on unseen datasets and tasks compared to other methods using LLMs as predictors. Our code is available at https://github.com/W-rudder/TEA-GLM.
LLMs as Zero-shot Graph Learners: Alignment of GNN Representations with LLM Token Embeddings
[ "Duo Wang", "Yuan Zuo", "Fengzhi Li", "Junjie Wu" ]
NeurIPS.cc/2024/Conference
2408.14512
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=32Z3nfCnwa
@inproceedings{ jia2024how, title={How Does Variance Shape the Regret in Contextual Bandits?}, author={Zeyu Jia and Jian Qian and Alexander Rakhlin and Chen-Yu Wei}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=32Z3nfCnwa} }
We consider realizable contextual bandits with general function approximation, investigating how small reward variance can lead to better-than-minimax regret bounds. Unlike in minimax regret bounds, we show that the eluder dimension $d_{\text{elu}}$ -- a measure of the complexity of the function class -- plays a crucial role in variance-dependent bounds. We consider two types of adversary: (1) Weak adversary: The adversary sets the reward variance before observing the learner's action. In this setting, we prove that a regret of $\Omega( \sqrt{ \min (A, d_{\text{elu}}) \Lambda } + d_{\text{elu}} )$ is unavoidable when $d_{\text{elu}} \leq \sqrt{A T}$, where $A$ is the number of actions, $T$ is the total number of rounds, and $\Lambda$ is the total variance over $T$ rounds. For the $A\leq d_{\text{elu}}$ regime, we derive a nearly matching upper bound $\tilde{O}( \sqrt{ A\Lambda } + d_{\text{elu}} )$ for the special case where the variance is revealed at the beginning of each round. (2) Strong adversary: The adversary sets the reward variance after observing the learner's action. We show that a regret of $\Omega( \sqrt{ d_{\text{elu}} \Lambda } + d_{\text{elu}} )$ is unavoidable when $\sqrt{ d_{\text{elu}} \Lambda } + d_{\text{elu}} \leq \sqrt{A T}$. In this setting, we provide an upper bound of order $\tilde{O}( d_{\text{elu}}\sqrt{ \Lambda } + d_{\text{elu}} )$. Furthermore, we examine the setting where the function class additionally provides distributional information of the reward, as studied by Wang et al. (2024). We demonstrate that the regret bound $\tilde{O}(\sqrt{d_{\text{elu}} \Lambda} + d_{\text{elu}})$ established in their work is unimprovable when $\sqrt{d_{\text{elu}} \Lambda} + d_{\text{elu}}\leq \sqrt{AT}$. However, with a slightly different definition of the total variance and with the assumption that the reward follows a Gaussian distribution, one can achieve a regret of $\tilde{O}(\sqrt{A\Lambda} + d_{\text{elu}})$.
How Does Variance Shape the Regret in Contextual Bandits?
[ "Zeyu Jia", "Jian Qian", "Alexander Rakhlin", "Chen-Yu Wei" ]
NeurIPS.cc/2024/Conference
2410.12713
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=31xWlIdxTm
@inproceedings{ yuan2024instanceadaptive, title={Instance-adaptive Zero-shot Chain-of-Thought Prompting}, author={Xiaosong Yuan and Chen Shen and Shaotian Yan and Xiao Feng Zhang and Liang Xie and Wenxiao Wang and Renchu Guan and Ying Wang and Jieping Ye}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=31xWlIdxTm} }
Zero-shot Chain-of-Thought (CoT) prompting emerges as a simple and effective strategy for enhancing the performance of large language models (LLMs) in real-world reasoning tasks. Nonetheless, the efficacy of a single task-level prompt uniformly applied across all instances is inherently limited, since one prompt cannot be a good partner for all; a more appropriate approach should carefully consider the interaction between the prompt and each instance. This work introduces an instance-adaptive prompting algorithm as an alternative zero-shot CoT reasoning scheme that adaptively differentiates good and bad prompts. Concretely, we first analyze LLMs through the lens of information flow to detect the mechanism underlying zero-shot CoT reasoning, discovering that the information flows from question to prompt and from question to rationale jointly influence the reasoning results most. We observe that better zero-shot CoT reasoning requires the prompt to obtain semantic information from the question, after which the rationale aggregates sufficient information from the question directly and via the prompt indirectly; lacking either would likely lead to poor reasoning. Stemming from this analysis, we further propose an instance-adaptive prompting strategy (IAP) for zero-shot CoT reasoning. Experiments conducted with LLaMA-2, LLaMA-3, and Qwen on math, logic, and commonsense reasoning tasks (e.g., GSM8K, MMLU, Causal Judgement) obtain consistent improvements, demonstrating that instance-adaptive zero-shot CoT prompting performs better than other task-level methods with curated prompts or sophisticated procedures, underscoring the significance of our findings on the zero-shot CoT reasoning mechanism.
Instance-adaptive Zero-shot Chain-of-Thought Prompting
[ "Xiaosong Yuan", "Chen Shen", "Shaotian Yan", "Xiao Feng Zhang", "Liang Xie", "Wenxiao Wang", "Renchu Guan", "Ying Wang", "Jieping Ye" ]
NeurIPS.cc/2024/Conference
2409.20441
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=30NS22tgCW
@inproceedings{ zhang2024optimal, title={Optimal Scalarizations for Sublinear Hypervolume Regret}, author={Qiuyi Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=30NS22tgCW} }
Scalarization is a general, parallelizable technique that can be deployed in any multiobjective setting to reduce multiple objectives into one, yet some have dismissed this versatile approach because linear scalarizations cannot explore concave regions of the Pareto frontier. To that end, we aim to find simple non-linear scalarizations that provably explore a diverse set of $k$ objectives on the Pareto frontier, as measured by the dominated hypervolume. We show that hypervolume scalarizations with uniformly random weights achieve an optimal sublinear hypervolume regret bound of $O(T^{-1/k})$, with matching lower bounds that preclude any algorithm from doing better asymptotically. For the setting of multiobjective stochastic linear bandits, we utilize properties of hypervolume scalarizations to derive a novel non-Euclidean analysis to get regret bounds of $\tilde{O}( d T^{-1/2} + T^{-1/k})$, removing unnecessary $\text{poly}(k)$ dependencies. We support our theory with the strong empirical performance of non-linear scalarizations, which outperform both their linear counterparts and other standard multiobjective algorithms in a variety of natural settings.
Optimal Scalarizations for Sublinear Hypervolume Regret
[ "Qiuyi Zhang" ]
NeurIPS.cc/2024/Conference
2307.03288
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2zWbzx50mH
@inproceedings{ gross2024compact, title={Compact Proofs of Model Performance via Mechanistic Interpretability}, author={Jason Gross and Rajashree Agrawal and Thomas Kwa and Euan Ong and Chun Hei Yip and Alex Gibson and Soufiane Noubir and Lawrence Chan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2zWbzx50mH} }
We propose using mechanistic interpretability -- techniques for reverse engineering model weights into human-interpretable algorithms -- to derive and compactly prove formal guarantees on model performance. We prototype this approach by formally proving accuracy lower bounds for a small transformer trained on Max-of-$K$, validating proof transferability across 151 random seeds and four values of $K$. We create 102 different computer-assisted proof strategies and assess their length and tightness of bound on each of our models. Using quantitative metrics, we find that shorter proofs seem to require and provide more mechanistic understanding. Moreover, we find that more faithful mechanistic understanding leads to tighter performance bounds. We confirm these connections by qualitatively examining a subset of our proofs. Finally, we identify compounding structureless errors as a key challenge for using mechanistic interpretability to generate compact proofs on model performance.
Compact Proofs of Model Performance via Mechanistic Interpretability
[ "Jason Gross", "Rajashree Agrawal", "Thomas Kwa", "Euan Ong", "Chun Hei Yip", "Alex Gibson", "Soufiane Noubir", "Lawrence Chan" ]
NeurIPS.cc/2024/Conference
2406.11779
[ "https://github.com/jasongross/guarantees-based-mechanistic-interpretability-with-data" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2xTkeyJFJb
@inproceedings{ tang2024generative, title={Generative Retrieval Meets Multi-Graded Relevance}, author={Yubao Tang and Ruqing Zhang and Jiafeng Guo and Maarten de Rijke and Wei Chen and Xueqi Cheng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2xTkeyJFJb} }
Generative retrieval represents a novel approach to information retrieval, utilizing an encoder-decoder architecture to directly produce relevant document identifiers (docids) for queries. While this method offers benefits, current implementations are limited to scenarios with binary relevance data, overlooking the potential for documents to have multi-graded relevance. Extending generative retrieval to accommodate multi-graded relevance poses challenges, including the need to reconcile likelihood probabilities for docid pairs and the possibility of multiple relevant documents sharing the same identifier. To address these challenges, we introduce a new framework called GRaded Generative Retrieval (GR$^2$). Our approach focuses on two key components: ensuring relevant and distinct identifiers, and implementing multi-graded constrained contrastive training. Firstly, we aim to create identifiers that are both semantically relevant and sufficiently distinct to represent individual documents effectively. This is achieved by jointly optimizing the relevance and distinctness of docids through a combination of docid generation and autoencoder models. Secondly, we incorporate information about the relationship between relevance grades to guide the training process. Specifically, we leverage a constrained contrastive training strategy to bring the representations of queries and the identifiers of their relevant documents closer together, based on their respective relevance grades. Extensive experiments on datasets with both multi-graded and binary relevance demonstrate the effectiveness of our method.
Generative Retrieval Meets Multi-Graded Relevance
[ "Yubao Tang", "Ruqing Zhang", "Jiafeng Guo", "Maarten de Rijke", "Wei Chen", "Xueqi Cheng" ]
NeurIPS.cc/2024/Conference
2409.18409
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=2wlNnIqCb7
@inproceedings{ gualdoni2024bridging, title={Bridging semantics and pragmatics in information-theoretic emergent communication}, author={Eleonora Gualdoni and Mycal Tucker and Roger P. Levy and Noga Zaslavsky}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2wlNnIqCb7} }
Human languages support both semantic categorization and local pragmatic interactions that require context-sensitive reasoning about meaning. While semantics and pragmatics are two fundamental aspects of language, they are typically studied independently and their co-evolution is largely under-explored. Here, we aim to bridge this gap by studying how a shared lexicon may emerge from local pragmatic interactions. To this end, we extend a recent information-theoretic framework for emergent communication in artificial agents, which integrates utility maximization, associated with pragmatics, with general communicative constraints that are believed to shape human semantic systems. Specifically, we show how to adapt this framework to train agents via unsupervised pragmatic interactions, and then evaluate their emergent lexical semantics. We test this approach in a rich visual domain of naturalistic images, and find that key human-like properties of the lexicon emerge when agents are guided by both context-specific utility and general communicative pressures, suggesting that both aspects are crucial for understanding how language may evolve in humans and in artificial agents.
Bridging semantics and pragmatics in information-theoretic emergent communication
[ "Eleonora Gualdoni", "Mycal Tucker", "Roger P. Levy", "Noga Zaslavsky" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2wfd3pti8v
@inproceedings{ agrawal2024automated, title={Automated Efficient Estimation using Monte Carlo Efficient Influence Functions}, author={Raj Agrawal and Sam Witty and Andy Zane and Eli Bingham}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2wfd3pti8v} }
Many practical problems involve estimating low dimensional statistical quantities with high-dimensional models and datasets. Several approaches address these estimation tasks based on the theory of influence functions, such as debiased/double ML or targeted minimum loss estimation. We introduce \textit{Monte Carlo Efficient Influence Functions} (MC-EIF), a fully automated technique for approximating efficient influence functions that integrates seamlessly with existing differentiable probabilistic programming systems. MC-EIF automates efficient statistical estimation for a broad class of models and functionals that previously required rigorous custom analysis. We prove that MC-EIF is consistent, and that estimators using MC-EIF achieve optimal $\sqrt{N}$ convergence rates. We show empirically that estimators using MC-EIF are at parity with estimators using analytic EIFs. Finally, we present a novel capstone example using MC-EIF for optimal portfolio selection.
Automated Efficient Estimation using Monte Carlo Efficient Influence Functions
[ "Raj Agrawal", "Sam Witty", "Andy Zane", "Eli Bingham" ]
NeurIPS.cc/2024/Conference
2403.00158
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=2wMJ4wq4az
@inproceedings{ hang2024exploring, title={Exploring Fixed Point in Image Editing: Theoretical Support and Convergence Optimization}, author={chen hang and Zhe Ma and Haoming Chen and Xuwei Fang and Weisheng Xie and Faming Fang and Guixu Zhang and Hongbin Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2wMJ4wq4az} }
In image editing, Denoising Diffusion Implicit Models (DDIM) inversion has become a widely adopted method and is extensively used in various image editing approaches. The core concept of DDIM inversion stems from the deterministic sampling technique of DDIM, which allows the DDIM process to be viewed as an Ordinary Differential Equation (ODE) process that is reversible. This enables the prediction of corresponding noise from a reference image, ensuring that the restored image from this noise remains consistent with the reference image. Image editing exploits this property by modifying the cross-attention between text and images to edit specific objects while preserving the remaining regions. However, in the DDIM inversion, using the $t-1$ time step to approximate the noise prediction at time step $t$ introduces errors between the restored image and the reference image. Recent approaches have modeled each step of the DDIM inversion process as finding a fixed-point problem of an implicit function. This approach significantly mitigates the error in the restored image but lacks theoretical support regarding the existence of such fixed points. Therefore, this paper focuses on the study of fixed points in DDIM inversion and provides theoretical support. Based on the obtained theoretical insights, we further optimize the loss function for the convergence of fixed points in the original DDIM inversion, improving the visual quality of the edited image. Finally, we extend the fixed-point based image editing to the application of unsupervised image dehazing, introducing a novel text-based approach for unsupervised dehazing.
Exploring Fixed Point in Image Editing: Theoretical Support and Convergence Optimization
[ "chen hang", "Zhe Ma", "Haoming Chen", "Xuwei Fang", "Weisheng Xie", "Faming Fang", "Guixu Zhang", "Hongbin Wang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2vywag2lVC
@inproceedings{ montenegro2024lastiterate, title={Last-Iterate Global Convergence of Policy Gradients for Constrained Reinforcement Learning}, author={Alessandro Montenegro and Marco Mussi and Matteo Papini and Alberto Maria Metelli}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2vywag2lVC} }
*Constrained Reinforcement Learning* (CRL) tackles sequential decision-making problems where agents are required to achieve goals by maximizing the expected return while meeting domain-specific constraints, which are often formulated on expected costs. In this setting, *policy-based* methods are widely used since they come with several advantages when dealing with continuous-control problems. These methods search in the policy space with an *action-based* or *parameter-based* exploration strategy, depending on whether they learn directly the parameters of a stochastic policy or those of a stochastic hyperpolicy. In this paper, we propose a general framework for addressing CRL problems via *gradient-based primal-dual* algorithms, relying on an alternate ascent/descent scheme with dual-variable regularization. We introduce an exploration-agnostic algorithm, called C-PG, which exhibits global last-iterate convergence guarantees under (weak) gradient domination assumptions, improving and generalizing existing results. Then, we design C-PGAE and C-PGPE, the action-based and the parameter-based versions of C-PG, respectively, and we illustrate how they naturally extend to constraints defined in terms of *risk measures* over the costs, as it is often requested in safety-critical scenarios. Finally, we numerically validate our algorithms on constrained control problems, and compare them with state-of-the-art baselines, demonstrating their effectiveness.
Last-Iterate Global Convergence of Policy Gradients for Constrained Reinforcement Learning
[ "Alessandro Montenegro", "Marco Mussi", "Matteo Papini", "Alberto Maria Metelli" ]
NeurIPS.cc/2024/Conference
2407.10775
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2vMvh5XP0P
@inproceedings{ dihlmann2024subsurface, title={Subsurface Scattering for Gaussian Splatting}, author={Jan-Niklas Dihlmann and Arjun Majumdar and Andreas Engelhardt and Raphael Braun and Hendrik Lensch}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2vMvh5XP0P} }
3D reconstruction and relighting of objects made from scattering materials present a significant challenge due to the complex light transport beneath the surface. 3D Gaussian Splatting introduced high-quality novel view synthesis at real-time speeds. While 3D Gaussians efficiently approximate an object's surface, they fail to capture the volumetric properties of subsurface scattering. We propose a framework for optimizing an object's shape together with the radiance transfer field given multi-view OLAT (one light at a time) data. Our method decomposes the scene into an explicit surface represented as 3D Gaussians, with a spatially varying BRDF, and an implicit volumetric representation of the scattering component. A learned incident light field accounts for shadowing. We optimize all parameters jointly via ray-traced differentiable rendering. Our approach enables material editing, relighting, and novel view synthesis at interactive rates. We show successful application on synthetic data and contribute a newly acquired multi-view multi-light dataset of objects in a light-stage setup. Compared to previous work we achieve comparable or better results at a fraction of optimization and rendering time while enabling detailed control over material attributes.
Subsurface Scattering for Gaussian Splatting
[ "Jan-Niklas Dihlmann", "Arjun Majumdar", "Andreas Engelhardt", "Raphael Braun", "Hendrik Lensch" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2uy3LZHNIG
@inproceedings{ wu2024smart, title={{SMART}: Scalable Multi-agent Real-time Motion Generation via Next-token Prediction}, author={Wei Wu and Xiaoxin Feng and Ziyan Gao and Yuheng KAN}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2uy3LZHNIG} }
Data-driven autonomous driving motion generation tasks are frequently impacted by the limitations of dataset size and the domain gap between datasets, which precludes their extensive application in real-world scenarios. To address this issue, we introduce SMART, a novel autonomous driving motion generation paradigm that models vectorized map and agent trajectory data into discrete sequence tokens. These tokens are then processed through a decoder-only transformer architecture to train for the next token prediction task across spatial-temporal series. This GPT-style method allows the model to learn the motion distribution in real driving scenarios. SMART achieves state-of-the-art performance across most of the metrics on the generative Sim Agents challenge, ranking 1st on the leaderboards of Waymo Open Motion Dataset (WOMD), demonstrating remarkable inference speed. Moreover, SMART represents the generative model in the autonomous driving motion domain, exhibiting zero-shot generalization capabilities: Using only the NuPlan dataset for training and WOMD for validation, SMART achieved a competitive score of 0.72 on the Sim Agents challenge. Lastly, we have collected over 1 billion motion tokens from multiple datasets, validating the model's scalability. These results suggest that SMART has initially emulated two important properties: scalability and zero-shot generalization, and preliminarily meets the needs of large-scale real-time simulation applications. We have released all the code to promote the exploration of models for motion generation in the autonomous driving field. The source code is available at https://github.com/rainmaker22/SMART.
SMART: Scalable Multi-agent Real-time Motion Generation via Next-token Prediction
[ "Wei Wu", "Xiaoxin Feng", "Ziyan Gao", "Yuheng KAN" ]
NeurIPS.cc/2024/Conference
2405.15677
[ "https://github.com/rainmaker22/SMART" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2squ766Iq4
@inproceedings{ kong2024towards, title={Towards Understanding Extrapolation: a Causal Lens}, author={Lingjing Kong and Guangyi Chen and Petar Stojanov and Haoxuan Li and Eric P. Xing and Kun Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2squ766Iq4} }
Canonical work handling distribution shifts typically necessitates an entire target distribution that lands inside the training distribution. However, practical scenarios often involve only a handful of target samples, potentially lying outside the training support, which requires the capability of extrapolation. In this work, we aim to provide a theoretical understanding of when extrapolation is possible and offer principled methods to achieve it without requiring an on-support target distribution. To this end, we formulate the extrapolation problem with a latent-variable model that embodies the minimal change principle in causal mechanisms. Under this formulation, we cast the extrapolation problem into a latent-variable identification problem. We provide realistic conditions on shift properties and the estimation objectives that lead to identification even when only one off-support target sample is available, tackling the most challenging scenarios. Our theory reveals the intricate interplay between the underlying manifold's smoothness and the shift properties. We showcase how our theoretical results inform the design of practical adaptation algorithms. Through experiments on both synthetic and real-world data, we validate our theoretical findings and their practical implications.
Towards Understanding Extrapolation: a Causal Lens
[ "Lingjing Kong", "Guangyi Chen", "Petar Stojanov", "Haoxuan Li", "Eric P. Xing", "Kun Zhang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2pgc5xDJ1b
@inproceedings{ ek2024externally, title={Externally Valid Policy Evaluation from Randomized Trials Using Additional Observational Data}, author={Sofia Ek and Dave Zachariah}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2pgc5xDJ1b} }
Randomized trials are widely considered as the gold standard for evaluating the effects of decision policies. Trial data is, however, drawn from a population which may differ from the intended target population and this raises a problem of external validity (aka. generalizability). In this paper we seek to use trial data to draw valid inferences about the outcome of a policy on the target population. Additional covariate data from the target population is used to model the sampling of individuals in the trial study. We develop a method that yields certifiably valid trial-based policy evaluations under any specified range of model miscalibrations. The method is nonparametric and the validity is assured even with finite samples. The certified policy evaluations are illustrated using both simulated and real data.
Externally Valid Policy Evaluation from Randomized Trials Using Additional Observational Data
[ "Sofia Ek", "Dave Zachariah" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2oZea6pKhl
@inproceedings{ ding2024radarocc, title={RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar}, author={Fangqiang Ding and Xiangyu Wen and Yunzhou Zhu and Yiming Li and Chris Xiaoxuan Lu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2oZea6pKhl} }
3D occupancy-based perception pipeline has significantly advanced autonomous driving by capturing detailed scene descriptions and demonstrating strong generalizability across various object categories and shapes. Current methods predominantly rely on LiDAR or camera inputs for 3D occupancy prediction. These methods are susceptible to adverse weather conditions, limiting the all-weather deployment of self-driving cars. To improve perception robustness, we leverage the recent advances in automotive radars and introduce a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction. Our method, RadarOcc, circumvents the limitations of sparse radar point clouds by directly processing the 4D radar tensor, thus preserving essential scene details. RadarOcc innovatively addresses the challenges associated with the voluminous and noisy 4D radar data by employing Doppler bins descriptors, sidelobe-aware spatial sparsification, and range-wise self-attention mechanisms. To minimize the interpolation errors associated with direct coordinate transformations, we also devise a spherical-based feature encoding followed by spherical-to-Cartesian feature aggregation. We benchmark various baseline methods based on distinct modalities on the public K-Radar dataset. The results demonstrate RadarOcc's state-of-the-art performance in radar-based 3D occupancy prediction and promising results even when compared with LiDAR- or camera-based methods. Additionally, we present qualitative evidence of the superior performance of 4D radar in adverse weather conditions and explore the impact of key pipeline components through ablation studies.
RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar
[ "Fangqiang Ding", "Xiangyu Wen", "Yunzhou Zhu", "Yiming Li", "Chris Xiaoxuan Lu" ]
NeurIPS.cc/2024/Conference
2405.14014
[ "https://github.com/toytiny/radarocc" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2nvkD0sPOk
@inproceedings{ wang2024del, title={{DEL}: Discrete Element Learner for Learning 3D Particle Dynamics with Neural Rendering}, author={Jiaxu Wang and Jingkai SUN and Ziyi Zhang and Junhao He and Qiang Zhang and Mingyuan Sun and Renjing Xu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2nvkD0sPOk} }
Learning-based simulators show great potential for simulating particle dynamics when 3D groundtruth is available, but per-particle correspondences are not always accessible. The development of neural rendering presents a new solution to this field to learn 3D dynamics from 2D images by inverse rendering. However, existing approaches still suffer from ill-posedness arising from the 2D-to-3D uncertainty; for example, a specific 2D image can correspond to various 3D particle distributions. To mitigate such uncertainty, we consider a conventional, mechanically interpretable framework as the physical prior and extend it to a learning-based version. In brief, we incorporate learnable graph kernels into the classic Discrete Element Analysis (DEA) framework to implement a novel mechanics-informed network architecture. In this case, the graph networks are only used for approximating some specific mechanical operators in the DEA framework rather than the whole dynamics mapping. By integrating the strong physics priors, our method can effectively learn the dynamics of various materials from partial 2D observations in a unified manner. Experiments show that our approach outperforms other learned simulators by a large margin in this context and is robust to different renderers, fewer training samples, and fewer camera views.
DEL: Discrete Element Learner for Learning 3D Particle Dynamics with Neural Rendering
[ "Jiaxu Wang", "Jingkai SUN", "Ziyi Zhang", "Junhao He", "Qiang Zhang", "Mingyuan Sun", "Renjing Xu" ]
NeurIPS.cc/2024/Conference
2410.08983
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2nisrxMMQR
@inproceedings{ zhou2024metaexploiting, title={Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning}, author={Fei Zhou and Peng Wang and Lei Zhang and Zhenghua Chen and Wei Wei and Chen Ding and Guosheng Lin and Yanning Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2nisrxMMQR} }
Meta-learning offers a promising avenue for few-shot learning (FSL), enabling models to glean a generalizable feature embedding through episodic training on synthetic FSL tasks in a source domain. Yet, in practical scenarios where the target task diverges from that in the source domain, meta-learning based methods are susceptible to over-fitting. To overcome this, we introduce a novel framework, Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning, which is crafted to comprehensively exploit the cross-domain transferable image prior that each image can be decomposed into complementary low-frequency content details and high-frequency robust structural characteristics. Motivated by this insight, we propose to decompose each query image into its high-frequency and low-frequency components, and incorporate them in parallel into the feature embedding network to enhance the final category prediction. More importantly, we introduce a feature reconstruction prior and a prediction consistency prior to separately encourage the consistency of the intermediate feature as well as the final category prediction between the original query image and its decomposed frequency components. This allows for collectively guiding the network's meta-learning process with the aim of learning generalizable image feature embeddings, while not introducing any extra computational cost in the inference phase. Our framework establishes new state-of-the-art results on multiple cross-domain few-shot learning benchmarks.
Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning
[ "Fei Zhou", "Peng Wang", "Lei Zhang", "Zhenghua Chen", "Wei Wei", "Chen Ding", "Guosheng Lin", "Yanning Zhang" ]
NeurIPS.cc/2024/Conference
2411.01432
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2n1Ysn1EDl
@inproceedings{ jafari2024mambalrp, title={Mamba{LRP}: Explaining Selective State Space Sequence Models}, author={Farnoush Rezaei Jafari and Gr{\'e}goire Montavon and Klaus Robert Muller and Oliver Eberle}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2n1Ysn1EDl} }
Recent sequence modeling approaches using selective state space sequence models, referred to as Mamba models, have seen a surge of interest. These models allow efficient processing of long sequences in linear time and are rapidly being adopted in a wide range of applications such as language modeling, demonstrating promising performance. To foster their reliable use in real-world scenarios, it is crucial to augment their transparency. Our work bridges this critical gap by bringing explainability, particularly Layer-wise Relevance Propagation (LRP), to the Mamba architecture. Guided by the axiom of relevance conservation, we identify specific components in the Mamba architecture, which cause unfaithful explanations. To remedy this issue, we propose MambaLRP, a novel algorithm within the LRP framework, which ensures a more stable and reliable relevance propagation through these components. Our proposed method is theoretically sound and excels in achieving state-of-the-art explanation performance across a diverse range of models and datasets. Moreover, MambaLRP facilitates a deeper inspection of Mamba architectures, uncovering various biases and evaluating their significance. It also enables the analysis of previous speculations regarding the long-range capabilities of Mamba models.
MambaLRP: Explaining Selective State Space Sequence Models
[ "Farnoush Rezaei Jafari", "Grégoire Montavon", "Klaus Robert Muller", "Oliver Eberle" ]
NeurIPS.cc/2024/Conference
2406.07592
[ "https://github.com/FarnoushRJ/MambaLRP" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2mqiTiJKrx
@inproceedings{ zhao2024adaptive, title={Adaptive Experimentation When You Can't Experiment}, author={Yao Zhao and Kwang-Sung Jun and Tanner Fiez and Lalit K Jain}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2mqiTiJKrx} }
This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem. As a motivating example, often online services cannot directly assign users to specific control or treatment experiences either for business or practical reasons. In these settings, naively comparing treatment and control groups that may result from self-selection can lead to biased estimates of underlying treatment effects. Instead, online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment. Our methodology provides online services with an adaptive experimental design approach for learning the best-performing treatment for such encouragement designs. We consider a more general underlying model captured by a linear structural equation and formulate pure exploration linear bandits in this setting. Though pure exploration has been extensively studied in standard adaptive experimental design settings, we believe this is the first work considering a setting where noise is confounded. Elimination-style algorithms using experimental design methods in combination with a novel finite-time confidence interval on an instrumental variable style estimator are presented with sample complexity upper bounds nearly matching a minimax lower bound. Finally, experiments are conducted that demonstrate the efficacy of our approach.
Adaptive Experimentation When You Can't Experiment
[ "Yao Zhao", "Kwang-Sung Jun", "Tanner Fiez", "Lalit K Jain" ]
NeurIPS.cc/2024/Conference
2406.10738
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2ltOkbo67R
@inproceedings{ zhang2024contextual, title={Contextual Multinomial Logit Bandits with General Value Functions}, author={Mengxiao Zhang and Haipeng Luo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2ltOkbo67R} }
Contextual multinomial logit (MNL) bandits capture many real-world assortment recommendation problems such as online retailing/advertising. However, prior work has only considered (generalized) linear value functions, which greatly limits its applicability. Motivated by this fact, in this work, we consider contextual MNL bandits with a general value function class that contains the ground truth, borrowing ideas from a recent trend of studies on contextual bandits. Specifically, we consider both the stochastic and the adversarial settings, and propose a suite of algorithms, each with different computation-regret trade-off. When applied to the linear case, our results not only are the first ones with no dependence on a certain problem-dependent constant that can be exponentially large, but also enjoy other advantages such as computational efficiency, dimension-free regret bounds, or the ability to handle completely adversarial contexts and rewards.
Contextual Multinomial Logit Bandits with General Value Functions
[ "Mengxiao Zhang", "Haipeng Luo" ]
NeurIPS.cc/2024/Conference
2402.08126
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2lL7s5ESTj
@inproceedings{ woude2024replicability, title={Replicability in Learning: Geometric Partitions and {KKM}-Sperner Lemma}, author={Jason Vander Woude and Peter Dixon and A. Pavan and Jamie Radcliffe and N. V. Vinodchandran}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2lL7s5ESTj} }
This paper studies replicability in machine learning tasks from a geometric viewpoint. Recent works have revealed the role of geometric partitions and Sperner's lemma (and its variations) in designing replicable learning algorithms and in establishing impossibility results. A partition $\mathcal{P}$ of $\mathbb{R}^d$ is called a $(k,\epsilon)$-secluded partition if for every $\vec{p}\in\mathbb{R}^d$, an $\epsilon$-radius ball (with respect to the $\ell_{\infty}$ norm) centered at $\vec{p}$ intersects at most $k$ members of $\mathcal{P}$. In relation to replicable learning, the parameter $k$ is closely related to the $\textit{list complexity}$, and the parameter $\epsilon$ is related to the sample complexity of the replicable learner. Construction of secluded partitions with better parameters (small $k$ and large $\epsilon$) will lead to replicable learning algorithms with small list and sample complexities. Motivated by this connection, we undertake a comprehensive study of secluded partitions and establish near-optimal relationships between $k$ and $\epsilon$. 1. We show that for any $(k,\epsilon)$-secluded partition where each member has at most unit measure, it must be that $k \geq(1+2\epsilon)^d$, and consequently, for the interesting regime $k\in[2^d]$ it must be that $\epsilon\leq\frac{\log_4(k)}{d}$. 2. To complement this upper bound on $\epsilon$, we give, for each $d\in\mathbb{N}$ and each viable $k\in[2^d]$, a construction of a $(k,\epsilon)$-secluded (unit cube) partition with $\epsilon\geq\frac{\log_4(k)}{d}\cdot\frac{1}{8\log_4(d+1)}$. This establishes the optimality of $\epsilon$ within a logarithmic factor. 3. 
Finally, we adapt our proof techniques to obtain a new ``neighborhood'' variant of the cubical KKM lemma (or cubical Sperner's lemma): For any coloring of $[0,1]^d$ in which no color is used on opposing faces, it holds for each $\epsilon\in(0,\frac12]$ that there is a point where the open $\epsilon$-radius $\ell_\infty$-ball intersects at least $(1+\frac23\epsilon)^d$ colors. While the classical Sperner/KKM lemma guarantees the existence of a point that is "adjacent" to points with $(d+1)$ distinct colors, the neighborhood version guarantees the existence of a small neighborhood with exponentially many points with distinct colors.
Replicability in Learning: Geometric Partitions and KKM-Sperner Lemma
[ "Jason Vander Woude", "Peter Dixon", "A. Pavan", "Jamie Radcliffe", "N. V. Vinodchandran" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2kZMtdjzSV
@inproceedings{ duong2024beyond, title={Beyond task diversity: provable representation transfer for sequential multitask linear bandits}, author={Thang Duong and Zhi Wang and Chicheng Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2kZMtdjzSV} }
We study lifelong learning in linear bandits, where a learner interacts with a sequence of linear bandit tasks whose parameters lie in an $m$-dimensional subspace of $\mathbb{R}^d$, thereby sharing a low-rank representation. Current literature typically assumes that the tasks are diverse, i.e., their parameters uniformly span the $m$-dimensional subspace. This assumption allows the low-rank representation to be learned before all tasks are revealed, which can be unrealistic in real-world applications. In this work, we present the first nontrivial result for sequential multi-task linear bandits without the task diversity assumption. We develop an algorithm that efficiently learns and transfers low-rank representations. When facing $N$ tasks, each played over $\tau$ rounds, our algorithm achieves a regret guarantee of $\tilde{O}\big (Nm \sqrt{\tau} + N^{\frac{2}{3}} \tau^{\frac{2}{3}} d m^{\frac13} + Nd^2 + \tau m d \big)$ under the ellipsoid action set assumption.
Beyond task diversity: provable representation transfer for sequential multitask linear bandits
[ "Thang Duong", "Zhi Wang", "Chicheng Zhang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2jjfRm2R6D
@inproceedings{ jiang2024multilanguage, title={Multi-language Diversity Benefits Autoformalization}, author={Albert Q. Jiang and Wenda Li and Mateja Jamnik}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2jjfRm2R6D} }
Autoformalization is the task of translating natural language materials into machine-verifiable formalisations. Progress in autoformalization research is hindered by the lack of a sizeable dataset consisting of informal-formal pairs expressing the same essence. Existing methods tend to circumvent this challenge by manually curating small corpora or using few-shot learning with large language models. But these methods suffer from data scarcity and formal language acquisition difficulty. In this work, we create mma, a large, flexible, multi-language, and multi-domain dataset of informal-formal pairs, by using a language model to translate in the reverse direction, that is, from formal mathematical statements into corresponding informal ones. Experiments show that language models fine-tuned on mma can produce up to $29-31$\% of statements acceptable with minimal corrections on the miniF2F and ProofNet benchmarks, up from $0$\% with the base model. We demonstrate that fine-tuning on multi-language formal data results in more capable autoformalization models even on single-language tasks.
Multi-language Diversity Benefits Autoformalization
[ "Albert Q. Jiang", "Wenda Li", "Mateja Jamnik" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2hqHWD7wDb
@inproceedings{ kong2024quantitative, title={Quantitative Convergences of Lie Group Momentum Optimizers}, author={Lingkai Kong and Molei Tao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2hqHWD7wDb} }
Explicit, momentum-based dynamics that optimize functions defined on Lie groups can be constructed via variational optimization and momentum trivialization. Structure-preserving time discretizations can then turn these dynamics into optimization algorithms. This article investigates two types of discretization: Lie Heavy-Ball, which is a known splitting scheme, and Lie NAG-SC, which is newly proposed. Their convergence rates are explicitly quantified under $L$-smoothness and \emph{local} strong convexity assumptions. Lie NAG-SC provides acceleration over the momentumless case, i.e., Riemannian gradient descent, but Lie Heavy-Ball does not. When compared to existing accelerated optimizers for general manifolds, both Lie Heavy-Ball and Lie NAG-SC are computationally cheaper and easier to implement, thanks to their utilization of group structure. Only a gradient oracle and the exponential map are required, not the logarithm map or parallel transport, which are computationally costly.
Quantitative Convergences of Lie Group Momentum Optimizers
[ "Lingkai Kong", "Molei Tao" ]
NeurIPS.cc/2024/Conference
2405.20390
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2gtNa14V45
@inproceedings{ wang2024oneactor, title={OneActor: Consistent Subject Generation via Cluster-Conditioned Guidance}, author={Jiahao Wang and Caixia Yan and Haonan Lin and Weizhan Zhang and Mengmeng Wang and Tieliang Gong and Guang Dai and Hao Sun}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2gtNa14V45} }
Text-to-image diffusion models benefit artists with high-quality image generation. Yet their stochastic nature hinders artists from creating consistent images of the same subject. Existing methods try to tackle this challenge and generate consistent content in various ways. However, they either depend on external restricted data or require expensive tuning of the diffusion model. For this issue, we propose a novel one-shot tuning paradigm, termed OneActor. It efficiently performs consistent subject generation solely driven by prompts via a learned semantic guidance to bypass the laborious backbone tuning. We lead the way to formalize the objective of consistent subject generation from a clustering perspective, and thus design a cluster-conditioned model. To mitigate the overfitting challenge shared by one-shot tuning pipelines, we augment the tuning with auxiliary samples and devise two inference strategies: semantic interpolation and cluster guidance. These techniques are later verified to significantly improve the generation quality. Comprehensive experiments show that our method outperforms a variety of baselines with satisfactory subject consistency, superior prompt conformity as well as high image quality. Our method is capable of multi-subject generation and compatible with popular diffusion extensions. Besides, we achieve a $4\times$ faster tuning speed than tuning-based baselines and, if desired, avoid increasing the inference time. Furthermore, our method can be naturally utilized to pre-train a consistent subject generation network from scratch, which will implement this research task into more practical applications. (Project page: https://johnneywang.github.io/OneActor-webpage/)
OneActor: Consistent Subject Generation via Cluster-Conditioned Guidance
[ "Jiahao Wang", "Caixia Yan", "Haonan Lin", "Weizhan Zhang", "Mengmeng Wang", "Tieliang Gong", "Guang Dai", "Hao Sun" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2fiYzs3YkH
@inproceedings{ zhang2024unleashing, title={Unleashing the Denoising Capability of Diffusion Prior for Solving Inverse Problems}, author={Jiawei Zhang and Jiaxin Zhuang and Cheng Jin and Gen Li and Yuantao Gu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2fiYzs3YkH} }
The recent emergence of diffusion models has significantly advanced the precision of learnable priors, presenting innovative avenues for addressing inverse problems. Previous works have endeavored to integrate diffusion priors into the maximum a posteriori estimation (MAP) framework and design optimization methods to solve the inverse problem. However, prevailing optimization-based algorithms primarily exploit the prior information within the diffusion models while neglecting their denoising capability. To bridge this gap, this work leverages the diffusion process to reframe noisy inverse problems as a two-variable constrained optimization task by introducing an auxiliary optimization variable that represents a 'noisy' sample at an equivalent denoising step. The projected gradient descent method is efficiently utilized to solve the corresponding optimization problem by truncating the gradient through the $\mu$-predictor. The proposed algorithm, termed ProjDiff, effectively harnesses the prior information and the denoising capability of a pre-trained diffusion model within the optimization framework. Extensive experiments on image restoration tasks and on source separation and partial generation tasks demonstrate that ProjDiff exhibits superior performance across various linear and nonlinear inverse problems, highlighting its potential for practical applications. Code is available at https://github.com/weigerzan/ProjDiff/.
Unleashing the Denoising Capability of Diffusion Prior for Solving Inverse Problems
[ "Jiawei Zhang", "Jiaxin Zhuang", "Cheng Jin", "Gen Li", "Yuantao Gu" ]
NeurIPS.cc/2024/Conference
2406.06959
[ "https://github.com/weigerzan/projdiff" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2dfBpyqh0A
@inproceedings{ zhang2024gaussian, title={Gaussian Graph Network: Learning Efficient and Generalizable Gaussian Representations from Multi-view Images}, author={Shengjun Zhang and Xin Fei and Fangfu Liu and Haixu Song and Yueqi Duan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2dfBpyqh0A} }
3D Gaussian Splatting (3DGS) has demonstrated impressive novel view synthesis performance. While conventional methods require per-scene optimization, more recently several feed-forward methods have been proposed to generate pixel-aligned Gaussian representations with a learnable network, which are generalizable to different scenes. However, these methods simply combine pixel-aligned Gaussians from multiple views as scene representations, thereby leading to artifacts and extra memory cost without fully capturing the relations of Gaussians from different images. In this paper, we propose Gaussian Graph Network (GGN) to generate efficient and generalizable Gaussian representations. Specifically, we construct Gaussian Graphs to model the relations of Gaussian groups from different views. To support message passing at Gaussian level, we reformulate the basic graph operations over Gaussian representations, enabling each Gaussian to benefit from its connected Gaussian groups with Gaussian feature fusion. Furthermore, we design a Gaussian pooling layer to aggregate various Gaussian groups for efficient representations. We conduct experiments on the large-scale RealEstate10K and ACID datasets to demonstrate the efficiency and generalization of our method. Compared to the state-of-the-art methods, our model uses fewer Gaussians and achieves better image quality with higher rendering speed.
Gaussian Graph Network: Learning Efficient and Generalizable Gaussian Representations from Multi-view Images
[ "Shengjun Zhang", "Xin Fei", "Fangfu Liu", "Haixu Song", "Yueqi Duan" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2cczgOfMP4
@inproceedings{ zhang2024chain, title={Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in {LLM}s}, author={Xuan Zhang and Chao Du and Tianyu Pang and Qian Liu and Wei Gao and Min Lin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2cczgOfMP4} }
The recent development of chain-of-thought (CoT) decoding has enabled large language models (LLMs) to generate explicit logical reasoning paths for complex problem-solving. However, research indicates that these paths are not always deliberate and optimal. The tree-of-thought (ToT) method employs tree-searching to extensively explore the reasoning space and find better reasoning paths that CoT decoding might overlook. This deliberation, however, comes at the cost of significantly increased inference complexity. In this work, we demonstrate that fine-tuning LLMs leveraging the search tree constructed by ToT allows CoT to achieve similar or better performance, thereby avoiding the substantial inference burden. This is achieved through \emph{Chain of Preference Optimization} (CPO), where LLMs are fine-tuned to align each step of the CoT reasoning paths with those of ToT using the inherent preference information in the tree-search process. Extensive experimental results show that CPO significantly improves LLM performance in solving a variety of complex problems, including question answering, fact verification, and arithmetic reasoning, demonstrating its effectiveness. Our code is available at [https://github.com/sail-sg/CPO](https://github.com/sail-sg/CPO).
Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs
[ "Xuan Zhang", "Chao Du", "Tianyu Pang", "Qian Liu", "Wei Gao", "Min Lin" ]
NeurIPS.cc/2024/Conference
2406.09136
[ "https://github.com/sail-sg/cpo" ]
https://huggingface.co/papers/2406.09136
2
0
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=2cQ3lPhkeO
@inproceedings{ liu2024provably, title={Provably Mitigating Overoptimization in {RLHF}: Your {SFT} Loss is Implicitly an Adversarial Regularizer}, author={Zhihan Liu and Miao Lu and Shenao Zhang and Boyi Liu and Hongyi Guo and Yingxiang Yang and Jose Blanchet and Zhaoran Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2cQ3lPhkeO} }
Aligning generative models with human preference via RLHF typically suffers from overoptimization, where an imperfectly learned reward model can misguide the generative model to output even undesired responses. We investigate this problem in a principled manner by identifying the source of the issue as the distributional shift and uncertainty of human preference in the dataset. To mitigate overoptimization, we first propose a theoretical algorithm that optimizes the policy against an adversarially chosen reward model, one that simultaneously minimizes its MLE loss and a reward penalty term. The penalty pessimistically biases the uncertain rewards so as to prevent the policy from choosing actions with spuriously high proxy rewards, resulting in provable sample efficiency of the algorithm under a partial-coverage-style condition. Moving from theory to practice, the proposed algorithm further enjoys an equivalent but surprisingly easy-to-implement form. With a clever usage of the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines (i) a preference optimization loss that directly aligns the policy with human preference, and (ii) a supervised learning loss that explicitly imitates the policy with a baseline distribution. In the context of aligning large language models (LLMs), this objective fuses the direct preference optimization (DPO) loss with the supervised fine-tuning (SFT) loss to help mitigate the overoptimization towards undesired responses, for which we name the algorithm Regularized Preference Optimization (RPO). Experiments on aligning LLMs demonstrate the improved performance of our method when compared with DPO baselines. Our work sheds light on the interplay between preference optimization and SFT in tuning LLMs with both theoretical guarantees and empirical evidence.
Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer
[ "Zhihan Liu", "Miao Lu", "Shenao Zhang", "Boyi Liu", "Hongyi Guo", "Yingxiang Yang", "Jose Blanchet", "Zhaoran Wang" ]
NeurIPS.cc/2024/Conference
2405.16436
[ "" ]
https://huggingface.co/papers/2405.16436
1
0
0
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=2cFUYnNL1m
@inproceedings{ xie2024weight, title={Weight Diffusion for Future: Learn to Generalize in Non-Stationary Environments}, author={Mixue Xie and Shuang Li and Binhui Xie and Chi Harold Liu and Jian Liang and Zixun Sun and Ke Feng and Chengwei Zhu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2cFUYnNL1m} }
Enabling deep models to generalize in non-stationary environments is vital for real-world machine learning, as data distributions are often found to continually change. Recently, evolving domain generalization (EDG) has emerged to tackle domain generalization in a time-varying system, where the domain gradually evolves over time in an underlying continuous structure. Nevertheless, it typically assumes that multiple source domains are simultaneously available. It still remains an open problem to address EDG in the domain-incremental setting, where source domains are non-static and arrive sequentially to mimic the evolution of training domains. To this end, we propose Weight Diffusion (W-Diff), a novel framework that utilizes a conditional diffusion model in the parameter space to learn the evolving pattern of classifiers during the domain-incremental training process. Specifically, the diffusion model is conditioned on the classifier weights of different historical domains (regarded as reference points) and the prototypes of the current domain, to learn the evolution from the reference point to the classifier weights of the current domain (regarded as the anchor point). In addition, a domain-shared feature encoder is learned by enforcing prediction consistency among multiple classifiers, so as to mitigate the overfitting problem and restrict the evolving pattern to be reflected in the classifier as much as possible. During inference, we adopt an ensemble strategy based on a large number of target domain-customized classifiers, which are cheaply obtained via the conditional diffusion model, for robust prediction. Comprehensive experiments on both synthetic and real-world datasets show the superior generalization performance of W-Diff on unseen future domains.
Weight Diffusion for Future: Learn to Generalize in Non-Stationary Environments
[ "Mixue Xie", "Shuang Li", "Binhui Xie", "Chi Harold Liu", "Jian Liang", "Zixun Sun", "Ke Feng", "Chengwei Zhu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2bon4HLFkN
@inproceedings{ gong2024ircm, title={{IR}-{CM}: The Fast and Universal Image Restoration Method Based on Consistency Model}, author={Xiaoxuan Gong and Jie Ma}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2bon4HLFkN} }
This paper proposes a fast and general-purpose image restoration method. The key idea is to achieve few-step or even one-step inference by conducting consistency distillation or training on a specific mean-reverting stochastic differential equation. Furthermore, based on this, we propose a novel linear-nonlinear decoupling training strategy, significantly enhancing training effectiveness and surpassing consistency distillation in inference performance. This allows our method to be independent of any pre-trained checkpoint, enabling it to serve as an effective standalone image-to-image transformation model. Finally, to avoid trivial solutions and stabilize model training, we introduce a simple origin-guided loss. To validate the effectiveness of our proposed method, we conducted experiments on tasks including image deraining, denoising, deblurring, and low-light image enhancement. The experiments show that our method achieves highly competitive results with only one-step inference, and with just two-step inference, it can achieve state-of-the-art performance in low-light image enhancement. Furthermore, a number of ablation experiments demonstrate the effectiveness of the proposed training strategy. Our code is available at https://github.com/XiaoxuanGong/IR-CM.
IR-CM: The Fast and Universal Image Restoration Method Based on Consistency Model
[ "Xiaoxuan Gong", "Jie Ma" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2bdSnxeQcW
@inproceedings{ yeom2024exclusively, title={Exclusively Penalized Q-learning for Offline Reinforcement Learning}, author={Junghyuk Yeom and Yonghyeon Jo and Jeongmo Kim and Sanghyeon Lee and Seungyul Han}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2bdSnxeQcW} }
Constraint-based offline reinforcement learning (RL) involves imposing policy constraints or penalties on the value function to mitigate overestimation errors caused by distributional shift. This paper focuses on a limitation of existing offline RL methods with penalized value functions, indicating the potential for underestimation bias due to unnecessary bias introduced in the value function. To address this concern, we propose Exclusively Penalized Q-learning (EPQ), which reduces estimation bias in the value function by selectively penalizing states that are prone to inducing estimation errors. Numerical results show that our method significantly reduces underestimation bias and improves performance in various offline control tasks compared to other offline RL methods.
Exclusively Penalized Q-learning for Offline Reinforcement Learning
[ "Junghyuk Yeom", "Yonghyeon Jo", "Jeongmo Kim", "Sanghyeon Lee", "Seungyul Han" ]
NeurIPS.cc/2024/Conference
2405.14082
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=2aGcshccuV
@inproceedings{ lu2024when, title={When Is Inductive Inference Possible?}, author={Zhou Lu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2aGcshccuV} }
Can a physicist make only a finite number of errors in the eternal quest to uncover the law of nature? This millennium-old philosophical problem, known as inductive inference, lies at the heart of epistemology. Despite its significance to understanding human reasoning, a rigorous justification of inductive inference has remained elusive. At a high level, inductive inference asks whether one can make at most finitely many errors amidst an infinite sequence of observations, when deducing the correct hypothesis from a given hypothesis class. Historically, the only theoretical guarantee has been that if the hypothesis class is countable, inductive inference is possible, as exemplified by Solomonoff induction for learning Turing machines. In this paper, we provide a tight characterization of inductive inference by establishing a novel link to online learning theory. As our main result, we prove that inductive inference is possible if and only if the hypothesis class is a countable union of online learnable classes, potentially of uncountable size, no matter whether the observations are adaptively chosen or i.i.d. sampled. Moreover, the same condition is also sufficient and necessary in the agnostic setting, where any hypothesis class meeting this criterion enjoys an $\tilde{O}(\sqrt{T})$ regret bound for any time step $T$, while others require an arbitrarily slow rate of regret. Our main technical tool is a novel non-uniform online learning framework, which may be of independent interest.
When Is Inductive Inference Possible?
[ "Zhou Lu" ]
NeurIPS.cc/2024/Conference
2312.00170
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=2YSHEBRRol
@inproceedings{ li2024aligning, title={Aligning Individual and Collective Objectives in Multi-Agent Cooperation}, author={Yang Li and Wenhao Zhang and Jianhong Wang and Shao Zhang and Yali Du and Ying Wen and Wei Pan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2YSHEBRRol} }
Among the research topics in multi-agent learning, mixed-motive cooperation is one of the most prominent challenges, primarily due to the mismatch between individual and collective goals. Cutting-edge research focuses on incorporating domain knowledge into rewards and introducing additional mechanisms to incentivize cooperation. However, these approaches often face shortcomings such as reliance on manual design and the absence of theoretical grounding. To close this gap, we model the mixed-motive game as a differentiable game, which eases the illumination of the learning dynamics towards cooperation. More specifically, we introduce a novel optimization method named \textbf{\textit{A}}ltruistic \textbf{\textit{G}}radient \textbf{\textit{A}}djustment (\textbf{\textit{AgA}}) that employs gradient adjustments to progressively align individual and collective objectives. Furthermore, we theoretically prove that AgA effectively attracts gradients to stable fixed points of the collective objective while considering individual interests, and we validate these claims with empirical evidence. We evaluate the effectiveness of our algorithm AgA through benchmark environments for testing mixed-motive collaboration with small-scale agents, such as the two-player public good game and the sequential social dilemma games Cleanup and Harvest, as well as our self-developed large-scale environment in the game StarCraft II.
Aligning Individual and Collective Objectives in Multi-Agent Cooperation
[ "Yang Li", "Wenhao Zhang", "Jianhong Wang", "Shao Zhang", "Yali Du", "Ying Wen", "Wei Pan" ]
NeurIPS.cc/2024/Conference
2402.12416
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2YPdpWzEsF
@inproceedings{ liu2024visual, title={Visual Anchors Are Strong Information Aggregators For Multimodal Large Language Model}, author={Haogeng Liu and Quanzeng You and Xiaotian Han and Yongfei Liu and Huaibo Huang and Ran He and Hongxia Yang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2YPdpWzEsF} }
In the realm of Multimodal Large Language Models (MLLMs), the vision-language connector plays a crucial role in linking pre-trained vision encoders with Large Language Models (LLMs). Despite its importance, the vision-language connector has been relatively underexplored. In this study, we aim to propose a strong vision-language connector that enables MLLMs to simultaneously achieve high accuracy and low computation cost. We first reveal the existence of visual anchors in Vision Transformers and propose a cost-effective search algorithm to progressively extract them. Building on these findings, we introduce the Anchor Former (AcFormer), a novel vision-language connector designed to leverage the rich prior knowledge obtained from these visual anchors during pretraining, guiding the aggregation of information. Through extensive experimentation, we demonstrate that the proposed method significantly reduces computational costs by nearly two-thirds, while simultaneously outperforming baseline methods. This highlights the effectiveness and efficiency of AcFormer.
Visual Anchors Are Strong Information Aggregators For Multimodal Large Language Model
[ "Haogeng Liu", "Quanzeng You", "Xiaotian Han", "Yongfei Liu", "Huaibo Huang", "Ran He", "Hongxia Yang" ]
NeurIPS.cc/2024/Conference
2405.17815
[ "https://github.com/liuhaogeng/anchor-former" ]
https://huggingface.co/papers/2405.17815
1
0
0
7
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=2WQjNXZbhR
@inproceedings{ liu2024dendritic, title={Dendritic Integration Inspired Artificial Neural Networks Capture Data Correlation}, author={Chongming Liu and Jingyang Ma and Songting Li and Douglas Zhou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2WQjNXZbhR} }
Incorporating biological neuronal properties into Artificial Neural Networks (ANNs) to enhance computational capabilities is under active investigation in the field of deep learning. Inspired by recent findings indicating that dendrites adhere to a quadratic integration rule for synaptic inputs, this study explores the computational benefits of quadratic neurons. We theoretically demonstrate that quadratic neurons inherently capture correlation within structured data, a feature that grants them superior generalization abilities over traditional neurons. This is substantiated by few-shot learning experiments. Furthermore, we integrate the quadratic rule into Convolutional Neural Networks (CNNs) using a biologically plausible approach, resulting in innovative architectures—Dendritic integration inspired CNNs (Dit-CNNs). Our Dit-CNNs compete favorably with state-of-the-art models across multiple classification benchmarks, e.g., ImageNet-1K, while retaining the simplicity and efficiency of traditional CNNs. All source code is available at https://github.com/liuchongming1999/Dendritic-integration-inspired-CNN-NeurIPS-2024.
Dendritic Integration Inspired Artificial Neural Networks Capture Data Correlation
[ "Chongming Liu", "Jingyang Ma", "Songting Li", "Douglas Zhou" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2V5LTfhcfd
@inproceedings{ jalaldoust2024partial, title={Partial Transportability for Domain Generalization}, author={Kasra Jalaldoust and Alexis Bellot and Elias Bareinboim}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2V5LTfhcfd} }
A fundamental task in AI is providing performance guarantees for predictions made in unseen domains. In practice, there can be substantial uncertainty about the distribution of new data, and corresponding variability in the performance of existing predictors. Building on the theory of partial identification and transportability, this paper introduces new results for bounding the value of a functional of the target distribution, such as the generalization error of a classifier, given data from source domains and assumptions about the data generating mechanisms, encoded in causal diagrams. Our contribution is to provide the first general estimation technique for transportability problems, adapting existing parameterization schemes such as Neural Causal Models to encode the structural constraints necessary for cross-population inference. We demonstrate the expressiveness and consistency of this procedure and further propose a gradient-based optimization scheme for making scalable inferences in practice. Our results are corroborated with experiments.
Partial Transportability for Domain Generalization
[ "Kasra Jalaldoust", "Alexis Bellot", "Elias Bareinboim" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2UJLv3KPGO
@inproceedings{ xie2024automating, title={Automating Data Annotation under Strategic Human Agents: Risks and Potential Solutions}, author={Tian Xie and Xueru Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2UJLv3KPGO} }
As machine learning (ML) models are increasingly used in social domains to make consequential decisions about humans, they often have the power to reshape data distributions. Humans, as strategic agents, continuously adapt their behaviors in response to the learning system. As populations change dynamically, ML systems may need frequent updates to ensure high performance. However, acquiring high-quality *human-annotated* samples can be highly challenging and even infeasible in social domains. A common practice to address this issue is using the model itself to annotate unlabeled data samples. This paper investigates the long-term impacts of retraining ML models with *model-annotated* samples when the samples incorporate human strategic responses. We first formalize the interactions between strategic agents and the model and then analyze how they evolve under such dynamic interactions. We find that agents are increasingly likely to receive positive decisions as the model gets retrained, whereas the proportion of agents with positive labels may decrease over time. We thus propose a *refined retraining process* to stabilize the dynamics. Last, we examine how algorithmic fairness can be affected by these retraining processes and find that enforcing common fairness constraints at every round may not benefit the disadvantaged group in the long run. Experiments on (semi-)synthetic and real data validate the theoretical findings.
Automating Data Annotation under Strategic Human Agents: Risks and Potential Solutions
[ "Tian Xie", "Xueru Zhang" ]
NeurIPS.cc/2024/Conference
2405.08027
[ "https://github.com/osu-srml/automating-data-annotation-under-strategic-human-agents" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2TktDpGqNM
@inproceedings{ traub2024overcoming, title={Overcoming Common Flaws in the Evaluation of Selective Classification Systems}, author={Jeremias Traub and Till J. Bungert and Carsten T. L{\"u}th and Michael Baumgartner and Klaus Maier-Hein and Lena Maier-hein and Paul F Jaeger}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2TktDpGqNM} }
Selective Classification, wherein models can reject low-confidence predictions, promises reliable translation of machine-learning-based classification systems to real-world scenarios such as clinical diagnostics. While current evaluation of these systems typically assumes fixed working points based on pre-defined rejection thresholds, methodological progress requires benchmarking the general performance of systems akin to the $\mathrm{AUROC}$ in standard classification. In this work, we define 5 requirements for multi-threshold metrics in selective classification regarding task alignment, interpretability, and flexibility, and show how current approaches fail to meet them. We propose the Area under the Generalized Risk Coverage curve ($\mathrm{AUGRC}$), which meets all requirements and can be directly interpreted as the average risk of undetected failures. We empirically demonstrate the relevance of $\mathrm{AUGRC}$ on a comprehensive benchmark spanning 6 data sets and 13 confidence scoring functions. We find that the proposed metric substantially changes metric rankings on 5 out of the 6 data sets.
Overcoming Common Flaws in the Evaluation of Selective Classification Systems
[ "Jeremias Traub", "Till J. Bungert", "Carsten T. Lüth", "Michael Baumgartner", "Klaus Maier-Hein", "Lena Maier-hein", "Paul F Jaeger" ]
NeurIPS.cc/2024/Conference
2407.01032
[ "https://github.com/iml-dkfz/fd-shifts" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=2TXDHUqyrQ
@inproceedings{ dong2024diffuserlite, title={DiffuserLite: Towards Real-time Diffusion Planning}, author={Zibin Dong and Jianye HAO and Yifu Yuan and Fei Ni and Yitian Wang and Pengyi Li and YAN ZHENG}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2TXDHUqyrQ} }
Diffusion planning has been recognized as an effective decision-making paradigm in various domains. The capability of generating high-quality long-horizon trajectories makes it a promising research direction. However, existing diffusion planning methods suffer from low decision-making frequencies due to the expensive iterative sampling cost. To alleviate this, we introduce DiffuserLite, a super fast and lightweight diffusion planning framework, which employs a planning refinement process (PRP) to generate coarse-to-fine-grained trajectories, significantly reducing the modeling of redundant information and leading to notable increases in decision-making frequency. Our experimental results demonstrate that DiffuserLite achieves a decision-making frequency of $122.2$Hz ($112.7$x faster than predominant frameworks) and reaches state-of-the-art performance on D4RL, Robomimic, and FinRL benchmarks. In addition, DiffuserLite can also serve as a flexible plugin to increase the decision-making frequency of other diffusion planning algorithms, providing a structural design reference for future works. More details and visualizations are available at https://diffuserlite.github.io/.
DiffuserLite: Towards Real-time Diffusion Planning
[ "Zibin Dong", "Jianye HAO", "Yifu Yuan", "Fei Ni", "Yitian Wang", "Pengyi Li", "YAN ZHENG" ]
NeurIPS.cc/2024/Conference
2401.15443
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2RS0fL7Eet
@inproceedings{ chen2024stochastic, title={Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data}, author={Xuxing Chen and Abhishek Roy and Yifan Hu and Krishna Balasubramanian}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2RS0fL7Eet} }
We develop and analyze algorithms for instrumental variable regression by viewing the problem as a conditional stochastic optimization problem. In the context of least-squares instrumental variable regression, our algorithms neither require matrix inversions nor mini-batches, thereby providing a fully online approach for performing instrumental variable regression with streaming data. When the true model is linear, we derive rates of convergence in expectation that are of order $\mathcal{O}(\log T/T)$ and $\mathcal{O}(1/T^{1-\epsilon})$ for any $\epsilon>0$, under the availability of two-sample and one-sample oracles, respectively. Importantly, under the availability of the two-sample oracle, the aforementioned rate is actually agnostic to the relationship between the confounder and the instrumental variable, demonstrating the flexibility of the proposed approach in alleviating the need for explicit model assumptions required in recent works based on reformulating the problem as min-max optimization problems. Experimental validation is provided to demonstrate the advantages of the proposed algorithms over classical approaches like the 2SLS method.
Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data
[ "Xuxing Chen", "Abhishek Roy", "Yifan Hu", "Krishna Balasubramanian" ]
NeurIPS.cc/2024/Conference
2405.19463
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2QvCOFw058
@inproceedings{ jia2024globally, title={Globally Q-linear Gauss-Newton Method for Overparameterized Non-convex Matrix Sensing}, author={Xixi Jia and Fangchen FENG and Deyu Meng and Defeng Sun}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2QvCOFw058} }
This paper focuses on the optimization of overparameterized, non-convex low-rank matrix sensing (LRMS)—an essential component in contemporary statistics and machine learning. Recent years have witnessed significant breakthroughs in first-order methods, such as gradient descent, for tackling this non-convex optimization problem. However, the presence of numerous saddle points often prolongs the time required for gradient descent to overcome these obstacles. Moreover, overparameterization can markedly decelerate gradient descent methods, transitioning its convergence rate from linear to sub-linear. In this paper, we introduce an approximated Gauss-Newton (AGN) method for tackling the non-convex LRMS problem. Notably, AGN incurs a computational cost comparable to gradient descent per iteration but converges much faster without being slowed down by saddle points. We prove that, despite the non-convexity of the objective function, AGN achieves Q-linear convergence from random initialization to the global optimal solution. The global Q-linear convergence of AGN represents a substantial enhancement over the convergence of the existing methods for the overparameterized non-convex LRMS. The code for this paper is available at \url{https://github.com/hsijiaxidian/AGN}.
Globally Q-linear Gauss-Newton Method for Overparameterized Non-convex Matrix Sensing
[ "Xixi Jia", "Fangchen FENG", "Deyu Meng", "Defeng Sun" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2NfBBpbN9x
@inproceedings{ naiman2024utilizing, title={Utilizing Image Transforms and Diffusion Models for Generative Modeling of Short and Long Time Series}, author={Ilan Naiman and Nimrod Berman and Itai Pemper and Idan Arbiv and Gal Fadlon and Omri Azencot}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2NfBBpbN9x} }
Lately, there has been a surge in interest surrounding generative modeling of time series data. Most existing approaches are designed either to process short sequences or to handle long-range sequences. This dichotomy can be attributed to gradient issues with recurrent networks, computational costs associated with transformers, and limited expressiveness of state space models. Towards a unified generative model for varying-length time series, we propose in this work to transform sequences into images. By employing invertible transforms such as the delay embedding and the short-time Fourier transform, we unlock three main advantages: i) We can exploit advanced diffusion vision models; ii) We can remarkably process short- and long-range inputs within the same framework; and iii) We can harness recent and established tools proposed in the time-series-to-image literature. We validate the effectiveness of our method through a comprehensive evaluation across multiple tasks, including unconditional generation, interpolation, and extrapolation. We show that our approach consistently achieves state-of-the-art results against strong baselines. In the unconditional generation tasks, we show remarkable mean improvements of $58.17$% over previous diffusion models in the short discriminative score and $132.61$% in the (ultra-)long classification scores. Code is at https://github.com/azencot-group/ImagenTime.
Utilizing Image Transforms and Diffusion Models for Generative Modeling of Short and Long Time Series
[ "Ilan Naiman", "Nimrod Berman", "Itai Pemper", "Idan Arbiv", "Gal Fadlon", "Omri Azencot" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2NKumsITFw
@inproceedings{ guo2024learning, title={Learning from Noisy Labels via Conditional Distributionally Robust Optimization}, author={Hui Guo and Grace Yi and Boyu Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2NKumsITFw} }
While crowdsourcing has emerged as a practical solution for labeling large datasets, it presents a significant challenge in learning accurate models due to noisy labels from annotators with varying levels of expertise. Existing methods typically estimate the true label posterior, conditioned on the instance and noisy annotations, to infer true labels or adjust loss functions. These estimates, however, often overlook potential misspecification in the true label posterior, which can degrade model performance, especially in high-noise scenarios. To address this issue, we investigate learning from noisy annotations with an estimated true label posterior through the framework of conditional distributionally robust optimization (CDRO). We propose formulating the problem as minimizing the worst-case risk within a distance-based ambiguity set centered around a reference distribution. By examining the strong duality of the formulation, we derive upper bounds for the worst-case risk and develop an analytical solution for the dual robust risk for each data point. This leads to a novel robust pseudo-labeling algorithm that leverages the likelihood ratio test to construct a pseudo-empirical distribution, providing a robust reference probability distribution in CDRO. Moreover, to devise an efficient algorithm for CDRO, we derive a closed-form expression for the empirical robust risk and the optimal Lagrange multiplier of the dual problem, facilitating a principled balance between robustness and model fitting. Our experimental results on both synthetic and real-world datasets demonstrate the superiority of our method.
Learning from Noisy Labels via Conditional Distributionally Robust Optimization
[ "Hui Guo", "Grace Yi", "Boyu Wang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2LuSHTFWzK
@inproceedings{ laber2024on, title={On the cohesion and separability of average-link for hierarchical agglomerative clustering}, author={Eduardo Sany Laber and Miguel A. Batista}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2LuSHTFWzK} }
Average-link is widely recognized as one of the most popular and effective methods for building hierarchical agglomerative clustering. The available theoretical analyses show that this method has a much better approximation than other popular heuristics, such as single-linkage and complete-linkage, regarding variants of Dasgupta's cost function [STOC 2016]. However, these analyses do not separate average-link from a random hierarchy and they are not appealing for metric spaces since every hierarchical clustering has a $1/2$ approximation with regard to the variant of Dasgupta's function that is employed for dissimilarity measures [Moseley and Yang 2020]. In this paper, we present a comprehensive study of the performance of average-link in metric spaces, regarding several natural criteria that capture separability and cohesion, and are more interpretable than Dasgupta's cost function and its variants. We also present experimental results with real datasets that, together with our theoretical analyses, suggest that average-link is a better choice than other related methods when both cohesion and separability are important goals.
On the cohesion and separability of average-link for hierarchical agglomerative clustering
[ "Eduardo Sany Laber", "Miguel A. Batista" ]
NeurIPS.cc/2024/Conference
2411.05097
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2LctgfN6Ty
@inproceedings{ melnyk2024distributional, title={Distributional Preference Alignment of {LLM}s via Optimal Transport}, author={Igor Melnyk and Youssef Mroueh and Brian Belgodere and Mattia Rigotti and Apoorva Nitsure and Mikhail Yurochkin and Kristjan Greenewald and Jiri Navratil and Jarret Ross}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2LctgfN6Ty} }
Current LLM alignment techniques use pairwise human preferences at a sample level, and as such, they do not imply an alignment on the distributional level. We propose in this paper Alignment via Optimal Transport (AOT), a novel method for distributional preference alignment of LLMs. AOT aligns LLMs on unpaired preference data by making the reward distribution of the positive samples stochastically dominant in the first order on the distribution of negative samples. We introduce a convex relaxation of this first-order stochastic dominance and cast it as an optimal transport problem with a smooth and convex cost. Thanks to the one-dimensional nature of the resulting optimal transport problem and the convexity of the cost, it has a closed-form solution via sorting on empirical measures. We fine-tune LLMs with this AOT objective, which enables alignment by penalizing the violation of the stochastic dominance of the reward distribution of the positive samples on the reward distribution of the negative samples. We analyze the sample complexity of AOT by considering the dual of the OT problem and show that it converges at the parametric rate. Empirically, we show on a diverse set of alignment datasets and LLMs that AOT leads to state-of-the-art models in the 7B family of models when evaluated with Open LLM Benchmarks and AlpacaEval. Code for $\mathsf{AOT}$ is available in the Hugging Face TRL library \url{https://ibm.biz/AOT_TRL}.
Distributional Preference Alignment of LLMs via Optimal Transport
[ "Igor Melnyk", "Youssef Mroueh", "Brian Belgodere", "Mattia Rigotti", "Apoorva Nitsure", "Mikhail Yurochkin", "Kristjan Greenewald", "Jiri Navratil", "Jarret Ross" ]
NeurIPS.cc/2024/Conference
2406.05882
[ "" ]
https://huggingface.co/papers/2406.05882
0
0
0
9
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=2LRZhbTDtA
@inproceedings{ zhang2024not, title={Not Just Object, But State: Compositional Incremental Learning without Forgetting}, author={Yanyi Zhang and Binglin Qiu and Qi Jia and Yu Liu and Ran He}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2LRZhbTDtA} }
Most incremental learners excessively prioritize object classes while neglecting various kinds of states (e.g. color and material) attached to the objects. As a result, they are limited in the ability to model state-object compositionality accurately. To remedy this limitation, we propose a novel task called Compositional Incremental Learning (composition-IL), which enables the model to recognize a variety of state-object compositions in an incremental learning fashion. Due to the lack of suitable datasets, we re-organize two existing datasets and make them tailored for composition-IL. Then, we propose a prompt-based Composition Incremental Learner (CompILer) to overcome the ambiguous composition boundary. Specifically, we exploit multi-pool prompt learning, and ensure the inter-pool prompt discrepancy and intra-pool prompt diversity. Besides, we devise object-injected state prompting which injects object prompts to guide the selection of state prompts. Furthermore, we fuse the selected prompts by a generalized-mean strategy, to eliminate irrelevant information learned in the prompts. Extensive experiments on two datasets exhibit state-of-the-art performance achieved by CompILer. Code and datasets are available at: https://github.com/Yanyi-Zhang/CompILer.
Not Just Object, But State: Compositional Incremental Learning without Forgetting
[ "Yanyi Zhang", "Binglin Qiu", "Qi Jia", "Yu Liu", "Ran He" ]
NeurIPS.cc/2024/Conference
2411.01739
[ "https://github.com/Yanyi-Zhang/CompILer" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2KuZHYykkq
@inproceedings{ luo2024minisequence, title={Mini-Sequence Transformers: Optimizing Intermediate Memory for Long Sequences Training}, author={Cheng Luo and Jiawei Zhao and Zhuoming Chen and Beidi Chen and Anima Anandkumar}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2KuZHYykkq} }
We introduce Mini-Sequence Transformer (MsT), a simple and effective methodology for highly efficient and accurate LLM training with extremely long sequences. MsT partitions input sequences and iteratively processes mini-sequences to reduce intermediate memory usage. Integrated with activation recomputation, it enables significant memory savings in both forward and backward passes. In experiments with the Llama3-8B model, with MsT, we measure no degradation in throughput or convergence even with 12x longer sequences than standard implementations. MsT is fully general, implementation-agnostic, and requires minimal code changes to integrate with existing LLM training frameworks. Integrated with the huggingface library, MsT successfully extends the maximum context length of Qwen, Mistral, and Gemma-2 by 12-24x.
Mini-Sequence Transformers: Optimizing Intermediate Memory for Long Sequences Training
[ "Cheng Luo", "Jiawei Zhao", "Zhuoming Chen", "Beidi Chen", "Anima Anandkumar" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2Inwtjvyx8
@inproceedings{ wei2024revisiting, title={Revisiting Adversarial Patches for Designing Camera-Agnostic Attacks against Person Detection}, author={Hui Wei and Zhixiang Wang and Kewei Zhang and Jiaqi Hou and Yuanwei Liu and Hao Tang and Zheng Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=2Inwtjvyx8} }
Physical adversarial attacks can deceive deep neural networks (DNNs), leading to erroneous predictions in real-world scenarios. To uncover potential security risks, attacking the safety-critical task of person detection has garnered significant attention. However, we observe that existing attack methods overlook the pivotal role of the camera, which captures real-world scenes and converts them into digital images, in the physical adversarial attack workflow. This oversight leads to instability and challenges in reproducing these attacks. In this work, we revisit patch-based attacks against person detectors and introduce a camera-agnostic physical adversarial attack to mitigate this limitation. Specifically, we construct a differentiable camera Image Signal Processing (ISP) proxy network to compensate for the physical-to-digital transition gap. Furthermore, the camera ISP proxy network serves as a defense module, forming an adversarial optimization framework with the attack module. The attack module optimizes adversarial patches to maximize effectiveness, while the defense module optimizes the conditional parameters of the camera ISP proxy network to minimize attack effectiveness. These modules engage in an adversarial game, enhancing cross-camera stability. Experimental results demonstrate that our proposed Camera-Agnostic Patch (CAP) attack effectively conceals persons from detectors across various imaging hardware, including two distinct cameras and four smartphones.
Revisiting Adversarial Patches for Designing Camera-Agnostic Attacks against Person Detection
[ "Hui Wei", "Zhixiang Wang", "Kewei Zhang", "Jiaqi Hou", "Yuanwei Liu", "Hao Tang", "Zheng Wang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster