| column | dtype | min / classes | max |
|---|---|---|---|
| bibtex_url | null | | |
| proceedings | stringlengths | 42 | 42 |
| bibtext | stringlengths | 197 | 848 |
| abstract | stringlengths | 303 | 3.45k |
| title | stringlengths | 10 | 159 |
| authors | sequencelengths | 1 | 34 |
| id | stringclasses | 44 values | |
| arxiv_id | stringlengths | 0 | 10 |
| GitHub | sequencelengths | 1 | 1 |
| paper_page | stringclasses | 899 values | |
| n_linked_authors | int64 | -1 | 13 |
| upvotes | int64 | -1 | 109 |
| num_comments | int64 | -1 | 13 |
| n_authors | int64 | -1 | 92 |
| Models | sequencelengths | 0 | 100 |
| Datasets | sequencelengths | 0 | 19 |
| Spaces | sequencelengths | 0 | 100 |
| old_Models | sequencelengths | 0 | 100 |
| old_Datasets | sequencelengths | 0 | 19 |
| old_Spaces | sequencelengths | 0 | 100 |
| paper_page_exists_pre_conf | int64 | 0 | 1 |
| type | stringclasses | 2 values | |
null
https://openreview.net/forum?id=7lMN6xoBjb
@inproceedings{ li2024improving, title={Improving Visual Prompt Tuning by Gaussian Neighborhood Minimization for Long-Tailed Visual Recognition}, author={Mengke Li and Ye Liu and Yang Lu and Yiqun Zhang and Yiu-ming Cheung and Hui Huang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7lMN6xoBjb} }
Long-tailed visual recognition has received increasing attention recently. Although fine-tuning techniques represented by visual prompt tuning (VPT) achieve substantial performance improvements by leveraging pre-trained knowledge, models still exhibit unsatisfactory generalization performance on tail classes. To address this issue, we propose a novel optimization strategy for VPT, called Gaussian neighborhood minimization prompt tuning (GNM-PT), for the long-tail learning problem. We introduce a novel Gaussian neighborhood loss, which provides a tight upper bound on the loss function of the data distribution, facilitating a flattened loss landscape that correlates with improved model generalization. Specifically, GNM-PT seeks the gradient descent direction within a random parameter neighborhood, independent of input samples, during each gradient update. Ultimately, GNM-PT enhances generalization across all classes while simultaneously reducing computational overhead. The proposed GNM-PT achieves state-of-the-art classification accuracies of 90.3%, 76.5%, and 50.1% on the benchmark datasets CIFAR100-LT (IR 100), iNaturalist 2018, and Places-LT, respectively. The source code is available at https://github.com/Keke921/GNM-PT.
Improving Visual Prompt Tuning by Gaussian Neighborhood Minimization for Long-Tailed Visual Recognition
[ "Mengke Li", "Ye Liu", "Yang Lu", "Yiqun Zhang", "Yiu-ming Cheung", "Hui Huang" ]
NeurIPS.cc/2024/Conference
2410.21042
[ "https://github.com/keke921/gnm-pt" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
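A minimal sketch of the Gaussian neighborhood minimization update described in the GNM-PT abstract above, assuming a PyTorch setting; `gnm_step`, `closure`, and the neighborhood radius `rho` are illustrative names, not taken from the paper's released code.
```python
# Hedged sketch of one GNM update: evaluate the gradient at a randomly
# perturbed copy of the parameters (the perturbation is independent of the
# input batch), then apply that gradient to the original parameters.
import torch

def gnm_step(params, closure, lr=0.01, rho=0.05):
    """params: iterable of tensors with requires_grad=True.
    closure(): recomputes the training loss on the current batch."""
    params = list(params)
    with torch.no_grad():
        noise = [rho * torch.randn_like(p) for p in params]
        for p, e in zip(params, noise):
            p.add_(e)                 # jump to a random Gaussian neighbor
    loss = closure()                  # forward pass at the perturbed point
    loss.backward()                   # gradient evaluated at the neighbor
    with torch.no_grad():
        for p, e in zip(params, noise):
            p.sub_(e)                 # restore the original parameters
            p.sub_(lr * p.grad)       # descend along the neighbor's gradient
            p.grad = None
    return loss
```
Unlike sharpness-aware methods, the perturbation here requires no extra gradient computation to find, which is consistent with the abstract's claim of reduced computational overhead.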
null
https://openreview.net/forum?id=7j6xgGj5lF
@inproceedings{ xia2024initializing, title={Initializing Variable-sized Vision Transformers from Learngene with Learnable Transformation}, author={Shiyu Xia and Yuankun Zu and Xu Yang and Xin Geng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7j6xgGj5lF} }
In practical scenarios, it is necessary to build variable-sized models to accommodate diverse resource constraints, where weight initialization serves as a crucial step preceding training. The recently introduced Learngene framework first learns one compact module, termed the learngene, from a large well-trained model, and then transforms the learngene to initialize variable-sized models. However, existing Learngene methods provide limited guidance on transforming the learngene: transformation mechanisms are manually designed and generally lack a learnable component. Moreover, these methods only consider transforming the learngene along the depth dimension, thus constraining its flexibility. Motivated by these concerns, we propose a novel and effective Learngene approach termed LeTs (Learnable Transformation), where we transform the learngene module along both the width and depth dimensions with a set of learnable matrices for flexible variable-sized model initialization. Specifically, we construct an auxiliary model comprising the compact learngene module and learnable transformation matrices, enabling both components to be trained. To meet the varying size requirements of target models, we select specific parameters from the well-trained transformation matrices to adaptively transform the learngene, guided by strategies such as continuous selection and magnitude-wise selection. Extensive experiments on ImageNet-1K demonstrate that Des-Nets initialized via LeTs outperform counterparts trained from scratch for 100 epochs after only 1 epoch of tuning. When transferring to downstream image classification tasks, LeTs achieves better results, outperforming from-scratch training after about 10 epochs within a 300-epoch training schedule.
Initializing Variable-sized Vision Transformers from Learngene with Learnable Transformation
[ "Shiyu Xia", "Yuankun Zu", "Xu Yang", "Xin Geng" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7hy5fy2OC6
@inproceedings{ zhao2024invisible, title={Invisible Image Watermarks Are Provably Removable Using Generative {AI}}, author={Xuandong Zhao and Kexun Zhang and Zihao Su and Saastha Vasan and Ilya Grishchenko and Christopher Kruegel and Giovanni Vigna and Yu-Xiang Wang and Lei Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7hy5fy2OC6} }
Invisible watermarks safeguard images' copyrights by embedding hidden messages only detectable by owners. They also prevent people from misusing images, especially those generated by AI models. We propose a family of regeneration attacks to remove these invisible watermarks. The proposed attack method first adds random noise to an image to destroy the watermark and then reconstructs the image. This approach is flexible and can be instantiated with many existing image-denoising algorithms and pre-trained generative models such as diffusion models. Through formal proofs and extensive empirical evaluations, we demonstrate that pixel-level invisible watermarks are vulnerable to this regeneration attack. Our results reveal that, across four different pixel-level watermarking schemes, the proposed method consistently achieves superior performance compared to existing attack techniques, with lower detection rates and higher image quality. However, watermarks that keep the image semantically similar can be an alternative defense against our attacks. Our finding underscores the need for a shift in research/industry emphasis from invisible watermarks to semantic-preserving watermarks. Code is available at https://github.com/XuandongZhao/WatermarkAttacker
Invisible Image Watermarks Are Provably Removable Using Generative AI
[ "Xuandong Zhao", "Kexun Zhang", "Zihao Su", "Saastha Vasan", "Ilya Grishchenko", "Christopher Kruegel", "Giovanni Vigna", "Yu-Xiang Wang", "Lei Li" ]
NeurIPS.cc/2024/Conference
2306.01953
[ "https://github.com/xuandongzhao/watermarkattacker" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
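A hedged sketch of the regeneration attack family described in the abstract above: add random noise to destroy the pixel-level watermark, then reconstruct the image. The fallback Gaussian filter is a stand-in assumption; the paper instantiates the reconstruction step with image-denoising algorithms and pre-trained diffusion models.
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regeneration_attack(img, sigma=0.1, denoise=None):
    """img: float array in [0, 1]. Noise-then-reconstruct round trip that
    removes pixel-level watermarks while preserving the image content."""
    noisy = np.clip(img + sigma * np.random.randn(*img.shape), 0.0, 1.0)
    if denoise is None:
        return gaussian_filter(noisy, sigma=1.0)  # trivial stand-in denoiser
    return denoise(noisy)  # e.g. a pre-trained diffusion model's denoising pass
```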
null
https://openreview.net/forum?id=7gf6oGdKPU
@inproceedings{ noh2024retrievalretro, title={Retrieval-Retro: Retrieval-based Inorganic Retrosynthesis with Expert Knowledge}, author={Heewoong Noh and Namkyeong Lee and Gyoung S. Na and Chanyoung Park}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7gf6oGdKPU} }
While inorganic retrosynthesis planning is essential in the field of chemical science, the application of machine learning in this area has been notably less explored compared to organic retrosynthesis planning. In this paper, we propose Retrieval-Retro for inorganic retrosynthesis planning, which implicitly extracts the precursor information of reference materials that are retrieved from a knowledge base built on domain expertise in the field. Specifically, instead of directly employing the precursor information of reference materials, we propose implicitly extracting it with various attention layers, which enables the model to learn novel synthesis recipes more effectively. Moreover, during retrieval, we consider the thermodynamic relationship between the target material and precursors, which is essential domain expertise for identifying the most probable precursor set among various options. Extensive experiments demonstrate the superiority of Retrieval-Retro in retrosynthesis planning, especially in discovering novel synthesis recipes, which is crucial for materials discovery. The source code for Retrieval-Retro is available at https://github.com/HeewoongNoh/Retrieval-Retro.
Retrieval-Retro: Retrieval-based Inorganic Retrosynthesis with Expert Knowledge
[ "Heewoong Noh", "Namkyeong Lee", "Gyoung S. Na", "Chanyoung Park" ]
NeurIPS.cc/2024/Conference
2410.21341
[ "https://github.com/heewoongnoh/retrieval-retro" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7flSQgZ4RT
@inproceedings{ diwan2024navigable, title={Navigable Graphs for High-Dimensional Nearest Neighbor Search: Constructions and Limits}, author={Haya Diwan and Jinrui Gou and Cameron N Musco and Christopher Musco and Torsten Suel}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7flSQgZ4RT} }
There has been significant recent interest in graph-based nearest neighbor search methods, many of which are centered on the construction of (approximately) "navigable" graphs over high-dimensional point sets. A graph is navigable if we can successfully move from any starting node to any target node using a greedy routing strategy where we always move to the neighbor that is closest to the destination according to the given distance function. The complete graph is obviously navigable for any point set, but the important question for applications is whether sparser graphs can be constructed. While this question is fairly well understood in low dimensions, we establish some of the first upper and lower bounds for high-dimensional point sets. First, we give a simple and efficient way to construct a navigable graph with average degree $O(\sqrt{n \log n })$ for any set of $n$ points, in any dimension, for any distance function. We complement this result with a nearly matching lower bound: even under the Euclidean metric in $O(\log n)$ dimensions, a random point set has no navigable graph with average degree $O(n^{\alpha})$ for any $\alpha < 1/2$. Our lower bound relies on sharp anti-concentration bounds for binomial random variables, which we use to show that the near-neighborhoods of a set of random points do not overlap significantly, forcing any navigable graph to have many edges.
Navigable Graphs for High-Dimensional Nearest Neighbor Search: Constructions and Limits
[ "Haya Diwan", "Jinrui Gou", "Cameron N Musco", "Christopher Musco", "Torsten Suel" ]
NeurIPS.cc/2024/Conference
2405.18680
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
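The greedy routing rule that defines navigability in the abstract above, sketched under the assumption that `graph` maps each node to its neighbor list and `points` holds the point coordinates:
```python
import numpy as np

def greedy_route(graph, points, start, target):
    """Walk from `start` toward `target`, always moving to the neighbor
    closest to the target; returns None if greedy routing gets stuck."""
    cur = start
    while cur != target:
        nxt = min(graph[cur],
                  key=lambda v: np.linalg.norm(points[v] - points[target]))
        if (np.linalg.norm(points[nxt] - points[target])
                >= np.linalg.norm(points[cur] - points[target])):
            return None  # no neighbor is strictly closer: not navigable here
        cur = nxt
    return cur
```
A graph is navigable for the point set exactly when this routine succeeds for every (start, target) pair.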
null
https://openreview.net/forum?id=7fScrgJ3An
@inproceedings{ wang2024distillnerf, title={DistillNe{RF}: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features}, author={Letian Wang and Seung Wook Kim and Jiawei Yang and Cunjun Yu and Boris Ivanovic and Steven L. Waslander and Yue Wang and Sanja Fidler and Marco Pavone and Peter Karkus}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7fScrgJ3An} }
We propose DistillNeRF, a self-supervised learning framework addressing the challenge of understanding 3D environments from limited 2D observations in outdoor autonomous driving scenes. Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs with limited view overlap, and is trained self-supervised with differentiable rendering to reconstruct RGB, depth, or feature images. Our first insight is to exploit per-scene optimized Neural Radiance Fields (NeRFs) by generating dense depth and virtual camera targets from them, which helps our model to learn enhanced 3D geometry from sparse non-overlapping image inputs. Second, to learn a semantically rich 3D representation, we propose distilling features from pre-trained 2D foundation models, such as CLIP or DINOv2, thereby enabling various downstream tasks without the need for costly 3D human annotations. To leverage these two insights, we introduce a novel model architecture with a two-stage lift-splat-shoot encoder and a parameterized sparse hierarchical voxel representation. Experimental results on the NuScenes and Waymo NOTR datasets demonstrate that DistillNeRF significantly outperforms existing comparable state-of-the-art self-supervised methods for scene reconstruction, novel view synthesis, and depth estimation; and it allows for competitive zero-shot 3D semantic occupancy prediction, as well as open-world scene understanding through distilled foundation model features. Demos and code will be available at https://distillnerf.github.io/.
DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features
[ "Letian Wang", "Seung Wook Kim", "Jiawei Yang", "Cunjun Yu", "Boris Ivanovic", "Steven L. Waslander", "Yue Wang", "Sanja Fidler", "Marco Pavone", "Peter Karkus" ]
NeurIPS.cc/2024/Conference
2406.12095
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7eIaqYrpcs
@inproceedings{ wang2024vidud, title={Vidu4D: Single Generated Video to High-Fidelity 4D Reconstruction with Dynamic Gaussian Surfels}, author={Yikai Wang and Xinzhou Wang and Zilong Chen and Zhengyi Wang and Fuchun Sun and Jun Zhu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7eIaqYrpcs} }
Video generative models are receiving particular attention given their ability to generate realistic and imaginative frames. Moreover, these models are also observed to exhibit strong 3D consistency, significantly enhancing their potential to act as world simulators. In this work, we present Vidu4D, a novel reconstruction model that excels in accurately reconstructing 4D (i.e., sequential 3D) representations from single generated videos, addressing challenges associated with non-rigidity and frame distortion. This capability is pivotal for creating high-fidelity virtual contents that maintain both spatial and temporal coherence. At the core of Vidu4D is our proposed Dynamic Gaussian Surfels (DGS) technique. DGS optimizes time-varying warping functions to transform Gaussian surfels (surface elements) from a static state to a dynamically warped state. This transformation enables a precise depiction of motion and deformation over time. To preserve the structural integrity of surface-aligned Gaussian surfels, we design the warped-state geometric regularization based on continuous warping fields for estimating normals. Additionally, we learn refinements on rotation and scaling parameters of Gaussian surfels, which greatly alleviates texture flickering during the warping process and enhances the capture of fine-grained appearance details. Vidu4D also contains a novel initialization stage that provides a proper start for the warping fields in DGS. Equipping Vidu4D with an existing video generative model, the overall framework demonstrates high-fidelity text-to-4D generation in both appearance and geometry.
Vidu4D: Single Generated Video to High-Fidelity 4D Reconstruction with Dynamic Gaussian Surfels
[ "Yikai Wang", "Xinzhou Wang", "Zilong Chen", "Zhengyi Wang", "Fuchun Sun", "Jun Zhu" ]
NeurIPS.cc/2024/Conference
2405.16822
[ "" ]
https://huggingface.co/papers/2405.16822
4
11
3
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=7eFS8aZHAM
@inproceedings{ wang2024dissecting, title={Dissecting the Failure of Invariant Learning on Graphs}, author={Qixun Wang and Yifei Wang and Yisen Wang and Xianghua Ying}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7eFS8aZHAM} }
Enhancing node-level Out-Of-Distribution (OOD) generalization on graphs remains a crucial area. In this paper, we develop a Structural Causal Model (SCM) to theoretically dissect the performance of two prominent invariant learning methods--Invariant Risk Minimization (IRM) and Variance-Risk Extrapolation (VREx)--in node-level OOD settings. Our analysis reveals a critical limitation: these methods may struggle to identify invariant features due to the complexities introduced by the message-passing mechanism, which can obscure causal features within a range of neighboring samples. To address this, we propose Cross-environment Intra-class Alignment (CIA), which explicitly eliminates spurious features by aligning representations within the same class, bypassing the need for explicit knowledge of underlying causal patterns. To adapt CIA to node-level OOD scenarios where environment labels are hard to obtain, we further propose CIA-LRA (Localized Reweighting Alignment) that leverages the distribution of neighboring labels to selectively align node representations, effectively distinguishing and preserving invariant features while removing spurious ones, all without relying on environment labels. We theoretically prove CIA-LRA's effectiveness by deriving an OOD generalization error bound based on PAC-Bayesian analysis. Experiments on graph OOD benchmarks validate the superiority of CIA and CIA-LRA, marking a significant advancement in node-level OOD generalization.
Dissecting the Failure of Invariant Learning on Graphs
[ "Qixun Wang", "Yifei Wang", "Yisen Wang", "Xianghua Ying" ]
NeurIPS.cc/2024/Conference
2411.02847
[ "https://github.com/novaglow646/neurips24-invariant-learning-on-graphs" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7b2DrIBGZz
@inproceedings{ ma2024exploring, title={Exploring the Role of Large Language Models in Prompt Encoding for Diffusion Models}, author={Bingqi Ma and Zhuofan Zong and Guanglu Song and Hongsheng Li and Yu Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7b2DrIBGZz} }
Large language models based on decoder-only transformers have demonstrated superior text understanding capabilities compared to CLIP and T5-series models. However, the paradigm for utilizing current advanced LLMs in text-to-image diffusion models remains to be explored. We observed an unusual phenomenon: directly using a large language model as the prompt encoder significantly degrades the prompt-following ability in image generation. We identified two main obstacles behind this issue. One is the misalignment between the next-token prediction training in LLMs and the requirement for discriminative prompt features in diffusion models. The other is the intrinsic positional bias introduced by the decoder-only architecture. To deal with this issue, we propose a novel framework to fully harness the capabilities of LLMs. Through carefully designed usage guidance, we effectively enhance the text representation capability of the LLM for prompt encoding and eliminate its inherent positional bias. This allows us to flexibly integrate state-of-the-art LLMs into the text-to-image generation model. Furthermore, we also provide an effective manner to fuse multiple LLMs into our framework. Considering the excellent performance and scaling capabilities demonstrated by the transformer architecture, we further design an LLM-Infused Diffusion Transformer (LI-DiT) based on the framework. We conduct extensive experiments to validate LI-DiT across model sizes and data sizes. Benefiting from the inherent ability of the LLMs and our innovative designs, the prompt understanding performance of LI-DiT easily surpasses state-of-the-art open-source models as well as mainstream closed-source commercial models including Stable Diffusion 3, DALL-E 3, and Midjourney V6.
Exploring the Role of Large Language Models in Prompt Encoding for Diffusion Models
[ "Bingqi Ma", "Zhuofan Zong", "Guanglu Song", "Hongsheng Li", "Yu Liu" ]
NeurIPS.cc/2024/Conference
2406.11831
[ "" ]
https://huggingface.co/papers/2406.11831
5
20
1
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=7arAADUK6D
@inproceedings{ huang2024ensemble, title={Ensemble Learning for Heterogeneous Large Language Models with Deep Parallel Collaboration}, author={Yichong Huang and Xiaocheng Feng and Baohang Li and Yang Xiang and Hui Wang and Ting Liu and Bing Qin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7arAADUK6D} }
Large language models (LLMs) exhibit complementary strengths in various tasks, motivating the research of LLM ensembling. However, existing work focuses on training an extra reward model or fusion model to select or combine all candidate answers, posing a great challenge to generalization on unseen data distributions. Besides, prior methods use textual responses as the communication medium, ignoring the valuable information in the internal representations. In this work, we propose a training-free ensemble framework DeePEn, fusing the informative probability distributions yielded by different LLMs at each decoding step. Unfortunately, the vocabulary discrepancy between heterogeneous LLMs makes directly averaging the distributions infeasible due to token misalignment. To address this challenge, DeePEn maps the probability distribution of each model from its own probability space to a universal *relative space* based on relative representation theory, and performs aggregation. Next, we devise a search-based inverse transformation to transform the aggregated result back to the probability space of one of the ensembled LLMs (the main model), in order to determine the next token. We conduct extensive experiments on ensembles of different numbers of LLMs, ensembles of LLMs with different architectures, and ensembles between an LLM and a specialist model. Experimental results show that (i) DeePEn achieves consistent improvements across six benchmarks covering subject examination, reasoning, and knowledge, (ii) a well-performing specialist model can benefit from a less effective LLM through distribution fusion, and (iii) DeePEn has complementary strengths with other ensemble methods such as voting.
Ensemble Learning for Heterogeneous Large Language Models with Deep Parallel Collaboration
[ "Yichong Huang", "Xiaocheng Feng", "Baohang Li", "Yang Xiang", "Hui Wang", "Ting Liu", "Bing Qin" ]
NeurIPS.cc/2024/Conference
2404.12715
[ "https://github.com/OrangeInSouth/DeePEn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
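A hedged sketch of the relative-space mapping DeePEn relies on: each model's next-token distribution is re-expressed through similarities to a shared set of anchor tokens, making distributions over different vocabularies comparable. The exact anchor construction and the search-based inverse transform are abstracted away.
```python
import torch
import torch.nn.functional as F

def to_relative_space(probs, token_emb, anchor_emb):
    """probs: (V,) distribution over one model's vocabulary; token_emb: (V, d)
    that model's token embeddings; anchor_emb: (A, d) anchors shared across models."""
    sims = F.normalize(token_emb, dim=-1) @ F.normalize(anchor_emb, dim=-1).T  # (V, A)
    return probs @ sims  # (A,) representation in the universal relative space

# Aggregation across heterogeneous models is then a (weighted) average of their
# relative representations; a search-based inverse transform maps the result
# back to the main model's probability space to pick the next token.
```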
null
https://openreview.net/forum?id=7aFRgCC8Q7
@inproceedings{ luo2024optimal, title={Optimal Multiclass U-Calibration Error and Beyond}, author={Haipeng Luo and Spandan Senapati and Vatsal Sharan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7aFRgCC8Q7} }
We consider the problem of online multiclass U-calibration, where a forecaster aims to make sequential distributional predictions over $K$ classes with low U-calibration error, that is, low regret with respect to all bounded proper losses simultaneously. Kleinberg et al. (2023) developed an algorithm with U-calibration error $\mathcal{O}(K\sqrt{T})$ after $T$ rounds and raised the open question of what the optimal bound is. We resolve this question by showing that the optimal U-calibration error is $\Theta(\sqrt{KT})$ --- we start with a simple observation that the Follow-the-Perturbed-Leader algorithm of Daskalakis and Syrgkanis (2016) achieves this upper bound, followed by a matching lower bound constructed with a specific proper loss (which, as a side result, also proves the optimality of the algorithm of Daskalakis and Syrgkanis (2016) in the context of online learning against an adversary with finite choices). We also strengthen our results under natural assumptions on the loss functions, including $\Theta(\log T)$ U-calibration error for Lipschitz proper losses, $\mathcal{O}(\log T)$ U-calibration error for a certain class of decomposable proper losses, U-calibration error bounds for proper losses with a low covering number, and others.
Optimal Multiclass U-Calibration Error and Beyond
[ "Haipeng Luo", "Spandan Senapati", "Vatsal Sharan" ]
NeurIPS.cc/2024/Conference
2405.19374
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7aFEqIb1dp
@inproceedings{ zhao2024untrained, title={Untrained Neural Nets for Snapshot Compressive Imaging: Theory and Algorithms}, author={Mengyu Zhao and Xi Chen and Xin Yuan and Shirin Jalali}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7aFEqIb1dp} }
Snapshot compressive imaging (SCI) recovers high-dimensional (3D) data cubes from a single 2D measurement, enabling diverse applications like video and hyperspectral imaging to go beyond standard techniques in terms of acquisition speed and efficiency. In this paper, we focus on SCI recovery algorithms that employ untrained neural networks (UNNs), such as deep image prior (DIP), to model source structure. Such UNN-based methods are appealing as they have the potential of avoiding the computationally intensive retraining required for different source models and different measurement scenarios. We first develop a theoretical framework for characterizing the performance of such UNN-based methods. The theoretical framework, on the one hand, enables us to optimize the parameters of data-modulating masks, and on the other hand, provides a fundamental connection between the number of data frames that can be recovered from a single measurement to the parameters of the untrained NN. We also employ the recently proposed bagged-deep-image-prior (bagged-DIP) idea to develop SCI Bagged Deep Video Prior (SCI-BDVP) algorithms that address the common challenges faced by standard UNN solutions. Our experimental results show that in video SCI our proposed solution achieves state-of-the-art among UNN methods, and in the case of noisy measurements, it even outperforms supervised solutions. Code is publicly available at [https://github.com/Computational-Imaging-RU/SCI-BDVP](https://github.com/Computational-Imaging-RU/SCI-BDVP).
Untrained Neural Nets for Snapshot Compressive Imaging: Theory and Algorithms
[ "Mengyu Zhao", "Xi Chen", "Xin Yuan", "Shirin Jalali" ]
NeurIPS.cc/2024/Conference
2406.03694
[ "https://github.com/Computational-Imaging-RU/SCI-BDVP" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
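A hedged sketch of the untrained-network (DIP-style) recovery that SCI-BDVP builds on, assuming the standard snapshot-SCI forward model in which known masks modulate the frames before they are summed into one 2D measurement; `net` and `z` are an arbitrary untrained generator and its fixed input, both illustrative assumptions.
```python
import torch

def sci_dip_recover(net, z, masks, y, steps=500, lr=1e-3):
    """net(z): (T, H, W) candidate video; masks: (T, H, W) known modulation
    masks; y: (H, W) single snapshot measurement."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        video = net(z)
        y_hat = (masks * video).sum(dim=0)  # snapshot-SCI forward model
        loss = ((y_hat - y) ** 2).mean()    # fit the untrained net to y only
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net(z).detach()
```
No training data is used: the network's architecture alone acts as the source prior, which is why the paper's theory can tie recoverable frame count to the network's parameters.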
null
https://openreview.net/forum?id=7Ye12RLZ4P
@inproceedings{ modi2024asynchronous, title={Asynchronous Perception Machine for Efficient Test Time Training}, author={Rajat Modi and Yogesh S Rawat}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7Ye12RLZ4P} }
In this work, we propose Asynchronous Perception Machine (APM), a computationally-efficient architecture for test-time-training (TTT). APM can process patches of an image one at a time in any order asymmetrically, and still encode semantic-awareness in the net. We demonstrate APM’s ability to recognize out-of-distribution images without dataset-specific pre-training, augmentation or any pretext task. APM offers competitive performance over existing TTT approaches. To perform TTT, APM just distills a test sample’s representation once. APM possesses a unique property: it can learn using just this single representation and starts predicting semantically-aware features. APM’s ability to recover semantic information from a global CLS token validates the insight that CLS tokens encode geometric information of a given scene and can be recovered using appropriate inductive biases. This offers a novel insight with consequences for representational learning. APM demonstrates potential applications beyond test-time-training: APM can scale up to a dataset of 2D images and yield semantic clusterings in a single forward pass. APM also provides the first empirical evidence towards validating Hinton et al.'s GLOM insight, i.e., that the input percept is a field. Therefore, APM helps our community converge towards an implementation which can do both interpolation and perception on shared connectionist hardware. Our codebase has been made available at https://rajatmodi62.github.io/apm_project_page/ -------- **It now appears that some of the ideas in GLOM could be made to work.** https://www.technologyreview.com/2021/04/16/1021871/geoffrey-hinton-glom-godfather-ai-neural-networks/ ``` .-""""""-. .' '. / O O \ | O | \ '------' / '. .' '-....-' A silent man in deep-contemplation. Silent man emerges only sometimes. And he loves all. ```
Asynchronous Perception Machine for Efficient Test Time Training
[ "Rajat Modi", "Yogesh S Rawat" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7YdafFbhxL
@inproceedings{ xu2024provably, title={Provably and Practically Efficient Adversarial Imitation Learning with General Function Approximation}, author={Tian Xu and Zhilong Zhang and Ruishuo Chen and Yihao Sun and Yang Yu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7YdafFbhxL} }
As a prominent category of imitation learning methods, adversarial imitation learning (AIL) has garnered significant practical success powered by neural network approximation. However, existing theoretical studies on AIL are primarily limited to simplified scenarios such as tabular and linear function approximation and involve complex algorithmic designs that hinder practical implementation, highlighting a gap between theory and practice. In this paper, we explore the theoretical underpinnings of online AIL with general function approximation. We introduce a new method called optimization-based AIL (OPT-AIL), which centers on performing online optimization for reward functions and optimism-regularized Bellman error minimization for Q-value functions. Theoretically, we prove that OPT-AIL achieves polynomial expert sample complexity and interaction complexity for learning near-expert policies. To the best of our knowledge, OPT-AIL is the first provably efficient AIL method with general function approximation. Practically, OPT-AIL only requires the approximate optimization of two objectives, thereby facilitating practical implementation. Empirical studies demonstrate that OPT-AIL outperforms previous state-of-the-art deep AIL methods in several challenging tasks.
Provably and Practically Efficient Adversarial Imitation Learning with General Function Approximation
[ "Tian Xu", "Zhilong Zhang", "Ruishuo Chen", "Yihao Sun", "Yang Yu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7XkwzaPMvX
@inproceedings{ li2024utilizing, title={Utilizing Human Behavior Modeling to Manipulate Explanations in {AI}-Assisted Decision Making: The Good, the Bad, and the Scary}, author={Zhuoyan Li and Ming Yin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7XkwzaPMvX} }
Recent advances in AI models have increased the integration of AI-based decision aids into the human decision making process. To fully unlock the potential of AI-assisted decision making, researchers have computationally modeled how humans incorporate AI recommendations into their final decisions, and utilized these models to improve human-AI team performance. Meanwhile, due to the "black-box" nature of AI models, providing AI explanations to human decision makers to help them rely on AI recommendations more appropriately has become a common practice. In this paper, we explore whether we can quantitatively model how humans integrate both AI recommendations and explanations into their decision process, and whether this quantitative understanding of human behavior from the learned model can be utilized to manipulate AI explanations, thereby nudging individuals towards making targeted decisions. Our extensive human experiments across various tasks demonstrate that human behavior can be easily influenced by these manipulated explanations towards targeted outcomes, regardless of the intent being adversarial or benign. Furthermore, individuals often fail to detect any anomalies in these explanations, despite their decisions being affected by them.
Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary
[ "Zhuoyan Li", "Ming Yin" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7X5zu6GIuW
@inproceedings{ kim2024dos, title={Do's and Don'ts: Learning Desirable Skills with Instruction Videos}, author={Hyunseung Kim and Byungkun Lee and Hojoon Lee and Dongyoon Hwang and Donghu Kim and Jaegul Choo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7X5zu6GIuW} }
Unsupervised skill discovery is a learning paradigm that aims to acquire diverse behaviors without explicit rewards. However, it faces challenges in learning complex behaviors and often leads to learning unsafe or undesirable behaviors. For instance, in various continuous control tasks, current unsupervised skill discovery methods succeed in learning basic locomotion like standing but struggle with learning more complex movements such as walking and running. Moreover, they may acquire unsafe behaviors like tripping and rolling or navigate to undesirable locations such as pitfalls or hazardous areas. In response, we present **DoDont** (Do's and Don'ts), an instruction-based skill discovery algorithm composed of two stages. First, in the instruction learning stage, DoDont leverages action-free instruction videos to train an instruction network to distinguish desirable transitions from undesirable ones. Then, in the skill learning stage, the instruction network adjusts the reward function of the skill discovery algorithm to weight the desired behaviors. Specifically, we integrate the instruction network into a distance-maximizing skill discovery algorithm, where the instruction network serves as the distance function. Empirically, with fewer than 8 instruction videos, DoDont effectively learns desirable behaviors and avoids undesirable ones across complex continuous control tasks. Code and videos are available at https://mynsng.github.io/dodont/
Do's and Don'ts: Learning Desirable Skills with Instruction Videos
[ "Hyunseung Kim", "Byungkun Lee", "Hojoon Lee", "Dongyoon Hwang", "Donghu Kim", "Jaegul Choo" ]
NeurIPS.cc/2024/Conference
2406.00324
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
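A hedged sketch of how DoDont's instruction network could reshape the skill-discovery reward, assuming the network outputs a desirability probability for a transition; the multiplicative form mirrors its stated role as the distance function of a distance-maximizing method.
```python
import torch

def dodont_reward(instruction_net, s, s_next, base_reward):
    """base_reward: the underlying intrinsic skill-discovery reward for (s, s_next)."""
    with torch.no_grad():
        w = instruction_net(torch.cat([s, s_next], dim=-1))  # P(transition is a "do")
    return w * base_reward  # up-weight desirable transitions, down-weight "don'ts"
```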
null
https://openreview.net/forum?id=7WvwzuYkUq
@inproceedings{ kassraie2024progressive, title={Progressive Entropic Optimal Transport Solvers}, author={Parnian Kassraie and Aram-Alexandre Pooladian and Michal Klein and James Thornton and Jonathan Niles-Weed and marco cuturi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7WvwzuYkUq} }
Optimal transport (OT) has profoundly impacted machine learning by providing theoretical and computational tools to realign datasets. In this context, given two large point clouds of sizes $n$ and $m$ in $\mathbb{R}^d$, entropic OT (EOT) solvers have emerged as the most reliable tool to either solve the Kantorovich problem and output an $n\times m$ coupling matrix, or to solve the Monge problem and learn a vector-valued push-forward map. While the robustness of EOT couplings/maps makes them a go-to choice in practical applications, EOT solvers remain difficult to tune because of a small but influential set of hyperparameters, notably the omnipresent entropic regularization strength $\varepsilon$. Setting $\varepsilon$ can be difficult, as it simultaneously impacts various performance metrics, such as compute speed, statistical performance, generalization, and bias. In this work, we propose a new class of EOT solvers (ProgOT) that can estimate both plans and transport maps. We take advantage of several opportunities to optimize the computation of EOT solutions by *dividing* mass displacement using a time discretization, borrowing inspiration from dynamic OT formulations, and *conquering* each of these steps using EOT with properly scheduled parameters. We provide experimental evidence demonstrating that ProgOT is a faster and more robust alternative to *standard solvers* when computing couplings at large scales, even outperforming neural network-based approaches. We also prove statistical consistency of our approach for estimating OT maps.
Progressive Entropic Optimal Transport Solvers
[ "Parnian Kassraie", "Aram-Alexandre Pooladian", "Michal Klein", "James Thornton", "Jonathan Niles-Weed", "marco cuturi" ]
NeurIPS.cc/2024/Conference
2406.05061
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7WoOphIZ8u
@inproceedings{ iutzeler2024derivatives, title={Derivatives of Stochastic Gradient Descent in parametric optimization}, author={Franck Iutzeler and Edouard Pauwels and Samuel Vaiter}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7WoOphIZ8u} }
We consider stochastic optimization problems where the objective depends on some parameter, as commonly found in hyperparameter optimization for instance. We investigate the behavior of the derivatives of the iterates of Stochastic Gradient Descent (SGD) with respect to that parameter and show that they are driven by an inexact SGD recursion on a different objective function, perturbed by the convergence of the original SGD. This enables us to establish that the derivatives of SGD converge to the derivative of the solution mapping in terms of mean squared error whenever the objective is strongly convex. Specifically, we demonstrate that with constant step-sizes, these derivatives stabilize within a noise ball centered at the solution derivative, and that with vanishing step-sizes they exhibit $O(\log(k)^2 / k)$ convergence rates. Additionally, we prove exponential convergence in the interpolation regime. Our theoretical findings are illustrated by numerical experiments on synthetic tasks.
Derivatives of Stochastic Gradient Descent in parametric optimization
[ "Franck Iutzeler", "Edouard Pauwels", "Samuel Vaiter" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
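A hedged sketch of the object studied in the abstract above: the derivative of SGD iterates with respect to the outer parameter, obtained here by differentiating through the unrolled updates with autograd (the paper analyzes the induced recursion directly rather than unrolling). `grad_f` is an assumed stochastic-gradient oracle that is differentiable in `theta`.
```python
import torch

def sgd_iterate_derivative(theta, x0, grad_f, steps=100, lr=0.1):
    """theta: tensor with requires_grad=True; grad_f(x, theta, batch) returns a
    stochastic gradient in x that depends smoothly on theta."""
    x = x0.clone()
    for _ in range(steps):
        batch = torch.randn(8)                # stand-in for sampling a data batch
        x = x - lr * grad_f(x, theta, batch)  # keep the graph: x depends on theta
    # Derivative of (the sum of) the final iterate with respect to theta.
    return torch.autograd.grad(x.sum(), theta)[0]
```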
null
https://openreview.net/forum?id=7W0f7lifDk
@inproceedings{ xue2024humandiffusion, title={Human-3Diffusion: Realistic Avatar Creation via Explicit 3D Consistent Diffusion Models}, author={Yuxuan Xue and Xianghui Xie and Riccardo Marin and Gerard Pons-Moll}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7W0f7lifDk} }
Creating realistic avatars from a single RGB image is an attractive yet challenging problem. To deal with challenging loose clothing or occlusion by interaction objects, we leverage a powerful shape prior from 2D diffusion models pretrained on large datasets. Although 2D diffusion models demonstrate strong generalization capability, they cannot provide multi-view shape priors with guaranteed 3D consistency. We propose Human-3Diffusion: Realistic Avatar Creation via Explicit 3D Consistent Diffusion. Our key insight is that 2D multi-view diffusion and 3D reconstruction models provide complementary information for each other. By coupling them in a tight manner, we can fully leverage the potential of both models. We introduce a novel image-conditioned generative 3D Gaussian Splats reconstruction model that leverages the prior from 2D multi-view diffusion models, and provides an explicit 3D representation, which further guides the 2D reverse sampling process to have better 3D consistency. Experiments show that our proposed framework outperforms state-of-the-art methods and enables the creation of realistic avatars from a single RGB image, achieving high fidelity in both geometry and appearance. Extensive ablations also validate the efficacy of our design, (1) multi-view 2D priors conditioning in generative 3D reconstruction and (2) consistency refinement of sampling trajectory via the explicit 3D representation. Our code and models are released at https://yuxuan-xue.com/human-3diffusion/.
Human-3Diffusion: Realistic Avatar Creation via Explicit 3D Consistent Diffusion Models
[ "Yuxuan Xue", "Xianghui Xie", "Riccardo Marin", "Gerard Pons-Moll" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7V62sQ5Jra
@inproceedings{ chatzi2024predictionpowered, title={Prediction-Powered Ranking of Large Language Models}, author={Ivi Chatzi and Eleni Straitouri and Suhas Thejaswi and Manuel Gomez Rodriguez}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7V62sQ5Jra} }
Large language models are often ranked according to their level of alignment with human preferences---a model is better than other models if its outputs are more frequently preferred by humans. One of the popular ways to elicit human preferences utilizes pairwise comparisons between the outputs provided by different models to the same inputs. However, since gathering pairwise comparisons by humans is costly and time-consuming, it has become a common practice to gather pairwise comparisons by a strong large language model---a model strongly aligned with human preferences. Surprisingly, practitioners cannot currently measure the uncertainty that any mismatch between human and model preferences may introduce in the constructed rankings. In this work, we develop a statistical framework to bridge this gap. Given a (small) set of pairwise comparisons by humans and a large set of pairwise comparisons by a model, our framework provides a rank-set---a set of possible ranking positions---for each of the models under comparison. Moreover, it guarantees that, with a probability greater than or equal to a user-specified value, the rank-sets cover the true ranking consistent with the distribution of human pairwise preferences asymptotically. Using pairwise comparisons made by humans in the LMSYS Chatbot Arena platform and pairwise comparisons made by three strong large language models, we empirically demonstrate the effectiveness of our framework and show that the rank-sets constructed using only pairwise comparisons by the strong large language models are often inconsistent with (the distribution of) human pairwise preferences.
Prediction-Powered Ranking of Large Language Models
[ "Ivi Chatzi", "Eleni Straitouri", "Suhas Thejaswi", "Manuel Gomez Rodriguez" ]
NeurIPS.cc/2024/Conference
2402.17826
[ "https://github.com/networks-learning/prediction-powered-ranking" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
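A hedged sketch of turning pairwise win-probability confidence bounds into rank-sets, in the spirit of the framework above; constructing the bounds themselves from few human and many model comparisons is the prediction-powered part and is abstracted away here.
```python
import numpy as np

def rank_sets(lo, hi):
    """lo, hi: (n, n) lower/upper confidence bounds on P(model i beats model j).
    Returns, for each model, the range of ranking positions consistent with them."""
    n = lo.shape[0]
    sets = []
    for i in range(n):
        surely_worse_than = sum(hi[i, j] < 0.5 for j in range(n) if j != i)
        maybe_worse_than = sum(lo[i, j] < 0.5 for j in range(n) if j != i)
        # Best rank: beaten only by models that surely beat i;
        # worst rank: beaten by every model that could possibly beat i.
        sets.append(range(1 + surely_worse_than, 2 + maybe_worse_than))
    return sets
```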
null
https://openreview.net/forum?id=7UyBKTFrtd
@inproceedings{ bhalla2024interpreting, title={Interpreting {CLIP} with Sparse Linear Concept Embeddings (SpLi{CE})}, author={Usha Bhalla and Alex Oesterling and Suraj Srinivas and Flavio Calmon and Himabindu Lakkaraju}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7UyBKTFrtd} }
CLIP embeddings have demonstrated remarkable performance across a wide range of multimodal applications. However, these high-dimensional, dense vector representations are not easily interpretable, limiting our understanding of the rich structure of CLIP and its use in downstream applications that require transparency. In this work, we show that the semantic structure of CLIP's latent space can be leveraged to provide interpretability, allowing for the decomposition of representations into semantic concepts. We formulate this problem as one of sparse recovery and propose a novel method, Sparse Linear Concept Embeddings (SpLiCE), for transforming CLIP representations into sparse linear combinations of human-interpretable concepts. Distinct from previous work, SpLiCE is task-agnostic and can be used, without training, to explain and even replace traditional dense CLIP representations, maintaining high downstream performance while significantly improving their interpretability. We also demonstrate significant use cases of SpLiCE representations, including detecting spurious correlations and model editing. Code is provided at https://github.com/AI4LIFE-GROUP/SpLiCE.
Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE)
[ "Usha Bhalla", "Alex Oesterling", "Suraj Srinivas", "Flavio Calmon", "Himabindu Lakkaraju" ]
NeurIPS.cc/2024/Conference
2402.10376
[ "https://github.com/ai4life-group/splice" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
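A hedged sketch of the sparse-recovery step behind SpLiCE: decompose a CLIP image embedding into a sparse nonnegative combination of concept (text) embeddings. The Lasso solver and `alpha` are illustrative assumptions, not the paper's exact solver.
```python
import numpy as np
from sklearn.linear_model import Lasso

def splice_decompose(image_emb, concept_embs, alpha=0.01, k=10):
    """image_emb: (d,) CLIP embedding; concept_embs: (n_concepts, d) text
    embeddings of a concept vocabulary. Returns sparse weights and top concepts."""
    model = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=10000)
    model.fit(concept_embs.T, image_emb)  # solve image_emb ≈ concept_embs.T @ w
    w = model.coef_
    return w, np.argsort(-w)[:k]          # dominant human-interpretable concepts
```
The nonnegativity and sparsity constraints are what make the recovered weights readable as a short list of concepts rather than another dense vector.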
null
https://openreview.net/forum?id=7UenF4kx4j
@inproceedings{ yu2024smart, title={{SMART}: Towards Pre-trained Missing-Aware Model for Patient Health Status Prediction}, author={Zhihao Yu and Xu Chu and Yujie Jin and Yasha Wang and Junfeng Zhao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7UenF4kx4j} }
Electronic health record (EHR) data has emerged as a valuable resource for analyzing patient health status. However, the prevalence of missing data in EHRs poses significant challenges to existing methods, leading to spurious correlations and suboptimal predictions. While various imputation techniques have been developed to address this issue, they often fixate on difficult-to-interpolate details and may introduce additional noise when making clinical predictions. To tackle this problem, we propose SMART, a Self-Supervised Missing-Aware RepresenTation Learning approach for patient health status prediction, which encodes missing information via missing-aware temporal and variable attentions and learns to impute missing values through a novel self-supervised pre-training approach that reconstructs missing data representations in the latent space rather than in the input space, as is usual. By adopting elaborated attentions and focusing on learning higher-order representations, SMART promotes better generalization and robustness to missing data. We validate the effectiveness of SMART through extensive experiments on six EHR tasks, demonstrating its superiority over state-of-the-art methods.
SMART: Towards Pre-trained Missing-Aware Model for Patient Health Status Prediction
[ "Zhihao Yu", "Xu Chu", "Yujie Jin", "Yasha Wang", "Junfeng Zhao" ]
NeurIPS.cc/2024/Conference
2405.09039
[ "https://github.com/yzhHoward/SMART" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7U5MwUS3Rw
@inproceedings{ wang2024towards, title={Towards Harmless Rawlsian Fairness Regardless of Demographic Prior}, author={Xuanqian Wang and Jing Li and Ivor Tsang and Yew-Soon Ong}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7U5MwUS3Rw} }
Due to privacy and security concerns, recent advancements in group fairness advocate for model training regardless of demographic information. However, most methods still require prior knowledge of demographics. In this study, we explore the potential for achieving fairness without compromising utility when no prior demographics are provided to the training set, namely _harmless Rawlsian fairness_. We ascertain that such a fairness requirement with no prior demographic information essentially promotes training losses to exhibit a Dirac delta distribution. To this end, we propose a simple but effective method named VFair to minimize the variance of training losses inside the optimal set of empirical losses. This problem is then optimized by a tailored dynamic update approach that operates in both loss and gradient dimensions, directing the model towards relatively fairer solutions while preserving its intact utility. Our experimental findings indicate that regression tasks, which are relatively unexplored in the literature, can achieve significant fairness improvement through VFair regardless of any prior, whereas classification tasks usually do not because of their quantized utility measurements. The implementation of our method is publicly available at https://github.com/wxqpxw/VFair.
Towards Harmless Rawlsian Fairness Regardless of Demographic Prior
[ "Xuanqian Wang", "Jing Li", "Ivor Tsang", "Yew-Soon Ong" ]
NeurIPS.cc/2024/Conference
2411.02467
[ "https://github.com/wxqpxw/vfair" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
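A hedged sketch of the quantity VFair minimizes: the variance of per-sample training losses, folded here into a single penalized objective with a fixed weight `lam` in place of the paper's tailored dynamic update.
```python
import torch

def vfair_objective(per_sample_losses, lam=1.0):
    """per_sample_losses: (batch,) tensor of unreduced losses."""
    mean = per_sample_losses.mean()
    var = per_sample_losses.var(unbiased=False)  # spread across (unknown) groups
    return mean + lam * var                      # utility term + fairness term
```
Driving the loss variance toward zero pushes the loss distribution toward the Dirac delta the abstract identifies with demographic-free Rawlsian fairness.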
null
https://openreview.net/forum?id=7Tir0u0ukg
@inproceedings{ hsu2024randomized, title={Randomized Exploration in Cooperative Multi-Agent Reinforcement Learning}, author={Hao-Lun Hsu and Weixin Wang and Miroslav Pajic and Pan Xu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7Tir0u0ukg} }
We present the first study on provably efficient randomized exploration in cooperative multi-agent reinforcement learning (MARL). We propose a unified algorithm framework for randomized exploration in parallel Markov Decision Processes (MDPs), and two Thompson Sampling (TS)-type algorithms, CoopTS-PHE and CoopTS-LMC, incorporating the perturbed-history exploration (PHE) strategy and the Langevin Monte Carlo exploration (LMC) strategy respectively, which are flexible in design and easy to implement in practice. For a special class of parallel MDPs where the transition is (approximately) linear, we theoretically prove that both CoopTS-PHE and CoopTS-LMC achieve a $\widetilde{\mathcal{O}}(d^{3/2}H^2\sqrt{MK})$ regret bound with communication complexity $\widetilde{\mathcal{O}}(dHM^2)$, where $d$ is the feature dimension, $H$ is the horizon length, $M$ is the number of agents, and $K$ is the number of episodes. This is the first theoretical result for randomized exploration in cooperative MARL. We evaluate our proposed method on multiple parallel RL environments, including a deep exploration problem (i.e., $N$-chain), a video game, and a real-world problem in energy systems. Our experimental results support that our framework can achieve better performance, even under conditions of misspecified transition models. Additionally, we establish a connection between our unified framework and the practical application of federated learning.
Randomized Exploration in Cooperative Multi-Agent Reinforcement Learning
[ "Hao-Lun Hsu", "Weixin Wang", "Miroslav Pajic", "Pan Xu" ]
NeurIPS.cc/2024/Conference
2404.10728
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
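A hedged sketch of the perturbed-history exploration (PHE) ingredient named above, in its simplest linear form: refit a ridge estimate on rewards perturbed with fresh Gaussian noise, so the randomness of the fit drives exploration.
```python
import numpy as np

def phe_estimate(features, rewards, noise_scale=1.0, reg=1.0):
    """features: (n, d) history of feature vectors; rewards: (n,) observed rewards.
    Returns a randomized parameter estimate fit on the perturbed history."""
    perturbed = rewards + noise_scale * np.random.randn(len(rewards))
    A = features.T @ features + reg * np.eye(features.shape[1])
    return np.linalg.solve(A, features.T @ perturbed)
```
In the cooperative setting of the paper, each agent acts greedily with respect to such a randomized estimate, and agents periodically communicate to aggregate their histories.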
null
https://openreview.net/forum?id=7Swrtm9Qsp
@inproceedings{ qiao2024stable, title={Stable Minima Cannot Overfit in Univariate Re{LU} Networks: Generalization by Large Step Sizes}, author={Dan Qiao and Kaiqi Zhang and Esha Singh and Daniel Soudry and Yu-Xiang Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7Swrtm9Qsp} }
We study the generalization of two-layer ReLU neural networks in a univariate nonparametric regression problem with noisy labels. This is a problem where kernels (e.g., NTK) are provably sub-optimal and benign overfitting does not happen, thus disqualifying existing theory for interpolating (0-loss, globally optimal) solutions. We present a new theory of generalization for local minima that gradient descent with a constant learning rate can *stably* converge to. We show that gradient descent with a fixed learning rate $\eta$ can only find local minima that represent smooth functions with a certain weighted *first-order total variation* bounded by $1/\eta - 1/2 + \widetilde{O}(\sigma + \sqrt{\mathrm{MSE}})$, where $\sigma$ is the label noise level, $\mathrm{MSE}$ is short for mean squared error against the ground truth, and $\widetilde{O}(\cdot)$ hides a logarithmic factor. Under mild assumptions, we also prove a nearly-optimal MSE bound of $\widetilde{O}(n^{-4/5})$ within the strict interior of the support of the $n$ data points. Our theoretical results are validated by extensive simulations demonstrating that large-learning-rate training induces sparse linear spline fits. To the best of our knowledge, we are the first to obtain a generalization bound via minima stability in the non-interpolation case and the first to show that ReLU NNs without regularization can achieve near-optimal rates in nonparametric regression.
Stable Minima Cannot Overfit in Univariate ReLU Networks: Generalization by Large Step Sizes
[ "Dan Qiao", "Kaiqi Zhang", "Esha Singh", "Daniel Soudry", "Yu-Xiang Wang" ]
NeurIPS.cc/2024/Conference
2406.06838
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=7Sh0XkN1KS
@inproceedings{ medvedev2024overfitting, title={Overfitting Behaviour of Gaussian Kernel Ridgeless Regression: Varying Bandwidth or Dimensionality}, author={Marko Medvedev and Gal Vardi and Nathan Srebro}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7Sh0XkN1KS} }
We consider the overfitting behavior of minimum norm interpolating solutions of Gaussian kernel ridge regression (i.e. kernel ridgeless regression), when the bandwidth or input dimension varies with the sample size. For fixed dimensions, we show that even with varying or tuned bandwidth, the ridgeless solution is never consistent and, at least with large enough noise, always worse than the null predictor. For increasing dimension, we give a generic characterization of the overfitting behavior for any scaling of the dimension with sample size. We use this to provide the first example of benign overfitting using the Gaussian kernel with sub-polynomial scaling dimension. All our results are under the Gaussian universality ansatz and the (non-rigorous) risk predictions in terms of the kernel eigenstructure.
Overfitting Behaviour of Gaussian Kernel Ridgeless Regression: Varying Bandwidth or Dimensionality
[ "Marko Medvedev", "Gal Vardi", "Nathan Srebro" ]
NeurIPS.cc/2024/Conference
2409.03891
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7RwKMRMNrc
@inproceedings{ moutakanni2024you, title={You Don{\textquoteright}t Need Domain-Specific Data Augmentations When Scaling Self-Supervised Learning}, author={Th{\'e}o Moutakanni and Maxime Oquab and Marc Szafraniec and Maria Vakalopoulou and Piotr Bojanowski}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7RwKMRMNrc} }
Self-supervised learning (SSL) with Joint-Embedding Architectures (JEA) has led to outstanding performance. All instantiations of this paradigm were trained using strong and well-established hand-crafted data augmentations, leading to the general belief that they are required for the proper training and performance of such models. On the other hand, generative reconstruction-based models such as BEiT and MAE, or Joint-Embedding Predictive Architectures such as I-JEPA, have shown strong performance without using data augmentations except masking. In this work, we challenge the importance of invariance and data augmentation in JEAs at scale. By running a case study on a recent SSL foundation model -- DINOv2 -- we show that strong image representations can be obtained with JEAs and only cropping without resizing, provided the training data is large enough, reaching state-of-the-art results while using the least amount of augmentation in the literature. Through this study, we also discuss the impact of compute constraints on the outcomes of experimental deep learning research, showing that they can lead to very different conclusions.
You Don’t Need Domain-Specific Data Augmentations When Scaling Self-Supervised Learning
[ "Théo Moutakanni", "Maxime Oquab", "Marc Szafraniec", "Maria Vakalopoulou", "Piotr Bojanowski" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7RQvjayHrM
@inproceedings{ chen2024routerdc, title={Router{DC}: Query-Based Router by Dual Contrastive Learning for Assembling Large Language Models}, author={Shuhao Chen and Weisen Jiang and Baijiong Lin and James Kwok and Yu Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7RQvjayHrM} }
Recent works show that assembling multiple off-the-shelf large language models (LLMs) can harness their complementary abilities. To achieve this, routing is a promising method, which learns a router to select the most suitable LLM for each query. However, existing routing models are ineffective when multiple LLMs perform well for a query. To address this problem, in this paper, we propose a method called query-based Router by Dual Contrastive learning (RouterDC). The RouterDC model, which consists of an encoder and LLM embeddings, is trained by two proposed contrastive losses (sample-LLM and sample-sample losses). Experimental results show that RouterDC is effective in assembling LLMs and largely outperforms individual top-performing LLMs as well as existing routing methods on both in-distribution (+2.76\%) and out-of-distribution (+1.90\%) tasks. The source code is available at https://github.com/shuhao02/RouterDC.
RouterDC: Query-Based Router by Dual Contrastive Learning for Assembling Large Language Models
[ "Shuhao Chen", "Weisen Jiang", "Baijiong Lin", "James Kwok", "Yu Zhang" ]
NeurIPS.cc/2024/Conference
2409.19886
[ "https://github.com/shuhao02/routerdc" ]
https://huggingface.co/papers/2409.19886
0
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=7QG9R8urVy
@inproceedings{ mao2024doubly, title={Doubly Mild Generalization for Offline Reinforcement Learning}, author={Yixiu Mao and Cheems Wang and Yun Qu and Yuhang Jiang and Xiangyang Ji}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7QG9R8urVy} }
Offline Reinforcement Learning (RL) suffers from extrapolation error and value overestimation. From a generalization perspective, this issue can be attributed to the over-generalization of value functions or policies towards out-of-distribution (OOD) actions. Significant efforts have been devoted to mitigating such generalization, and recent in-sample learning approaches have further succeeded in entirely eschewing it. Nevertheless, we show that mild generalization beyond the dataset can be trusted and leveraged to improve performance under certain conditions. To appropriately exploit generalization in offline RL, we propose Doubly Mild Generalization (DMG), comprising (i) mild action generalization and (ii) mild generalization propagation. The former refers to selecting actions in a close neighborhood of the dataset to maximize the Q values. Even so, the potential erroneous generalization can still be propagated, accumulated, and exacerbated by bootstrapping. In light of this, the latter concept is introduced to mitigate the generalization propagation without impeding the propagation of RL learning signals. Theoretically, DMG guarantees better performance than the in-sample optimal policy in the oracle generalization scenario. Even under worst-case generalization, DMG can still control value overestimation at a certain level and lower bound the performance. Empirically, DMG achieves state-of-the-art performance across Gym-MuJoCo locomotion tasks and challenging AntMaze tasks. Moreover, benefiting from its flexibility in both generalization aspects, DMG enjoys a seamless transition from offline to online learning and attains strong online fine-tuning performance.
Doubly Mild Generalization for Offline Reinforcement Learning
[ "Yixiu Mao", "Cheems Wang", "Yun Qu", "Yuhang Jiang", "Xiangyang Ji" ]
NeurIPS.cc/2024/Conference
2411.07934
[ "https://github.com/maoyixiu/dmg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7PORYhql4V
@inproceedings{ wang2024great, title={Great Minds Think Alike: The Universal Convergence Trend of Input Salience}, author={Yipei Wang and Jeffrey Mark Siskind and Xiaoqian Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7PORYhql4V} }
Uncertainty is introduced in optimized DNNs through stochastic algorithms, forming specific distributions. Training models can be seen as random sampling from this distribution of optimized models. In this work, we study the distribution of optimized DNNs as a family of functions by leveraging a pointwise approach. We focus on the input saliency maps, as the input gradient field is decisive to the models' mathematical essence. Our investigation of saliency maps reveals a counter-intuitive trend: two stochastically optimized models tend to resemble each other more as either of their capacities increases. Therefore, we hypothesize several properties of these distributions, suggesting that (1) within the same model architecture (e.g., CNNs, ResNets), different family variants (e.g., varying capacities) tend to align in terms of the population mean directions of their input salience, and (2) the distributions of optimized models follow a convergence trend to their shared population mean as the capacity increases. Furthermore, we propose semi-parametric distributions based on the Saw distribution to model the convergence trend, satisfying all the counter-intuitive observations. Our experiments shed light on the significant implications of our hypotheses in various application domains, including black-box attacks, deep ensembles, etc. These findings not only enhance our understanding of DNN behaviors but also offer valuable insights for their practical application in diverse areas of deep learning.
Great Minds Think Alike: The Universal Convergence Trend of Input Salience
[ "Yipei Wang", "Jeffrey Mark Siskind", "Xiaoqian Wang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7O6KtaAr8n
@inproceedings{ pardeshi2024learning, title={Learning Social Welfare Functions}, author={Kanad Shrikar Pardeshi and Itai Shapira and Ariel D. Procaccia and Aarti Singh}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7O6KtaAr8n} }
Is it possible to understand or imitate a policy maker's rationale by looking at past decisions they made? We formalize this question as the problem of learning social welfare functions belonging to the well-studied family of power mean functions. We focus on two learning tasks; in the first, the input is vectors of utilities of an action (decision or policy) for individuals in a group and their associated social welfare as judged by a policy maker, whereas in the second, the input is pairwise comparisons between the welfares associated with a given pair of utility vectors. We show that power mean functions are learnable with polynomial sample complexity in both cases, even if the social welfare information is noisy. Finally, we design practical algorithms for these tasks and evaluate their performance.
Learning Social Welfare Functions
[ "Kanad Shrikar Pardeshi", "Itai Shapira", "Ariel D. Procaccia", "Aarti Singh" ]
NeurIPS.cc/2024/Conference
2405.17700
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=7Ntft3U7jj
@inproceedings{ wang2024uncovering, title={Uncovering the Redundancy in Graph Self-supervised Learning Models}, author={Zhibiao Wang and Xiao Wang and Haoyue Deng and Nian Liu and Shirui Pan and Chunming Hu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7Ntft3U7jj} }
Graph self-supervised learning, as a powerful pre-training paradigm for Graph Neural Networks (GNNs) without labels, has received considerable attention. We have witnessed the success of graph self-supervised learning in pre-training the parameters of GNNs, leading many not to question whether the learned GNN parameters are all useful. In this paper, by presenting experimental evidence and analysis, we surprisingly discover that graph self-supervised learning models are highly redundant at both the neuron and layer levels, e.g., even after randomly removing 51.6\% of parameters, graph self-supervised learning models still retain at least 96.2\% of their performance. This discovery implies that the parameters of graph self-supervised models can be largely reduced, making it more feasible to simultaneously fine-tune both graph self-supervised learning models and prediction layers. Therefore, we further design a novel graph pre-training and fine-tuning paradigm called SLImming DE-correlation Fine-tuning (SLIDE). The effectiveness of SLIDE is verified through extensive experiments on various benchmarks, and performance can even be improved with fewer model parameters in most cases. For example, in comparison with fully fine-tuning GraphMAE on the Amazon-Computers dataset, even after randomly reducing 40\% of parameters, we can still achieve improvements of 0.24\% and 0.27\% in Micro-F1 and Macro-F1 scores respectively.
Uncovering the Redundancy in Graph Self-supervised Learning Models
[ "Zhibiao Wang", "Xiao Wang", "Haoyue Deng", "Nian Liu", "Shirui Pan", "Chunming Hu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7NrYnCN2be
@inproceedings{ qu2024boosting, title={Boosting Semi-Supervised Scene Text Recognition via Viewing and Summarizing}, author={Yadong Qu and Yuxin Wang and Bangbang Zhou and Zixiao Wang and Hongtao Xie and Yongdong Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7NrYnCN2be} }
Existing scene text recognition (STR) methods struggle to recognize challenging texts, especially artistic and severely distorted characters. The limitation lies in the insufficient exploration of character morphologies, including the monotony of widely used synthetic training data and the sensitivity of the model to character morphologies. To address these issues, inspired by the human learning process of viewing and summarizing, we facilitate the contrastive learning-based STR framework in a self-motivated manner by leveraging synthetic and real unlabeled data without any human cost. In the viewing process, to compensate for the simplicity of synthetic data and enrich character morphology diversity, we propose an Online Generation Strategy to generate background-free samples with diverse character styles. By excluding background noise distractions, the model is encouraged to focus on character morphology and generalize the ability to recognize complex samples when trained with only simple synthetic data. To boost the summarizing process, we theoretically demonstrate the derivation error in the previous character contrastive loss, which mistakenly causes sparsity in the intra-class distribution and exacerbates ambiguity on challenging samples. Therefore, a new Character Unidirectional Alignment Loss is proposed to correct this error and unify the representation of the same characters in all samples by aligning the character features in the student model with the reference features in the teacher model. Extensive experimental results show that our method achieves SOTA performance (94.7\% and 70.9\% average accuracy on common benchmarks and Union14M-Benchmark). Code will be available.
Boosting Semi-Supervised Scene Text Recognition via Viewing and Summarizing
[ "Yadong Qu", "Yuxin Wang", "Bangbang Zhou", "Zixiao Wang", "Hongtao Xie", "Yongdong Zhang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7Mo1NOosNT
@inproceedings{ joshi2024cold, title={{COLD}: Causal reasOning in cLosed Daily activities}, author={Abhinav Joshi and Areeb Ahmad and Ashutosh Modi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7Mo1NOosNT} }
Large Language Models (LLMs) have shown state-of-the-art performance in a variety of tasks, including arithmetic and reasoning; however, to gauge the intellectual capabilities of LLMs, causal reasoning has become a reliable proxy for validating a general understanding of the mechanics and intricacies of the world similar to humans. Previous works in natural language processing (NLP) have either focused on open-ended causal reasoning via causal commonsense reasoning (CCR) or framed symbolic representation-based question answering for theoretically backed-up analysis via a causal inference engine. The former adds the advantage of real-world grounding but lacks theoretically backed-up analysis/validation, whereas the latter is far from real-world grounding. In this work, we bridge this gap by proposing the COLD (Causal reasOning in cLosed Daily activities) framework, which is built upon human understanding of daily real-world activities to reason about the causal nature of events. We show that the proposed framework facilitates the creation of enormous causal queries (∼ 9 million) and comes close to the mini-Turing test, simulating causal reasoning to evaluate the understanding of a daily real-world task. We evaluate multiple LLMs on the created causal queries and find that causal reasoning is challenging even for activities trivial to humans. We further explore the causal reasoning abilities of LLMs using the backdoor criterion to determine the causal strength between events.
COLD: Causal reasOning in cLosed Daily activities
[ "Abhinav Joshi", "Areeb Ahmad", "Ashutosh Modi" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7Lv8zHQWwS
@inproceedings{ zou2024a, title={A Boosting-Type Convergence Result for AdaBoost.{MH} with Factorized Multi-Class Classifiers}, author={Xin Zou and Zhengyu Zhou and Jingyuan Xu and Weiwei Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7Lv8zHQWwS} }
AdaBoost is a well-known algorithm in boosting. Schapire and Singer propose an extension of AdaBoost, named AdaBoost.MH, for multi-class classification problems. Kégl shows empirically that AdaBoost.MH works better when the classical one-against-all base classifiers are replaced by factorized base classifiers containing a binary classifier and a vote (or code) vector. However, the factorization makes it much more difficult to provide a convergence result for the factorized version of AdaBoost.MH. Then, Kégl raises an open problem in COLT 2014 to look for a convergence result for the factorized AdaBoost.MH. In this work, we resolve this open problem by presenting a convergence result for AdaBoost.MH with factorized multi-class classifiers.
A Boosting-Type Convergence Result for AdaBoost.MH with Factorized Multi-Class Classifiers
[ "Xin Zou", "Zhengyu Zhou", "Jingyuan Xu", "Weiwei Liu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7LIm53Jiic
@inproceedings{ yu2024error, title={Error Correction Output Codes for Robust Neural Networks against Weight-errors: A Neural Tangent Kernel Point of View}, author={Anlan Yu and Shusen Jing and Ning Lyu and Wujie Wen and Zhiyuan Yan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7LIm53Jiic} }
Error correcting output code (ECOC) is a classic method that encodes binary classifiers to tackle the multi-class classification problem in decision trees and neural networks. Among ECOCs, the one-hot code has become the default choice in modern deep neural networks (DNNs) due to its simplicity in decision making. However, it suffers from a significant limitation in its ability to achieve high robust accuracy, particularly in the presence of weight errors. While recent studies have experimentally demonstrated that non-one-hot ECOCs with multi-bit error correction ability could be a better solution, there is a notable absence of theoretical foundations that can elucidate the relationship between codeword design, weight-error magnitude, and network characteristics, so as to provide robustness guarantees. This work is positioned to bridge this gap through the lens of the neural tangent kernel (NTK). We have two important theoretical findings: 1) In clean models (without weight errors), utilizing one-hot code and non-one-hot ECOC is akin to altering decoding metrics from $l_2$ distance to Mahalanobis distance. 2) In non-clean models (with weight errors), if the normalized distance exceeds a threshold, then non-clean DNNs can reach the clean model's accuracy as the code length approaches infinity. This threshold is determined by the DNN architecture (e.g., layer number, activation), weight error magnitude, and the distance between the output and the nearest codeword. Based on these findings, we further demonstrate how to practically use them to identify optimal ECOCs for simple tasks (short-code ECOCs) and complex tasks (long-code ECOCs), by balancing code orthogonality (as per finding 1) and code distance (as per finding 2). Extensive experimental results across four datasets and four DNN models validate the superior performance of the constructed codes, guided by our findings, compared to existing ECOCs. To the best of our knowledge, this is the first work that provides theoretical explanations for the effectiveness of ECOCs and offers associated design guidance for optimal ECOCs specifically tailored to DNNs.
Error Correction Output Codes for Robust Neural Networks against Weight-errors: A Neural Tangent Kernel Point of View
[ "Anlan Yu", "Shusen Jing", "Ning Lyu", "Wujie Wen", "Zhiyuan Yan" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7L2tCirpwB
@inproceedings{ li2024error, title={Error Analysis of Spherically Constrained Least Squares Reformulation in Solving the Stackelberg Prediction Game}, author={Xiyuan Li and Weiwei Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7L2tCirpwB} }
The Stackelberg prediction game (SPG) is a popular model for characterizing strategic interactions between a learner and an adversarial data provider. Although optimization problems in SPGs are often NP-hard, a notable special case involving the least squares loss (SPG-LS) has gained significant research attention recently (Bishop et al., 2020; Wang et al., 2021; Wang et al., 2022). The latest state-of-the-art method for solving the SPG-LS problem is the spherically constrained least squares reformulation (SCLS) method proposed in the work of Wang et al. (2022). However, the lack of theoretical analysis on the error of the SCLS method limits its large-scale applications. In this paper, we investigate the estimation error between the learner obtained by the SCLS method and the actual learner. Specifically, we reframe the estimation error of the SCLS method as a Primary Optimization ($\textbf{PO}$) problem and utilize the Convex Gaussian min-max theorem (CGMT) to transform the $\textbf{PO}$ problem into an Auxiliary Optimization ($\textbf{AO}$) problem. Subsequently, we provide a theoretical error analysis for the SCLS method based on this simplified $\textbf{AO}$ problem. This analysis not only strengthens the theoretical framework of the SCLS method but also confirms the reliability of the learner produced by it. We further conduct experiments to validate our theorems, and the results are in excellent agreement with our theoretical predictions.
Error Analysis of Spherically Constrained Least Squares Reformulation in Solving the Stackelberg Prediction Game
[ "Xiyuan Li", "Weiwei Liu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7Kz7icCZ6H
@inproceedings{ somepalli2024calvin, title={{CALVIN}: Improved Contextual Video Captioning via Instruction Tuning}, author={Gowthami Somepalli and Arkabandhu Chowdhury and Jonas Geiping and Ronen Basri and Tom Goldstein and David W. Jacobs}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7Kz7icCZ6H} }
The recent emergence of powerful Vision-Language models (VLMs) has significantly improved image captioning. Some of these models are extended to caption videos as well. However, their capabilities to understand complex scenes are limited, and the descriptions they provide for scenes tend to be overly verbose and focused on the superficial appearance of objects. Scene descriptions, especially in movies, require a deeper contextual understanding, unlike general-purpose video captioning. To address this challenge, we propose a model, CALVIN, a specialized video LLM that leverages previous movie context to generate fully "contextual" scene descriptions. To achieve this, we train our model on a suite of tasks that integrate both image-based question-answering and video captioning within a unified framework, before applying instruction tuning to refine the model's ability to provide scene captions. Lastly, we observe that our model responds well to prompt engineering and few-shot in-context learning techniques, enabling the user to adapt it to any new movie with very little additional annotation.
CALVIN: Improved Contextual Video Captioning via Instruction Tuning
[ "Gowthami Somepalli", "Arkabandhu Chowdhury", "Jonas Geiping", "Ronen Basri", "Tom Goldstein", "David W. Jacobs" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7Jb4NJS8Yk
@inproceedings{ guan2024richelieu, title={Richelieu: Self-Evolving {LLM}-Based Agents for {AI} Diplomacy}, author={Zhenyu Guan and Xiangyu Kong and Fangwei Zhong and Yizhou Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7Jb4NJS8Yk} }
Diplomacy is one of the most sophisticated activities in human society, involving complex interactions among multiple parties that require skills in social reasoning, negotiation, and long-term strategic planning. Previous AI agents have demonstrated their ability to handle multi-step games and large action spaces in multi-agent tasks. However, diplomacy involves a staggering magnitude of decision spaces, especially considering the negotiation stage required. While recent agents based on large language models (LLMs) have shown potential in various applications, they still struggle with extended planning periods in complex multi-agent settings. Leveraging recent technologies for LLM-based agents, we aim to explore AI's potential to create a human-like agent capable of executing comprehensive multi-agent missions by integrating three fundamental capabilities: 1) strategic planning with memory and reflection; 2) goal-oriented negotiation with social reasoning; and 3) augmenting memory through self-play games for self-evolution without a human in the loop.
Richelieu: Self-Evolving LLM-Based Agents for AI Diplomacy
[ "Zhenyu Guan", "Xiangyu Kong", "Fangwei Zhong", "Yizhou Wang" ]
NeurIPS.cc/2024/Conference
2407.06813
[ "https://github.com/todexter3/richelieu" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7Hb03vGcJk
@inproceedings{ xu2024slotvlm, title={Slot-{VLM}: Object-Event Slots for Video-Language Modeling}, author={Jiaqi Xu and Cuiling Lan and Wenxuan Xie and Xuejin Chen and Yan Lu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7Hb03vGcJk} }
Video-Language Models (VLMs), powered by the advancements in Large Language Models (LLMs), are charting new frontiers in video understanding. A pivotal challenge is the development of an effective method to encapsulate video content into a set of representative tokens to align with LLMs. In this work, we introduce Slot-VLM, a new framework designed to generate semantically decomposed video tokens, in terms of object-wise and event-wise visual representations, to facilitate LLM inference. Particularly, we design an Object-Event Slots module, i.e., OE-Slots, that adaptively aggregates the dense video tokens from the vision encoder to a set of representative slots. In order to take into account both the spatial object details and the varied temporal dynamics, we build OE-Slots with two branches: the Object-Slots branch and the Event-Slots branch. The Object-Slots branch focuses on extracting object-centric slots from features of high spatial resolution but low frame sample rate, emphasizing detailed object information. The Event-Slots branch is engineered to learn event-centric slots from high temporal sample rate but low spatial resolution features. These complementary slots are combined to form the vision context, serving as the input to the LLM for effective video reasoning. Our experimental results demonstrate the effectiveness of our Slot-VLM, which achieves the state-of-the-art performance on video question-answering.
Slot-VLM: Object-Event Slots for Video-Language Modeling
[ "Jiaqi Xu", "Cuiling Lan", "Wenxuan Xie", "Xuejin Chen", "Yan Lu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7HFQfRjdcn
@inproceedings{ chen2024neural, title={Neural Characteristic Activation Analysis and Geometric Parameterization for Re{LU} Networks}, author={Wenlin Chen and Hong Ge}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7HFQfRjdcn} }
We introduce a novel approach for analyzing the training dynamics of ReLU networks by examining the characteristic activation boundaries of individual ReLU neurons. Our proposed analysis reveals a critical instability in common neural network parameterizations and normalizations during stochastic optimization, which impedes fast convergence and hurts generalization performance. Addressing this, we propose Geometric Parameterization (GmP), a novel neural network parameterization technique that effectively separates the radial and angular components of weights in the hyperspherical coordinate system. We show theoretically that GmP resolves the aforementioned instability issue. We report empirical results on various models and benchmarks to verify GmP's advantages in optimization stability, convergence speed, and generalization performance.
Neural Characteristic Activation Analysis and Geometric Parameterization for ReLU Networks
[ "Wenlin Chen", "Hong Ge" ]
NeurIPS.cc/2024/Conference
2305.15912
[ "https://github.com/Wenlin-Chen/geometric-parameterization" ]
https://huggingface.co/papers/2305.15912
0
0
0
2
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=7G362fgJFd
@inproceedings{ yuan2024factorized, title={Factorized Diffusion Architectures for Unsupervised Image Generation and Segmentation}, author={Xin Yuan and Michael Maire}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7G362fgJFd} }
We develop a neural network architecture which, trained in an unsupervised manner as a denoising diffusion model, simultaneously learns to both generate and segment images. Learning is driven entirely by the denoising diffusion objective, without any annotation or prior knowledge about regions during training. A computational bottleneck, built into the neural architecture, encourages the denoising network to partition an input into regions, denoise them in parallel, and combine the results. Our trained model generates both synthetic images and, by simple examination of its internal predicted partitions, semantic segmentations of those images. Without fine-tuning, we directly apply our unsupervised model to the downstream task of segmenting real images via noising and subsequently denoising them. Experiments demonstrate that our model achieves accurate unsupervised image segmentation and high-quality synthetic image generation across multiple datasets.
Factorized Diffusion Architectures for Unsupervised Image Generation and Segmentation
[ "Xin Yuan", "Michael Maire" ]
NeurIPS.cc/2024/Conference
2309.15726
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7Fzx3Akdt5
@inproceedings{ racz2024harnessing, title={Harnessing Multiple Correlated Networks for Exact Community Recovery}, author={Miklos Z. Racz and Jifan Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7Fzx3Akdt5} }
We study the problem of learning latent community structure from multiple correlated networks, focusing on edge-correlated stochastic block models with two balanced communities. Recent work of Gaudio, Rácz, and Sridhar (COLT 2022) determined the precise information-theoretic threshold for exact community recovery using two correlated graphs; in particular, this showcased the subtle interplay between community recovery and graph matching. Here we study the natural setting of more than two graphs. The main challenge lies in understanding how to aggregate information across several graphs when none of the pairwise latent vertex correspondences can be exactly recovered. Our main result derives the precise information-theoretic threshold for exact community recovery using any constant number of correlated graphs, answering a question of Gaudio, Rácz, and Sridhar (COLT 2022). In particular, for every $K \geq 3$ we uncover and characterize a region of the parameter space where exact community recovery is possible using $K$ correlated graphs, even though (1) this is information-theoretically impossible using any $K-1$ of them and (2) none of the latent matchings can be exactly recovered.
Harnessing Multiple Correlated Networks for Exact Community Recovery
[ "Miklos Z. Racz", "Jifan Zhang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7FokMz6U8n
@inproceedings{ treutlein2024connecting, title={Connecting the Dots: {LLM}s can Infer and Verbalize Latent Structure from Disparate Training Data}, author={Johannes Treutlein and Dami Choi and Jan Betley and Samuel Marks and Cem Anil and Roger Baker Grosse and Owain Evans}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7FokMz6U8n} }
One way to address safety risks from large language models (LLMs) is to censor dangerous knowledge from their training data. While this removes the explicit information, implicit information can remain scattered across various training documents. Could an LLM infer the censored knowledge by piecing together these implicit hints? As a step towards answering this question, we study inductive out-of-context reasoning (OOCR), a type of generalization in which LLMs infer latent information from evidence distributed across training documents and apply it to downstream tasks without in-context learning. Using a suite of five tasks, we demonstrate that frontier LLMs can perform inductive OOCR. In one experiment we finetune an LLM on a corpus consisting only of distances between an unknown city and other known cities. Remarkably, without in-context examples or Chain of Thought, the LLM can verbalize that the unknown city is Paris and use this fact to answer downstream questions. Further experiments show that LLMs trained only on individual coin flip outcomes can verbalize whether the coin is biased, and those trained only on pairs $(x,f(x))$ can articulate a definition of $f$ and compute inverses. While OOCR succeeds in a range of cases, we also show that it is unreliable, particularly for smaller LLMs learning complex structures. Overall, the ability of LLMs to "connect the dots" without explicit in-context learning poses a potential obstacle to monitoring and controlling the knowledge acquired by LLMs.
Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data
[ "Johannes Treutlein", "Dami Choi", "Jan Betley", "Samuel Marks", "Cem Anil", "Roger Baker Grosse", "Owain Evans" ]
NeurIPS.cc/2024/Conference
2406.14546
[ "https://github.com/choidami/inductive-oocr" ]
https://huggingface.co/papers/2406.14546
1
2
0
7
[ "sunatte/txt2sql", "MachoMaheen/devdock4bit" ]
[]
[ "smarttang/blingsec" ]
[ "sunatte/txt2sql", "MachoMaheen/devdock4bit" ]
[]
[ "smarttang/blingsec" ]
1
poster
null
https://openreview.net/forum?id=7ESHFpqjNO
@inproceedings{ pettersen2024learning, title={Learning Place Cell Representations and Context-Dependent Remapping}, author={Markus Pettersen and Frederik Rogge and Mikkel Elle Lepper{\o}d}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7ESHFpqjNO} }
Hippocampal place cells are known for their spatially selective firing patterns, which have led to the suggestion that they encode an animal's location. However, place cells also respond to contextual cues, such as smell. Furthermore, they have the ability to remap, wherein the firing fields and rates of cells change in response to changes in the environment. How place cell responses emerge, and how these representations remap, is not fully understood. In this work, we propose a similarity-based objective function that translates proximity in space to proximity in representation. We show that a neural network trained to minimize the proposed objective learns place-like representations. We also show that the proposed objective is easily extended to include other sources of information, such as context information, in the same way. When trained to encode multiple contexts, networks learn distinct representations, exhibiting remapping behaviors between contexts. The proposed objective is invariant to orthogonal transformations. Such transformations of the original trained representation (e.g., rotations) therefore yield new representations distinct from the original, without explicit relearning, akin to remapping. Our findings shed new light on the formation and encoding properties of place cells, and also demonstrate an interesting case of representational reuse.
Learning Place Cell Representations and Context-Dependent Remapping
[ "Markus Pettersen", "Frederik Rogge", "Mikkel Elle Lepperød" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7EQx56YSB2
@inproceedings{ gorbunov2024group, title={Group and Shuffle: Efficient Structured Orthogonal Parametrization}, author={Mikhail Gorbunov and Kolya Yudin and Vera Soboleva and Aibek Alanov and Alexey Naumov and Maxim Rakhuba}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7EQx56YSB2} }
The increasing size of neural networks has led to a growing demand for efficient fine-tuning methods. Recently, an orthogonal fine-tuning paradigm was introduced that uses orthogonal matrices for adapting the weights of a pretrained model. In this paper, we introduce a new class of structured matrices, which unifies and generalizes the structured classes from previous works. We examine the properties of this class and build a structured orthogonal parametrization upon it. We then use this parametrization to modify the orthogonal fine-tuning framework, improving parameter efficiency. We empirically validate our method on different domains, including adaptation of text-to-image diffusion models and downstream task fine-tuning in language modeling. Additionally, we adapt our construction for orthogonal convolutions and conduct experiments with 1-Lipschitz neural networks.
Group and Shuffle: Efficient Structured Orthogonal Parametrization
[ "Mikhail Gorbunov", "Kolya Yudin", "Vera Soboleva", "Aibek Alanov", "Alexey Naumov", "Maxim Rakhuba" ]
NeurIPS.cc/2024/Conference
2406.10019
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7Dep87TMJs
@inproceedings{ rakotomandimby2024learning, title={Learning with Fitzpatrick Losses}, author={Seta Rakotomandimby and Jean-Philippe Chancelier and Michel De Lara and Mathieu Blondel}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7Dep87TMJs} }
Fenchel-Young losses are a family of convex loss functions, encompassing the squared, logistic and sparsemax losses, among others. Each Fenchel-Young loss is implicitly associated with a link function for mapping model outputs to predictions. For instance, the logistic loss is associated with the soft argmax link function. Can we build new loss functions associated with the same link function as Fenchel-Young losses? In this paper, we introduce Fitzpatrick losses, a new family of convex loss functions based on the Fitzpatrick function. A well-known theoretical tool in maximal monotone operator theory, the Fitzpatrick function naturally leads to a refined Fenchel-Young inequality, making Fitzpatrick losses tighter than Fenchel-Young losses, while maintaining the same link function for prediction. As an example, we introduce the Fitzpatrick logistic loss and the Fitzpatrick sparsemax loss, counterparts of the logistic and the sparsemax losses. This yields two new tighter losses associated with the soft argmax and the sparse argmax, two of the most ubiquitous output layers used in machine learning. We study in detail the properties of Fitzpatrick losses and, in particular, show that they can be seen as Fenchel-Young losses using a modified, target-dependent generating function. We demonstrate the effectiveness of Fitzpatrick losses for label proportion estimation.
Learning with Fitzpatrick Losses
[ "Seta Rakotomandimby", "Jean-Philippe Chancelier", "Michel De Lara", "Mathieu Blondel" ]
NeurIPS.cc/2024/Conference
2405.14574
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7CUUtpDeqN
@inproceedings{ goswami2024analytically, title={Analytically deriving Partial Information Decomposition for affine systems of stable and convolution-closed distributions}, author={Chaitanya Goswami and Amanda Merkley}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7CUUtpDeqN} }
Bivariate partial information decomposition (PID) has emerged as a promising tool for analyzing interactions in complex systems, particularly in neuroscience. PID achieves this by decomposing the information that two sources (e.g., different brain regions) have about a target (e.g., a stimulus) into unique, redundant, and synergistic terms. However, the computation of PID remains a challenging problem, often involving optimization over distributions. While several methods have been proposed to compute PID terms numerically, there is a surprising dearth of work on computing PID terms analytically. The only known analytical PID result is for jointly Gaussian distributions. In this work, we present two theoretical advances that enable analytical calculation of the PID terms for numerous well-known distributions, including distributions relevant to neuroscience, such as Poisson, Cauchy, and binomial. Our first result generalizes the analytical Gaussian PID result to the much larger class of stable distributions. We also discover a theoretical link between PID and the emerging fields of data thinning and data fission. Our second result utilizes this link to derive analytical PID terms for two more classes of distributions: convolution-closed distributions and a sub-class of the exponential family. Furthermore, we provide an analytical upper bound for approximately calculating PID for convolution-closed distributions, whose tightness we demonstrate in simulation.
Analytically deriving Partial Information Decomposition for affine systems of stable and convolution-closed distributions
[ "Chaitanya Goswami", "Amanda Merkley" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7CMNSqsZJt
@inproceedings{ cohen-wang2024contextcite, title={ContextCite: Attributing Model Generation to Context}, author={Benjamin Cohen-Wang and Harshay Shah and Kristian Georgiev and Aleksander Madry}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7CMNSqsZJt} }
How do language models use information provided as context when generating a response? Can we infer whether a particular generated statement is actually grounded in the context, a misinterpretation, or fabricated? To help answer these questions, we introduce the problem of *context attribution*: pinpointing the parts of the context (if any) that *led* a model to generate a particular statement. We then present ContextCite, a simple and scalable method for context attribution that can be applied on top of any existing language model. Finally, we showcase the utility of ContextCite through three applications: (1) helping verify generated statements, (2) improving response quality by pruning the context, and (3) detecting poisoning attacks. We provide code for ContextCite at https://github.com/MadryLab/context-cite.
ContextCite: Attributing Model Generation to Context
[ "Benjamin Cohen-Wang", "Harshay Shah", "Kristian Georgiev", "Aleksander Madry" ]
NeurIPS.cc/2024/Conference
2409.00729
[ "https://github.com/madrylab/context-cite" ]
https://huggingface.co/papers/2409.00729
3
13
3
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=7AXY27kdNH
@inproceedings{ annadani2024amortized, title={Amortized Active Causal Induction with Deep Reinforcement Learning}, author={Yashas Annadani and Panagiotis Tigas and Stefan Bauer and Adam Foster}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7AXY27kdNH} }
We present Causal Amortized Active Structure Learning (CAASL), an active intervention design policy that selects adaptive, real-time interventions and does not require access to the likelihood. This policy, an amortized network based on the transformer, is trained with reinforcement learning on a simulator of the design environment and a reward function that measures how close the true causal graph is to a causal graph posterior inferred from the gathered data. On synthetic data and a single-cell gene expression simulator, we demonstrate empirically that the data acquired through our policy results in a better estimate of the underlying causal graph than alternative strategies. Our design policy successfully achieves amortized intervention design on the distribution of the training environment while also generalizing well to distribution shifts in test-time design environments. Further, our policy also demonstrates excellent zero-shot generalization to design environments with dimensionality higher than that during training, and to intervention types that it has not been trained on.
Amortized Active Causal Induction with Deep Reinforcement Learning
[ "Yashas Annadani", "Panagiotis Tigas", "Stefan Bauer", "Adam Foster" ]
NeurIPS.cc/2024/Conference
2405.16718
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7AWMTPMZES
@inproceedings{ gu2024discrete, title={Discrete Modeling via Boundary Conditional Diffusion Processes}, author={Yuxuan Gu and Xiaocheng Feng and Lei Huang and Yingsheng Wu and Zekun Zhou and Weihong Zhong and kun Zhu and Bing Qin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7AWMTPMZES} }
We present a novel framework for efficiently and effectively extending the powerful continuous diffusion processes to discrete modeling. Previous approaches have suffered from the discrepancy between discrete data and continuous modeling. Our study reveals that the absence of guidance from discrete boundaries in learning probability contours is one of the main reasons. To address this issue, we propose a two-step forward process that first estimates the boundary as a prior distribution and then rescales the forward trajectory to construct a boundary conditional diffusion model. The reverse process is proportionally adjusted to guarantee that the learned contours yield more precise discrete data. Experimental results indicate that our approach achieves strong performance in both language modeling and discrete image generation tasks. In language modeling, our approach surpasses previous state-of-the-art continuous diffusion language models in three translation tasks and a summarization task, while also demonstrating competitive performance compared to auto-regressive transformers. Moreover, our method achieves comparable results to continuous diffusion models when using discrete ordinal pixels and establishes a new state-of-the-art for categorical image generation on the CIFAR-10 dataset.
Discrete Modeling via Boundary Conditional Diffusion Processes
[ "Yuxuan Gu", "Xiaocheng Feng", "Lei Huang", "Yingsheng Wu", "Zekun Zhou", "Weihong Zhong", "kun Zhu", "Bing Qin" ]
NeurIPS.cc/2024/Conference
2410.22380
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7ANmKBfP88
@inproceedings{ liu2024right, title={Right this way: Can {VLM}s Guide Us to See More to Answer Questions?}, author={Li Liu and Diji Yang and Sijia Zhong and Kalyana Suma Sree Tholeti and Lei Ding and Yi Zhang and Leilani H. Gilpin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=7ANmKBfP88} }
In question-answering scenarios, humans can assess whether the available information is sufficient and seek additional information if necessary, rather than providing a forced answer. In contrast, Vision Language Models (VLMs) typically generate direct, one-shot responses without evaluating the sufficiency of the information. To investigate this gap, we identify a critical and challenging task in the Visual Question Answering (VQA) scenario: can VLMs indicate how to adjust an image when the visual information is insufficient to answer a question? This capability is especially valuable for assisting visually impaired individuals who often need guidance to capture images correctly. To evaluate this capability of current VLMs, we introduce a human-labeled dataset as a benchmark for this task. Additionally, we present an automated framework that generates synthetic training data by simulating ``where to know'' scenarios. Our empirical results show significant performance improvements in mainstream VLMs when fine-tuned with this synthetic data. This study demonstrates the potential to narrow the gap between information assessment and acquisition in VLMs, bringing their performance closer to humans.
Right this way: Can VLMs Guide Us to See More to Answer Questions?
[ "Li Liu", "Diji Yang", "Sijia Zhong", "Kalyana Suma Sree Tholeti", "Lei Ding", "Yi Zhang", "Leilani H. Gilpin" ]
NeurIPS.cc/2024/Conference
2411.00394
[ "https://github.com/LeoLee7/Directional_guidance" ]
https://huggingface.co/papers/2411.00394
1
0
0
7
[]
[ "LeoLee7/Directional_Guidance" ]
[]
[]
[ "LeoLee7/Directional_Guidance" ]
[]
1
poster
null
https://openreview.net/forum?id=79q206xswc
@inproceedings{ li2024is, title={Is Your Li{DAR} Placement Optimized for 3D Scene Understanding?}, author={Ye Li and Lingdong Kong and Hanjiang Hu and Xiaohao Xu and Xiaonan Huang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=79q206xswc} }
The reliability of driving perception systems under unprecedented conditions is crucial for practical usage. Recent advancements have prompted increasing interest in multi-LiDAR perception. However, prevailing driving datasets predominantly utilize single-LiDAR systems and collect data devoid of adverse conditions, failing to capture the complexities of real-world environments accurately. Addressing these gaps, we propose Place3D, a full-cycle pipeline that encompasses LiDAR placement optimization, data generation, and downstream evaluations. Our framework makes three appealing contributions. 1) To identify the most effective configurations for multi-LiDAR systems, we introduce the Surrogate Metric of the Semantic Occupancy Grids (M-SOG) to evaluate LiDAR placement quality. 2) Leveraging the M-SOG metric, we propose a novel optimization strategy to refine multi-LiDAR placements. 3) Centered around the theme of multi-condition multi-LiDAR perception, we collect a 280,000-frame dataset from both clean and adverse conditions. Extensive experiments demonstrate that LiDAR placements optimized using our approach outperform various baselines. We showcase exceptional results in both LiDAR semantic segmentation and 3D object detection tasks, under diverse weather and sensor failure conditions.
Is Your LiDAR Placement Optimized for 3D Scene Understanding?
[ "Ye Li", "Lingdong Kong", "Hanjiang Hu", "Xiaohao Xu", "Xiaonan Huang" ]
NeurIPS.cc/2024/Conference
2403.17009
[ "https://github.com/ywyeli/place3d" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=79eWvkLjib
@inproceedings{ jeen2024zeroshot, title={Zero-Shot Reinforcement Learning from Low Quality Data}, author={Scott Jeen and Tom Bewley and Jonathan Cullen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=79eWvkLjib} }
Zero-shot reinforcement learning (RL) promises to provide agents that can perform _any_ task in an environment after an offline, reward-free pre-training phase. Methods leveraging successor measures and successor features have shown strong performance in this setting, but require access to large heterogeneous datasets for pre-training which cannot be expected for most real problems. Here, we explore how the performance of zero-shot RL methods degrades when trained on small homogeneous datasets, and propose fixes inspired by _conservatism_, a well-established feature of performant single-task offline RL algorithms. We evaluate our proposals across various datasets, domains and tasks, and show that conservative zero-shot RL algorithms outperform their non-conservative counterparts on low quality datasets, and perform no worse on high quality datasets. Somewhat surprisingly, our proposals also outperform baselines that get to see the task during training. Our code is available via the project page https://enjeeneer.io/projects/zero-shot-rl/.
Zero-Shot Reinforcement Learning from Low Quality Data
[ "Scott Jeen", "Tom Bewley", "Jonathan Cullen" ]
NeurIPS.cc/2024/Conference
2309.15178
[ "https://github.com/enjeeneer/conservative-world-models" ]
https://huggingface.co/papers/2309.15178
0
1
0
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=792txRlKit
@inproceedings{ gan2024datastealing, title={DataStealing: Steal Data from Diffusion Models in Federated Learning with Multiple Trojans}, author={Yuan Gan and Jiaxu Miao and Yi Yang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=792txRlKit} }
Federated Learning (FL) is commonly used to collaboratively train models with privacy preservation. In this paper, we find that popular diffusion models introduce a new vulnerability to FL, which poses serious privacy threats. Despite stringent data management measures, attackers can steal massive private data from local clients through multiple Trojans, which control generative behaviors with multiple triggers. We refer to the new task as ${\bf\textit{DataStealing}}$ and demonstrate that the attacker can achieve this goal based on our proposed Combinatorial Triggers (ComboTs) in a vanilla FL system. However, advanced distance-based FL defenses are still effective in filtering malicious updates according to the distances between local updates. Hence, we propose an Adaptive Scale Critical Parameters (AdaSCP) attack to circumvent the defenses and seamlessly incorporate malicious updates into the global model. Specifically, AdaSCP evaluates the importance of parameters with the gradients in dominant timesteps of the diffusion model. Subsequently, it adaptively seeks the optimal scale factor and magnifies critical parameter updates before uploading them to the server. As a result, the malicious update becomes similar to the benign update, making it difficult for distance-based defenses to identify. Extensive experiments reveal the risk of leaking thousands of images in training diffusion models with FL. Moreover, these experiments demonstrate the effectiveness of AdaSCP in defeating advanced distance-based defenses. We hope this work will attract more attention from the FL community to the critical privacy security issues of Diffusion Models. Code: https://github.com/yuangan/DataStealing.
DataStealing: Steal Data from Diffusion Models in Federated Learning with Multiple Trojans
[ "Yuan Gan", "Jiaxu Miao", "Yi Yang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=780uXnA4wN
@inproceedings{ wang2024an, title={An Efficient High-dimensional Gradient Estimator for Stochastic Differential Equations}, author={Shengbo Wang and Jose Blanchet and Peter Glynn}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=780uXnA4wN} }
Overparameterized stochastic differential equation (SDE) models have achieved remarkable success in various complex environments, such as PDE-constrained optimization, stochastic control and reinforcement learning, financial engineering, and neural SDEs. These models often feature system evolution coefficients that are parameterized by a high-dimensional vector $\theta \in \mathbb{R}^n$, aiming to optimize expectations of the SDE, such as a value function, through stochastic gradient ascent. Consequently, designing efficient gradient estimators for which the computational complexity scales well with $n$ is of significant interest. This paper introduces a novel unbiased stochastic gradient estimator—the generator gradient estimator—for which the computation time remains stable in $n$. In addition to establishing the validity of our methodology for general SDEs with jumps, we also perform numerical experiments that test our estimator in linear-quadratic control problems parameterized by high-dimensional neural networks. The results show a significant improvement in efficiency compared to the widely used pathwise differentiation method: Our estimator achieves near-constant computation times, increasingly outperforms its counterpart as $n$ increases, and does so without compromising estimation variance. These empirical findings highlight the potential of our proposed methodology for optimizing SDEs in contemporary applications.
An Efficient High-dimensional Gradient Estimator for Stochastic Differential Equations
[ "Shengbo Wang", "Jose Blanchet", "Peter Glynn" ]
NeurIPS.cc/2024/Conference
2407.10065
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=77kCJzvpOa
@inproceedings{ wang2024language, title={Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models}, author={Hui-Po Wang and Mario Fritz}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=77kCJzvpOa} }
Despite the widespread use of statistical prior models in various fields, such models for neural network gradients have long been overlooked. The inherent challenge stems from their high-dimensional structures and complex interdependencies, which complicate effective modeling. In this work, we demonstrate the potential of large language models (LLMs) to act as gradient priors in a zero-shot setting. We examine this property by considering lossless gradient compression -- a critical application in distributed learning -- that depends heavily on precise probability modeling. To achieve this, we introduce LM-GC, a novel method that integrates LLMs with arithmetic coding. Our technique converts plain gradients into text-like formats, enhancing token efficiency by up to 38 times compared to their plain representations. We ensure that this data conversion maintains a close alignment with the structure of plain gradients and the symbols commonly recognized by LLMs. Our experiments indicate that LM-GC surpasses existing state-of-the-art lossless compression methods, improving compression rates by 10\% up to 21\% across various datasets and architectures. Additionally, our approach shows promising compatibility with lossy compression techniques such as quantization and sparsification. These findings highlight the significant potential of LLMs as a model for effectively handling gradients. We will release the source code upon publication.
Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models
[ "Hui-Po Wang", "Mario Fritz" ]
NeurIPS.cc/2024/Conference
2409.17836
[ "https://github.com/hui-po-wang/LM-GC" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
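Editorial sketch for the LM-GC record above: the paper's key preprocessing step converts plain gradients into a text-like format before LLM-driven arithmetic coding. The exact serialization is not specified in the abstract, so the hexadecimal grouping below is only an assumed stand-in for that step.

import numpy as np

def gradient_to_text(grad, sep=" "):
    # serialize raw float32 bytes as separator-delimited hex pairs,
    # a text-like format that an LLM tokenizer can consume
    raw = np.asarray(grad, dtype=np.float32).tobytes()
    return sep.join(f"{b:02x}" for b in raw)

g = np.random.randn(4).astype(np.float32)
print(gradient_to_text(g))  # 16 hex pairs for 4 float32 values

In the full method, an LLM's next-token probabilities over such strings would drive an arithmetic coder; that part is omitted here.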
null
https://openreview.net/forum?id=76NKidadct
@inproceedings{ nitanda2024improved, title={Improved Particle Approximation Error for Mean Field Neural Networks}, author={Atsushi Nitanda}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=76NKidadct} }
Mean-field Langevin dynamics (MFLD) minimizes an entropy-regularized nonlinear convex functional defined over the space of probability distributions. MFLD has gained attention due to its connection with noisy gradient descent for mean-field two-layer neural networks. Unlike standard Langevin dynamics, the nonlinearity of the objective functional induces particle interactions, necessitating multiple particles to approximate the dynamics in a finite-particle setting. Recent works (Chen et al., 2022; Suzuki et al., 2023b) have demonstrated the uniform-in-time propagation of chaos for MFLD, showing that the gap between the particle system and its mean-field limit uniformly shrinks over time as the number of particles increases. In this work, we improve the dependence on logarithmic Sobolev inequality (LSI) constants in their particle approximation errors, which can exponentially deteriorate with the regularization coefficient. Specifically, we establish an LSI-constant-free particle approximation error concerning the objective gap by leveraging the problem structure in risk minimization. As the application, we demonstrate improved convergence of MFLD, sampling guarantee for the mean-field stationary distribution, and uniform-in-time Wasserstein propagation of chaos in terms of particle complexity.
Improved Particle Approximation Error for Mean Field Neural Networks
[ "Atsushi Nitanda" ]
NeurIPS.cc/2024/Conference
2405.15767
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
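Background note for the MFLD record above, stated in its standard textbook form rather than quoted from the paper: the finite-particle approximation evolves $N$ interacting particles as $\mathrm{d}X_t^i = -\nabla \frac{\delta F}{\delta \mu}(\hat{\mu}_t^N)(X_t^i)\,\mathrm{d}t + \sqrt{2\lambda}\,\mathrm{d}W_t^i$ with empirical measure $\hat{\mu}_t^N = \frac{1}{N}\sum_{i=1}^{N}\delta_{X_t^i}$, where $F$ is the convex functional over distributions and $\lambda$ the entropic regularization strength. The particle approximation error studied in the paper is the gap between this $N$-particle system and its mean-field limit.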
null
https://openreview.net/forum?id=76CZrhbMoo
@inproceedings{ ekin2024clipaway, title={{CLIPA}way: Harmonizing focused embeddings for removing objects via diffusion models}, author={Yi{\u{g}}it Ekin and Ahmet Burak Yildirim and Erdem Eren Caglar and Aykut Erdem and Erkut Erdem and Aysegul Dundar}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=76CZrhbMoo} }
Advanced image editing techniques, particularly inpainting, are essential for seamlessly removing unwanted elements while preserving visual integrity. Traditional GAN-based methods have achieved notable success, but recent advancements in diffusion models have produced superior results due to their training on large-scale datasets, enabling the generation of remarkably realistic inpainted images. Despite their strengths, diffusion models often struggle with object removal tasks without explicit guidance, leading to unintended hallucinations of the removed object. To address this issue, we introduce CLIPAway, a novel approach leveraging CLIP embeddings to focus on background regions while excluding foreground elements. CLIPAway enhances inpainting accuracy and quality by identifying embeddings that prioritize the background, thus achieving seamless object removal. Unlike other methods that rely on specialized training datasets or costly manual annotations, CLIPAway provides a flexible, plug-and-play solution compatible with various diffusion-based inpainting techniques.
CLIPAway: Harmonizing focused embeddings for removing objects via diffusion models
[ "Yiğit Ekin", "Ahmet Burak Yildirim", "Erdem Eren Caglar", "Aykut Erdem", "Erkut Erdem", "Aysegul Dundar" ]
NeurIPS.cc/2024/Conference
2406.09368
[ "https://github.com/YigitEkin/CLIPAway" ]
https://huggingface.co/papers/2406.09368
0
0
0
6
[]
[]
[ "yigitekin/CLIPAway" ]
[]
[]
[ "yigitekin/CLIPAway" ]
1
poster
null
https://openreview.net/forum?id=74c9EOng9C
@inproceedings{ chen2024diffusion, title={Diffusion Policies Creating a Trust Region for Offline Reinforcement Learning}, author={Tianyu Chen and Zhendong Wang and Mingyuan Zhou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=74c9EOng9C} }
Offline reinforcement learning (RL) leverages pre-collected datasets to train optimal policies. Diffusion Q-Learning (DQL), introducing diffusion models as a powerful and expressive policy class, significantly boosts the performance of offline RL. However, its reliance on iterative denoising sampling to generate actions slows down both training and inference. While several recent attempts have tried to accelerate diffusion-QL, the improvement in training and/or inference speed often results in degraded performance. In this paper, we introduce a dual policy approach, Diffusion Trusted Q-Learning (DTQL), which comprises a diffusion policy for pure behavior cloning and a practical one-step policy. We bridge the two policies by a newly introduced diffusion trust region loss. The diffusion policy maintains expressiveness, while the trust region loss directs the one-step policy to explore freely and seek modes within the region defined by the diffusion policy. DTQL eliminates the need for iterative denoising sampling during both training and inference, making it remarkably computationally efficient. We evaluate its effectiveness and algorithmic characteristics against popular Kullback-Leibler (KL) based distillation methods in 2D bandit scenarios and gym tasks. We then show that DTQL not only outperforms other methods on the majority of the D4RL benchmark tasks but also demonstrates efficiency in training and inference speeds. The PyTorch implementation is available at https://github.com/TianyuCodings/Diffusion_Trusted_Q_Learning.
Diffusion Policies Creating a Trust Region for Offline Reinforcement Learning
[ "Tianyu Chen", "Zhendong Wang", "Mingyuan Zhou" ]
NeurIPS.cc/2024/Conference
2405.19690
[ "https://github.com/tianyucodings/diffusion_trusted_q_learning" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=74B6qX62vW
@inproceedings{ ashtiani2024sampleefficient, title={Sample-Efficient Private Learning of Mixtures of Gaussians}, author={Hassan Ashtiani and Mahbod Majid and Shyam Narayanan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=74B6qX62vW} }
We study the problem of learning mixtures of Gaussians with approximate differential privacy. We prove that roughly $kd^2 + k^{1.5} d^{1.75} + k^2 d$ samples suffice to learn a mixture of $k$ arbitrary $d$-dimensional Gaussians up to low total variation distance, with differential privacy. Our work improves over the previous best result (which required roughly $k^2 d^4$ samples) and is provably optimal when $d$ is much larger than $k^2$. Moreover, we give the first optimal bound for privately learning mixtures of $k$ univariate (i.e., $1$-dimensional) Gaussians. Importantly, we show that the sample complexity for learning mixtures of univariate Gaussians is linear in the number of components $k$, whereas the previous best sample complexity was quadratic in $k$. Our algorithms utilize various techniques, including the inverse sensitivity mechanism, sample compression for distributions, and methods for bounding volumes of sumsets.
Sample-Efficient Private Learning of Mixtures of Gaussians
[ "Hassan Ashtiani", "Mahbod Majid", "Shyam Narayanan" ]
NeurIPS.cc/2024/Conference
2411.02298
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=739jAzUXk7
@inproceedings{ jiang2024pcotta, title={{PC}o{TTA}: Continual Test-Time Adaptation for Multi-Task Point Cloud Understanding}, author={Jincen Jiang and Qianyu Zhou and Yuhang Li and Xinkui Zhao and Meili Wang and Lizhuang Ma and Jian Chang and Jian Jun Zhang and Xuequan Lu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=739jAzUXk7} }
In this paper, we present PCoTTA, an innovative, pioneering framework for Continual Test-Time Adaptation (CoTTA) in multi-task point cloud understanding, enhancing the model's transferability towards the continually changing target domain. We introduce a multi-task setting for PCoTTA, which is practical and realistic, handling multiple tasks within one unified model during the continual adaptation. Our PCoTTA involves three key components: automatic prototype mixture (APM), Gaussian Splatted feature shifting (GSFS), and contrastive prototype repulsion (CPR). Firstly, APM is designed to automatically mix the source prototypes with the learnable prototypes with a similarity balancing factor, avoiding catastrophic forgetting. Then, GSFS dynamically shifts the testing sample toward the source domain, mitigating error accumulation in an online manner. In addition, CPR is proposed to pull the nearest learnable prototype close to the testing feature and push it away from other prototypes, making each prototype distinguishable during the adaptation. Experimental comparisons lead to a new benchmark, demonstrating PCoTTA's superiority in boosting the model's transferability towards the continually changing target domain. Our source code is available at: https://github.com/Jinec98/PCoTTA.
PCoTTA: Continual Test-Time Adaptation for Multi-Task Point Cloud Understanding
[ "Jincen Jiang", "Qianyu Zhou", "Yuhang Li", "Xinkui Zhao", "Meili Wang", "Lizhuang Ma", "Jian Chang", "Jian Jun Zhang", "Xuequan Lu" ]
NeurIPS.cc/2024/Conference
2411.00632
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=72tRD2Mfjd
@inproceedings{ ishfaq2024offline, title={Offline Multitask Representation Learning for Reinforcement Learning}, author={Haque Ishfaq and Thanh Nguyen-Tang and Songtao Feng and Raman Arora and Mengdi Wang and Ming Yin and Doina Precup}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=72tRD2Mfjd} }
We study offline multitask representation learning in reinforcement learning (RL), where a learner is provided with an offline dataset from different tasks that share a common representation and is asked to learn the shared representation. We theoretically investigate offline multitask low-rank RL, and propose a new algorithm called MORL for offline multitask representation learning. Furthermore, we examine downstream RL in reward-free, offline and online scenarios, where a new task is introduced to the agent that shares the same representation as the upstream offline tasks. Our theoretical results demonstrate the benefits of using the learned representation from the upstream offline task instead of directly learning the representation of the low-rank model.
Offline Multitask Representation Learning for Reinforcement Learning
[ "Haque Ishfaq", "Thanh Nguyen-Tang", "Songtao Feng", "Raman Arora", "Mengdi Wang", "Ming Yin", "Doina Precup" ]
NeurIPS.cc/2024/Conference
2403.11574
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6zROYoHlcp
@inproceedings{ zhou2024diffgs, title={Diff{GS}: Functional Gaussian Splatting Diffusion}, author={Junsheng Zhou and Weiqi Zhang and Yu-Shen Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6zROYoHlcp} }
3D Gaussian Splatting (3DGS) has shown convincing performance in rendering speed and fidelity, yet the generation of Gaussian Splatting remains a challenge due to its discreteness and unstructured nature. In this work, we propose DiffGS, a general Gaussian generator based on latent diffusion models. DiffGS is a powerful and efficient 3D generative model which is capable of generating Gaussian primitives at arbitrary numbers for high-fidelity rendering with rasterization. The key insight is to represent Gaussian Splatting in a disentangled manner via three novel functions to model Gaussian probabilities, colors and transforms. Through the novel disentanglement of 3DGS, we represent the discrete and unstructured 3DGS with continuous Gaussian Splatting functions, where we then train a latent diffusion model with the target of generating these Gaussian Splatting functions both unconditionally and conditionally. Meanwhile, we introduce a discretization algorithm to extract Gaussians at arbitrary numbers from the generated functions via octree-guided sampling and optimization. We explore DiffGS for various tasks, including unconditional generation, conditional generation from text, image, and partial 3DGS, as well as Point-to-Gaussian generation. We believe that DiffGS provides a new direction for flexibly modeling and generating Gaussian Splatting. Project page: https://junshengzhou.github.io/DiffGS.
DiffGS: Functional Gaussian Splatting Diffusion
[ "Junsheng Zhou", "Weiqi Zhang", "Yu-Shen Liu" ]
NeurIPS.cc/2024/Conference
2410.19657
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6zOKbzjBO4
@inproceedings{ erez2024fast, title={Fast Rates for Bandit {PAC} Multiclass Classification}, author={Liad Erez and Alon Cohen and Tomer Koren and Yishay Mansour and Shay Moran}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6zOKbzjBO4} }
We study multiclass PAC learning with bandit feedback, where inputs are classified into one of $K$ possible labels and feedback is limited to whether or not the predicted labels are correct. Our main contribution is in designing a novel learning algorithm for the agnostic $(\varepsilon,\delta)$-PAC version of the problem, with sample complexity of $O\big( (\operatorname{poly}(K) + 1 / \varepsilon^2) \log (|\mathcal{H}| / \delta) \big)$ for any finite hypothesis class $\mathcal{H}$. In terms of the leading dependence on $\varepsilon$, this improves upon existing bounds for the problem, that are of the form $O(K/\varepsilon^2)$. We also provide an extension of this result to general classes and establish similar sample complexity bounds in which $\log |\mathcal{H}|$ is replaced by the Natarajan dimension. This matches the optimal rate in the full-information version of the problem and resolves an open question studied by Daniely, Sabato, Ben-David, and Shalev-Shwartz (2011) who demonstrated that the multiplicative price of bandit feedback in realizable PAC learning is $\Theta(K)$. We complement this by revealing a stark contrast with the agnostic case, where the price of bandit feedback is only $O(1)$ as $\varepsilon \to 0$. Our algorithm utilizes a stochastic optimization technique to minimize a log-barrier potential based on Frank-Wolfe updates for computing a low-variance exploration distribution over the hypotheses, and is made computationally efficient provided access to an ERM oracle over $\mathcal{H}$.
Fast Rates for Bandit PAC Multiclass Classification
[ "Liad Erez", "Alon Cohen", "Tomer Koren", "Yishay Mansour", "Shay Moran" ]
NeurIPS.cc/2024/Conference
2406.12406
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6vNPPtWH1Q
@inproceedings{ fuchsgruber2024energybased, title={Energy-based Epistemic Uncertainty for Graph Neural Networks}, author={Dominik Fuchsgruber and Tom Wollschl{\"a}ger and Stephan G{\"u}nnemann}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6vNPPtWH1Q} }
In domains with interdependent data, such as graphs, quantifying the epistemic uncertainty of a Graph Neural Network (GNN) is challenging as uncertainty can arise at different structural scales. Existing techniques neglect this issue or only distinguish between structure-aware and structure-agnostic uncertainty without combining them into a single measure. We propose GEBM, an energy-based model (EBM) that provides high-quality uncertainty estimates by aggregating energy at different structural levels that naturally arise from graph diffusion. In contrast to logit-based EBMs, we provably induce an integrable density in the data space by regularizing the energy function. We introduce an evidential interpretation of our EBM that significantly improves the predictive robustness of the GNN. Our framework is a simple and effective post hoc method applicable to any pre-trained GNN that is sensitive to various distribution shifts. It consistently achieves the best separation of in-distribution and out-of-distribution data on 6 out of 7 anomaly types while having the best average rank over shifts on *all* datasets.
Energy-based Epistemic Uncertainty for Graph Neural Networks
[ "Dominik Fuchsgruber", "Tom Wollschläger", "Stephan Günnemann" ]
NeurIPS.cc/2024/Conference
2406.04043
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
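Editorial sketch for the GEBM record above: the abstract names two ingredients, a logit-based energy and aggregation across structural scales induced by graph diffusion. A rough NumPy analogue is below; the actual GEBM aggregation, regularization, and evidential interpretation are more involved, and everything here is an illustrative assumption.

import numpy as np

def node_energy(logits):
    # logit-based energy: E(x) = -logsumexp(logits), computed stably
    m = logits.max(axis=-1)
    return -(m + np.log(np.exp(logits - m[:, None]).sum(axis=-1)))

def multi_scale_energy(P, logits, hops=2):
    # P: row-stochastic adjacency; diffusing per-node energies over P mixes
    # the structure-agnostic (0-hop) and structure-aware (k-hop) scales
    e = node_energy(logits)
    scales = [e]
    for _ in range(hops):
        e = P @ e
        scales.append(e)
    return np.mean(scales, axis=0)  # higher = more epistemic uncertainty

rng = np.random.default_rng(0)
print(multi_scale_energy(np.full((5, 5), 0.2), rng.normal(size=(5, 3))))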
null
https://openreview.net/forum?id=6vDYsXn0Dl
@inproceedings{ zou2024linear, title={Linear Time Approximation Algorithm for Column Subset Selection with Local Search}, author={YuanBin Zou and Ziyun Huang and Jinhui Xu and Jianxin Wang and Qilong Feng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6vDYsXn0Dl} }
The Column Subset Selection (CSS) problem has been widely studied in dimensionality reduction and feature selection. The goal of the CSS problem is to output a submatrix $S$, consisting of $k$ columns from an $n \times d$ input matrix $A$, that minimizes the residual error $\|A - SS^\dagger A\|_F^2$, where $S^\dagger$ is the Moore-Penrose inverse of $S$. Many previous approximation algorithms have non-linear running times in both $n$ and $d$, while the existing linear-time algorithms have relatively large approximation ratios. Additionally, the local search algorithms in existing results for solving the CSS problem are heuristic. To achieve linear running time while maintaining a better approximation using a local search strategy, we propose a local search-based approximation algorithm for the CSS problem with exactly $k$ columns selected. A key challenge in achieving linear running time with the local search strategy is how to avoid exhaustive enumerations of candidate columns for constructing swap pairs in each local search step. To address this issue, we propose a two-step mixed sampling method that reduces the number of enumerations for swap pair construction from $O(dk)$ to $k$ in linear time. Although the two-step mixed sampling method reduces the search space of the local search strategy, bounding the residual error after swaps is a non-trivial task. To estimate the changes in residual error after swaps, we propose a matched swap pair construction method to bound the approximation loss, ensuring a constant probability of loss reduction in each local search step. These techniques enable us to obtain a local search algorithm for the CSS problem with theoretical guarantees, where a $53(k+1)$-approximate solution can be obtained in expected linear running time $O(ndk^4 \log k)$. Empirical experiments show that our proposed algorithm achieves better solution quality and running time than previous algorithms on both small and large datasets. Moreover, it is at least 10 times faster than state-of-the-art algorithms across all large-scale datasets.
Linear Time Approximation Algorithm for Column Subset Selection with Local Search
[ "YuanBin Zou", "Ziyun Huang", "Jinhui Xu", "Jianxin Wang", "Qilong Feng" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
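To pin down the objective in the CSS record above, the residual error and a naive single-swap local search step can be written directly in NumPy. The brute-force swap enumeration below is exactly the $O(dk)$ search that the paper's two-step mixed sampling avoids; it is shown only to make the quantity being minimized concrete.

import numpy as np

def css_residual(A, cols):
    # ||A - S S^+ A||_F^2 for S = A[:, cols]
    S = A[:, cols]
    proj = S @ np.linalg.pinv(S) @ A
    return float(np.linalg.norm(A - proj, "fro") ** 2)

def best_single_swap(A, cols):
    # exhaustive swap-pair search: O(dk) residual evaluations
    best = (css_residual(A, cols), list(cols))
    for i in range(len(cols)):
        for new in range(A.shape[1]):
            if new in cols:
                continue
            cand = cols[:i] + [new] + cols[i + 1:]
            r = css_residual(A, cand)
            if r < best[0]:
                best = (r, cand)
    return best

A = np.random.randn(50, 20)
print(best_single_swap(A, [0, 1, 2])[0])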
null
https://openreview.net/forum?id=6uv9ViIoMj
@inproceedings{ kim2024towards, title={Towards Next-Level Post-Training Quantization of Hyper-Scale Transformers}, author={Junhan Kim and Chungman Lee and Eulrang Cho and Kyungphil Park and Ho-young Kim and Joonyoung Kim and Yongkweon Jeon}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6uv9ViIoMj} }
With the increasing complexity of generative AI models, post-training quantization (PTQ) has emerged as a promising solution for deploying hyper-scale models on edge devices such as mobile phones and TVs. Existing PTQ schemes, however, consume considerable time and resources, which could be a bottleneck in real situations where frequent model updates and multiple hyperparameter tunings are required. As a cost-effective alternative, learning-free PTQ schemes have been proposed. However, their performance is somewhat limited because they cannot consider the inter-layer dependency within the attention module, which is a significant feature of Transformers. In this paper, we thus propose a novel PTQ algorithm that balances accuracy and efficiency. The key idea of the proposed algorithm, called aespa, is to perform quantization layer-wise for efficiency while targeting attention-wise reconstruction to account for the cross-layer dependency. Through extensive experiments on various language models and complexity analysis, we demonstrate that aespa is accurate and efficient in quantizing Transformer models. The code will be available at https://github.com/SamsungLabs/aespa.
Towards Next-Level Post-Training Quantization of Hyper-Scale Transformers
[ "Junhan Kim", "Chungman Lee", "Eulrang Cho", "Kyungphil Park", "Ho-young Kim", "Joonyoung Kim", "Yongkweon Jeon" ]
NeurIPS.cc/2024/Conference
2402.08958
[ "" ]
https://huggingface.co/papers/2402.08958
1
3
1
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=6uRrwWhZlM
@inproceedings{ wu2024prompt, title={Prompt Optimization with {EASE}? Efficient Ordering-aware Automated Selection of Exemplars}, author={Zhaoxuan Wu and Xiaoqiang Lin and Zhongxiang Dai and Wenyang Hu and Yao Shu and See-Kiong Ng and Patrick Jaillet and Bryan Kian Hsiang Low}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6uRrwWhZlM} }
Large language models (LLMs) have shown impressive capabilities in real-world applications. The capability of *in-context learning* (ICL) allows us to adapt an LLM to downstream tasks by including input-label exemplars in the prompt without model fine-tuning. However, the quality of these exemplars in the prompt greatly impacts performance, highlighting the need for an effective automated exemplar selection method. Recent studies have explored retrieval-based approaches to select exemplars tailored to individual test queries, which can be undesirable due to extra test-time computation and an increased risk of data exposure. Moreover, existing methods fail to adequately account for the impact of exemplar ordering on the performance. On the other hand, the impact of the *instruction*, another essential component in the prompt given to the LLM, is often overlooked in existing exemplar selection methods. To address these challenges, we propose a novel method named $\texttt{EASE}$, which leverages the hidden embedding from a pre-trained language model to represent ordered sets of exemplars and uses a neural bandit algorithm to optimize the sets of exemplars *while accounting for exemplar ordering*. Our $\texttt{EASE}$ can efficiently find an ordered set of exemplars that *performs well for all test queries* from a given task, thereby eliminating test-time computation. Importantly, $\texttt{EASE}$ can be readily extended to *jointly optimize both the exemplars and the instruction*. Through extensive empirical evaluations (including novel tasks), we demonstrate the superiority of $\texttt{EASE}$ over existing methods, and reveal practical insights about the impact of exemplar selection on ICL, which may be of independent interest. Our code is available at https://github.com/ZhaoxuanWu/EASE-Prompt-Optimization.
Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars
[ "Zhaoxuan Wu", "Xiaoqiang Lin", "Zhongxiang Dai", "Wenyang Hu", "Yao Shu", "See-Kiong Ng", "Patrick Jaillet", "Bryan Kian Hsiang Low" ]
NeurIPS.cc/2024/Conference
2405.16122
[ "https://github.com/zhaoxuanwu/ease-prompt-optimization" ]
https://huggingface.co/papers/2405.16122
2
0
0
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=6sIOBDwr6d
@inproceedings{ moran2024consensus, title={Consensus Learning with Deep Sets for Essential Matrix Estimation}, author={Dror Moran and Yuval Margalit and Guy Trostianetsky and Fadi Khatib and Meirav Galun and Ronen Basri}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6sIOBDwr6d} }
Robust estimation of the essential matrix, which encodes the relative position and orientation of two cameras, is a fundamental step in structure from motion pipelines. Recent deep-based methods achieved accurate estimation by using complex network architectures that involve graphs, attention layers, and hard pruning steps. Here, we propose a simpler network architecture based on Deep Sets. Given a collection of point matches extracted from two images, our method identifies outlier point matches and models the displacement noise in inlier matches. A weighted DLT module uses these predictions to regress the essential matrix. Our network achieves accurate recovery that is superior to existing networks with significantly more complex architectures.
Consensus Learning with Deep Sets for Essential Matrix Estimation
[ "Dror Moran", "Yuval Margalit", "Guy Trostianetsky", "Fadi Khatib", "Meirav Galun", "Ronen Basri" ]
NeurIPS.cc/2024/Conference
2406.17414
[ "https://github.com/drormoran/NACNet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
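The weighted DLT module in the record above follows the classical weighted eight-point construction; a minimal NumPy sketch is given below, assuming normalized homogeneous correspondences and letting the network-predicted weights simply scale each epipolar constraint row. The inlier/outlier classification and noise modeling from the paper are omitted.

import numpy as np

def weighted_dlt_essential(x1, x2, w):
    # x1, x2: (N, 3) homogeneous correspondences (N >= 8); w: (N,) weights
    # each row of X encodes the epipolar constraint x2^T E x1 = 0
    X = np.einsum("ni,nj->nij", x2, x1).reshape(-1, 9)
    _, _, Vt = np.linalg.svd(w[:, None] * X)
    E = Vt[-1].reshape(3, 3)  # right singular vector of least singular value
    # project onto the essential manifold: singular values (1, 1, 0)
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt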
null
https://openreview.net/forum?id=6qr3932RWe
@inproceedings{ li2024memorize, title={Memorize What Matters: Emergent Scene Decomposition from Multitraverse}, author={Yiming Li and Zehong Wang and Yue Wang and Zhiding Yu and Zan Gojcic and Marco Pavone and Chen Feng and Jose M. Alvarez}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6qr3932RWe} }
Humans naturally retain memories of permanent elements, while ephemeral moments often slip through the cracks of memory. This selective retention is crucial for robotic perception, localization, and mapping. To endow robots with this capability, we introduce 3D Gaussian Mapping (3DGM), a self-supervised, camera-only offline mapping framework grounded in 3D Gaussian Splatting. 3DGM converts multitraverse RGB videos from the same region into a Gaussian-based environmental map while concurrently performing 2D ephemeral object segmentation. Our key observation is that the environment remains consistent across traversals, while objects frequently change. This allows us to exploit self-supervision from repeated traversals to achieve environment-object decomposition. More specifically, 3DGM formulates multitraverse environmental mapping as a robust 3D representation learning problem, treating pixels of the environment and objects as inliers and outliers, respectively. Using robust feature distillation, feature residual mining, and robust optimization, 3DGM simultaneously performs 2D segmentation and 3D mapping without human intervention. We build the Mapverse benchmark, sourced from the Ithaca365 and nuPlan datasets, to evaluate our method in unsupervised 2D segmentation, 3D reconstruction, and neural rendering. Extensive results verify the effectiveness and potential of our method for self-driving and robotics.
Memorize What Matters: Emergent Scene Decomposition from Multitraverse
[ "Yiming Li", "Zehong Wang", "Yue Wang", "Zhiding Yu", "Zan Gojcic", "Marco Pavone", "Chen Feng", "Jose M. Alvarez" ]
NeurIPS.cc/2024/Conference
2405.17187
[ "https://github.com/nvlabs/3dgm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=6pTlXqrO0p
@inproceedings{ cheng2024xrag, title={x{RAG}: Extreme Context Compression for Retrieval-augmented Generation with One Token}, author={Xin Cheng and Xun Wang and Xingxing Zhang and Tao Ge and Si-Qing Chen and Furu Wei and Huishuai Zhang and Dongyan Zhao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6pTlXqrO0p} }
This paper introduces xRAG, an innovative context compression method tailored for retrieval-augmented generation. xRAG reinterprets document embeddings in dense retrieval--traditionally used solely for retrieval--as features from the retrieval modality. By employing a modality fusion methodology, xRAG seamlessly integrates these embeddings into the language model representation space, effectively eliminating the need for their textual counterparts and achieving an extreme compression rate. In xRAG, the only trainable component is the modality bridge, while both the retriever and the language model remain frozen. This design choice allows for the reuse of offline-constructed document embeddings and preserves the plug-and-play nature of retrieval augmentation. Experimental results demonstrate that xRAG achieves an average improvement of over 10% across six knowledge-intensive tasks, adaptable to various language model backbones, ranging from a dense 7B model to an 8x7B Mixture of Experts configuration. xRAG not only significantly outperforms previous context compression methods but also matches the performance of uncompressed models on several datasets, while reducing overall FLOPs by a factor of 3.53. Our work pioneers new directions in retrieval-augmented generation from the perspective of multimodality fusion, and we hope it lays the foundation for future efficient and scalable retrieval-augmented systems.
xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token
[ "Xin Cheng", "Xun Wang", "Xingxing Zhang", "Tao Ge", "Si-Qing Chen", "Furu Wei", "Huishuai Zhang", "Dongyan Zhao" ]
NeurIPS.cc/2024/Conference
2405.13792
[ "https://github.com/Hannibal046/xRAG" ]
https://huggingface.co/papers/2405.13792
0
1
0
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=6osgTNnAZQ
@inproceedings{ ho2024block, title={Block Transformer: Global-to-Local Language Modeling for Fast Inference}, author={Namgyu Ho and Sangmin Bae and Taehyeon Kim and hyunjik.jo and Yireun Kim and Tal Schuster and Adam Fisch and James Thorne and Se-Young Yun}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6osgTNnAZQ} }
We introduce the Block Transformer, which applies hierarchical global-to-local modeling to autoregressive transformers to mitigate the inference bottlenecks associated with self-attention. Self-attention requires the key-value (KV) cache of all previous sequences to be fetched from memory at every decoding step to provide context information, leading to two primary bottlenecks during batch inference. First, there is a significant delay in obtaining the first token, as the information of the entire prompt must first be processed to prefill the KV cache. Second, computation of subsequent tokens is bottlenecked by the high memory I/O demand of fetching the entire KV cache, which grows linearly with sequence length, incurring quadratic memory reads overall. We design the Block Transformer to strategically mitigate these costs by incorporating coarseness and locality into an integrated global-to-local architecture. At the lower layers, we aggregate tokens into fixed-size blocks to apply attention across the entire sequence at coarse-grained detail, capturing the global context while minimizing KV cache overhead. At the upper layers, we apply attention within each block to decode individual tokens, modeling fine-grained details with a lightweight local KV cache. We pretrain vanilla and Block Transformers from scratch and demonstrate that Block Transformers reach 10-20x higher inference throughput than vanilla transformers with equivalent perplexity and zero-shot task performance.
Block Transformer: Global-to-Local Language Modeling for Fast Inference
[ "Namgyu Ho", "Sangmin Bae", "Taehyeon Kim", "hyunjik.jo", "Yireun Kim", "Tal Schuster", "Adam Fisch", "James Thorne", "Se-Young Yun" ]
NeurIPS.cc/2024/Conference
2406.02657
[ "https://github.com/itsnamgyu/block-transformer" ]
https://huggingface.co/papers/2406.02657
6
37
1
9
[]
[]
[]
[]
[]
[]
1
poster
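A toy rendering of the global-to-local split in the Block Transformer record above, covering only the shape bookkeeping; the pooling choice and sizes are placeholders, not the released architecture.

import torch

def pool_blocks(x, block):
    # (B, T, D) -> (B, T // block, D): one coarse embedding per block,
    # so the global KV cache holds T // block entries instead of T
    B, T, D = x.shape
    return x.view(B, T // block, block, D).mean(dim=2)

x = torch.randn(2, 64, 128)        # 64 tokens
print(pool_blocks(x, 4).shape)     # torch.Size([2, 16, 128])
# fine-grained decoding then attends only within each 4-token block,
# keeping the per-step local KV fetch bounded by the block size.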
null
https://openreview.net/forum?id=6n709MszkP
@inproceedings{ heeg2024using, title={Using Time-Aware Graph Neural Networks to Predict Temporal Centralities in Dynamic Graphs}, author={Franziska Heeg and Ingo Scholtes}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6n709MszkP} }
Node centralities play a pivotal role in network science, social network analysis, and recommender systems. In temporal data, static path-based centralities like closeness or betweenness can give misleading results about the true importance of nodes in a temporal graph. To address this issue, temporal generalizations of betweenness and closeness have been defined that are based on the shortest time-respecting paths between pairs of nodes. However, a major issue of those generalizations is that the calculation of such paths is computationally expensive. Addressing this issue, we study the application of De Bruijn Graph Neural Networks (DBGNN), a time-aware graph neural network architecture, to predict temporal path-based centralities in time series data. We experimentally evaluate our approach in 13 temporal graphs from biological and social systems and show that it considerably improves the prediction of betweenness and closeness centrality compared to (i) a static Graph Convolutional Neural Network, (ii) an efficient sampling-based approximation technique for temporal betweenness, and (iii) two state-of-the-art time-aware graph learning techniques for dynamic graphs.
Using Time-Aware Graph Neural Networks to Predict Temporal Centralities in Dynamic Graphs
[ "Franziska Heeg", "Ingo Scholtes" ]
NeurIPS.cc/2024/Conference
2310.15865
[ "https://github.com/pathpy/pathpyg" ]
https://huggingface.co/papers/2310.15865
0
0
0
2
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=6lx34fpanw
@inproceedings{ zhang2024improving, title={Improving Generalization in Federated Learning with Model-Data Mutual Information Regularization: A Posterior Inference Approach}, author={Hao Zhang and Chenglin Li and Nuowen Kan and Ziyang Zheng and Wenrui Dai and Junni Zou and Hongkai Xiong}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6lx34fpanw} }
Most existing federated learning (FL) formulations treat training as a point estimate of models, making them inherently prone to overfitting on scarce client-side data with overconfident decisions. Though Bayesian inference can alleviate this issue, a direct posterior inference at clients may result in biased local posterior estimates due to data heterogeneity, leading to a sub-optimal global posterior. From an information-theoretic perspective, we propose FedMDMI, a federated posterior inference framework based on model-data mutual information (MI). Specifically, a global model-data MI term is introduced as regularization to enforce the global model to learn essential information from the heterogeneous local data, alleviating the bias caused by data heterogeneity and hence enhancing generalization. To make this global MI tractable, we decompose it into local MI terms at the clients, converting the global objective with MI regularization into several locally optimizable objectives based on local data. For these local objectives, we further show that the optimal local posterior is a Gibbs posterior, which can be efficiently sampled with stochastic gradient Langevin dynamics methods. Finally, at the server, we approximate sampling from the global Gibbs posterior by simply averaging samples from the local posteriors. Theoretical analysis provides a generalization bound for FL w.r.t. the model-data MI, which, at different levels of regularization, represents a federated version of the bias-variance trade-off. Experimental results demonstrate that FedMDMI achieves better generalization behavior with better-calibrated uncertainty estimates.
Improving Generalization in Federated Learning with Model-Data Mutual Information Regularization: A Posterior Inference Approach
[ "Hao Zhang", "Chenglin Li", "Nuowen Kan", "Ziyang Zheng", "Wenrui Dai", "Junni Zou", "Hongkai Xiong" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6lwKOvL3KN
@inproceedings{ khandelwal2024adaptive, title={Adaptive Visual Scene Understanding: Incremental Scene Graph Generation}, author={Naitik Khandelwal and Xiao Liu and Mengmi Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6lwKOvL3KN} }
Scene graph generation (SGG) analyzes images to extract meaningful information about objects and their relationships. In the dynamic visual world, it is crucial for AI systems to continuously detect new objects and establish their relationships with existing ones. Recently, numerous studies have focused on continual learning within the domains of object detection and image recognition. However, a limited amount of research focuses on a more challenging continual learning problem in SGG. This increased difficulty arises from the intricate interactions and dynamic relationships among objects and their associated contexts. Thus, in continual learning, SGG models are often required to expand, modify, retain, and reason over scene graphs within the process of adaptive visual scene understanding. To systematically explore Continual Scene Graph Generation (CSEGG), we present a comprehensive benchmark comprising three learning regimes: relationship incremental, scene incremental, and relationship generalization. Moreover, we introduce a "Replays via Analysis by Synthesis" method named RAS. This approach leverages the scene graphs, decomposes and re-composes them to represent different scenes, and replays the synthesized scenes based on these compositional scene graphs. The replayed synthesized scenes act as a means to practice and refine proficiency in SGG in known and unknown environments. Our experimental results not only highlight the challenges of directly combining existing continual learning methods with SGG backbones but also demonstrate the effectiveness of our proposed approach, enhancing CSEGG efficiency while simultaneously preserving privacy and memory usage. All data and source code will be made public.
Adaptive Visual Scene Understanding: Incremental Scene Graph Generation
[ "Naitik Khandelwal", "Xiao Liu", "Mengmi Zhang" ]
NeurIPS.cc/2024/Conference
2310.01636
[ "https://github.com/zhanglab-deepneurocoglab/csegg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6jOScqwdHU
@inproceedings{ davis2024fisher, title={Fisher Flow Matching for Generative Modeling over Discrete Data}, author={Oscar Davis and Samuel Kessler and Mircea Petrache and Ismail Ilkan Ceylan and Michael M. Bronstein and Joey Bose}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6jOScqwdHU} }
Generative modeling over discrete data has recently seen numerous success stories, with applications spanning language modeling, biological sequence design, and graph-structured molecular data. The predominant generative modeling paradigm for discrete data is still autoregressive, with more recent alternatives based on diffusion or flow-matching falling short of their impressive performance in continuous data settings, such as image or video generation. In this work, we introduce Fisher-Flow, a novel flow-matching model for discrete data. Fisher-Flow takes a manifestly geometric perspective by considering categorical distributions over discrete data as points residing on a statistical manifold equipped with its natural Riemannian metric: the \emph{Fisher-Rao metric}. As a result, we demonstrate that discrete data itself can be continuously reparameterised to points on the positive orthant of the $d$-hypersphere $\mathbb{S}^d_+$, which allows us to define flows that map any source distribution to a target in a principled manner by transporting mass along (closed-form) geodesics of $\mathbb{S}^d_+$. Furthermore, the learned flows in Fisher-Flow can be further bootstrapped by leveraging Riemannian optimal transport, leading to improved training dynamics. We prove that the gradient flow induced by Fisher-Flow is optimal in reducing the forward KL divergence. We evaluate Fisher-Flow on an array of synthetic and diverse real-world benchmarks, including designing DNA promoter and DNA enhancer sequences. Empirically, we find that Fisher-Flow improves over prior diffusion and flow-matching models on these benchmarks.
Fisher Flow Matching for Generative Modeling over Discrete Data
[ "Oscar Davis", "Samuel Kessler", "Mircea Petrache", "Ismail Ilkan Ceylan", "Michael M. Bronstein", "Joey Bose" ]
NeurIPS.cc/2024/Conference
2405.14664
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
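The reparameterization in the Fisher-Flow record above is the standard square-root embedding of the probability simplex, stated here as background rather than quoted from the paper: $p = (p_1, \dots, p_{d+1})$ maps to $\varphi(p) = (\sqrt{p_1}, \dots, \sqrt{p_{d+1}}) \in \mathbb{S}^d_+$, under which the Fisher-Rao metric becomes (up to scale) the round sphere metric. Geodesics are then the closed-form great-circle arcs $\gamma_t(u, v) = \frac{\sin((1-t)\theta)\,u + \sin(t\theta)\,v}{\sin\theta}$ with $\theta = \arccos\langle u, v\rangle$, along which flow-matching targets can be defined.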
null
https://openreview.net/forum?id=6hY60tkiEK
@inproceedings{ bhardwaj2024sparse, title={Sparse High Rank Adapters}, author={Kartikeya Bhardwaj and Nilesh Prasad Pandey and Sweta Priyadarshi and Viswanath Ganapathy and Shreya Kadambi and Rafael Esteves and Shubhankar Borse and Paul Whatmough and Risheek Garrepalli and Mart Van Baalen and Harris Teague and Markus Nagel}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6hY60tkiEK} }
Low Rank Adaptation (LoRA) has gained massive attention in recent generative AI research. One of the main advantages of LoRA is its ability to be fused with pretrained models, adding no overhead during inference. However, from a mobile deployment standpoint, we can either avoid inference overhead in the fused mode but lose the ability to switch adapters rapidly, or suffer significant (up to 30% higher) inference latency while enabling rapid switching in the unfused mode. LoRA also exhibits concept loss when multiple adapters are used concurrently. In this paper, we propose Sparse High Rank Adapters (SHiRA), a new paradigm which incurs no inference overhead, enables rapid switching, and significantly reduces concept loss. Specifically, SHiRA can be trained by directly tuning only 1-2% of the base model weights while leaving others unchanged. This results in a highly sparse adapter which can be switched directly in the fused mode. We further provide theoretical and empirical insights on how high sparsity in SHiRA can aid multi-adapter fusion by reducing concept loss. Our extensive experiments on LVMs and LLMs demonstrate that finetuning only a small fraction of the parameters in the base model significantly outperforms LoRA while enabling both rapid switching and multi-adapter fusion. Finally, we provide a latency- and memory-efficient SHiRA implementation based on the Parameter-Efficient Finetuning (PEFT) Library which trains at nearly the same speed as LoRA while consuming up to 16% lower peak GPU memory, thus making SHiRA easy to adopt for practical use cases. To demonstrate rapid switching benefits during inference, we show that loading SHiRA on a base model can be 5x-16x faster than LoRA fusion on a CPU.
Sparse High Rank Adapters
[ "Kartikeya Bhardwaj", "Nilesh Prasad Pandey", "Sweta Priyadarshi", "Viswanath Ganapathy", "Shreya Kadambi", "Rafael Esteves", "Shubhankar Borse", "Paul Whatmough", "Risheek Garrepalli", "Mart Van Baalen", "Harris Teague", "Markus Nagel" ]
NeurIPS.cc/2024/Conference
2406.13175
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
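A minimal sketch of the sparse-adapter idea in the SHiRA record above, assuming a fixed random mask over roughly 2% of the base weights; the paper also considers other selection strategies, and the loss, sizes, and gradient-masking step here are illustrative.

import torch

def shira_mask(weight, frac=0.02, seed=0):
    # select a fixed random ~2% of base-model entries to be trainable
    g = torch.Generator().manual_seed(seed)
    return torch.rand(weight.shape, generator=g) < frac

w = torch.nn.Parameter(torch.randn(256, 256))
mask = shira_mask(w)
(w ** 2).sum().backward()         # stand-in training loss
w.grad.mul_(mask.to(w.dtype))     # zero gradients outside the sparse adapter
# because updates touch the original weights directly, there is nothing
# to fuse at inference time, which is what makes adapter switching cheap.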
null
https://openreview.net/forum?id=6gzPSMUAz2
@inproceedings{ yu2024mates, title={{MATES}: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models}, author={Zichun Yu and Spandan Das and Chenyan Xiong}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6gzPSMUAz2} }
Pretraining data selection has the potential to improve language model pretraining efficiency by utilizing higher-quality data from massive web data corpora. Current data selection methods, which rely on either hand-crafted rules or larger reference models, are conducted statically and do not capture the evolving data preferences during pretraining. In this paper, we introduce *model-aware data selection with data influence models (MATES)*, where a data influence model continuously adapts to the evolving data preferences of the pretraining model and then selects the data most effective for the current pretraining progress. Specifically, we collect oracle data influence by locally probing the pretraining model and fine-tune a small data influence model to approximate it accurately. The data influence model then predicts data influence over the whole pretraining corpus and selects the most influential data for the next pretraining stage. Experiments of pretraining 410M and 1B models on the C4 dataset demonstrate that MATES significantly outperforms random data selection on extensive downstream tasks. It doubles the gains achieved by the state-of-the-art data selection approach that leverages larger reference models and reduces the total FLOPs required to reach certain performances by half. Further analyses validate the effectiveness of the locally probed oracle data influence and the approximation with data influence models. Our code is open-sourced at https://github.com/cxcscmu/MATES.
MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models
[ "Zichun Yu", "Spandan Das", "Chenyan Xiong" ]
NeurIPS.cc/2024/Conference
2406.06046
[ "https://github.com/cxcscmu/mates" ]
https://huggingface.co/papers/2406.06046
0
0
0
3
[ "yuzc19/bert-base-uncased-data-influence-model-lambada", "yuzc19/pythia-410m-mates" ]
[]
[]
[ "yuzc19/bert-base-uncased-data-influence-model-lambada", "yuzc19/pythia-410m-mates" ]
[]
[]
1
poster
null
https://openreview.net/forum?id=6gMnj9oc6d
@inproceedings{ chua2024scalable, title={Scalable {DP}-{SGD}: Shuffling vs. Poisson Subsampling}, author={Lynn Chua and Badih Ghazi and Pritish Kamath and Ravi Kumar and Pasin Manurangsi and Amer Sinha and Chiyuan Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6gMnj9oc6d} }
We provide new lower bounds on the privacy guarantee of _multi-epoch_ Adaptive Batch Linear Queries (ABLQ) mechanism with _shuffled batch sampling_, demonstrating substantial gaps when compared to _Poisson subsampling_; prior analysis was limited to a single epoch. Since the privacy analysis of Differentially Private Stochastic Gradient Descent (DP-SGD) is obtained by analyzing the ABLQ mechanism, this brings into serious question the common practice of implementing Shuffling based DP-SGD, but reporting privacy parameters as if Poisson subsampling was used. To understand the impact of this gap on the utility of trained machine learning models, we introduce a novel practical approach to implement Poisson subsampling _at scale_ using massively parallel computation, and efficiently train models with the same. We provide a comparison between the utility of models trained with Poisson subsampling based DP-SGD, and the optimistic estimates of utility when using shuffling, via our new lower bounds on the privacy guarantee of ABLQ with shuffling.
Scalable DP-SGD: Shuffling vs. Poisson Subsampling
[ "Lynn Chua", "Badih Ghazi", "Pritish Kamath", "Ravi Kumar", "Pasin Manurangsi", "Amer Sinha", "Chiyuan Zhang" ]
NeurIPS.cc/2024/Conference
2411.04205
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
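The two sampling schemes contrasted in the record above differ in a way that is easy to state in code; a small illustration follows, with all parameter names assumed. DP-SGD privacy accounting typically assumes the first scheme, while implementations often use the second.

import numpy as np

def poisson_batches(n, q, steps, rng):
    # each example joins each batch independently with probability q,
    # so batch sizes are random, Binomial(n, q)
    return [np.flatnonzero(rng.random(n) < q) for _ in range(steps)]

def shuffled_batches(n, batch_size, rng):
    # one pass over a random permutation, fixed-size batches
    perm = rng.permutation(n)
    return [perm[i:i + batch_size] for i in range(0, n, batch_size)]

rng = np.random.default_rng(0)
print([len(b) for b in poisson_batches(1000, 0.01, 5, rng)])   # varies near 10
print([len(b) for b in shuffled_batches(1000, 10, rng)][:5])   # always 10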
null
https://openreview.net/forum?id=6gIcnPvw2x
@inproceedings{ jung2024complete, title={Complete Graphical Criterion for Sequential Covariate Adjustment in Causal Inference}, author={Yonghan Jung and Min Woo Park and Sanghack Lee}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6gIcnPvw2x} }
Covariate adjustment, also known as back-door adjustment, is a fundamental tool in causal inference. Although a sound and complete graphical identification criterion, known as the adjustment criterion (Shpitser, 2010), exists for static contexts, sequential contexts present challenges. Current practices, such as the sequential back-door adjustment (Pearl, 1995) or multi-outcome sequential back-door adjustment (Jung, 2020), are sound but incomplete; i.e., there are graphical scenarios where the causal effect is expressible via covariate adjustment, yet these criteria do not cover them. In this paper, we exemplify this incompleteness and then present the *sequential adjustment criterion*, a sound and complete criterion for sequential covariate adjustment. We provide a constructive sequential adjustment criterion that identifies a set that satisfies the sequential adjustment criterion if and only if the causal effect can be expressed as a sequential covariate adjustment. Finally, we present an algorithm for identifying a *minimal* sequential covariate adjustment set, which optimizes efficiency by ensuring that no unnecessary vertices are included.
Complete Graphical Criterion for Sequential Covariate Adjustment in Causal Inference
[ "Yonghan Jung", "Min Woo Park", "Sanghack Lee" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6eoGVqMiIj
@inproceedings{ ai2024dreamclear, title={DreamClear: High-Capacity Real-World Image Restoration with Privacy-Safe Dataset Curation}, author={Yuang Ai and Xiaoqiang Zhou and Huaibo Huang and Xiaotian Han and Zhengyu Chen and Quanzeng You and Hongxia Yang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6eoGVqMiIj} }
Image restoration (IR) in real-world scenarios presents significant challenges due to the lack of high-capacity models and comprehensive datasets. To tackle these issues, we present a dual strategy: GenIR, an innovative data curation pipeline, and DreamClear, a cutting-edge Diffusion Transformer (DiT)-based image restoration model. **GenIR**, our pioneering contribution, is a dual-prompt learning pipeline that overcomes the limitations of existing datasets, which typically comprise only a few thousand images and thus offer limited generalizability for larger models. GenIR streamlines the process into three stages: image-text pair construction, dual-prompt based fine-tuning, and data generation \& filtering. This approach circumvents the laborious data crawling process, ensuring copyright compliance and providing a cost-effective, privacy-safe solution for IR dataset construction. The result is a large-scale dataset of one million high-quality images. Our second contribution, **DreamClear**, is a DiT-based image restoration model. It utilizes the generative priors of text-to-image (T2I) diffusion models and the robust perceptual capabilities of multi-modal large language models (MLLMs) to achieve photorealistic restoration. To boost the model's adaptability to diverse real-world degradations, we introduce the Mixture of Adaptive Modulator (MoAM). It employs token-wise degradation priors to dynamically integrate various restoration experts, thereby expanding the range of degradations the model can address. Our exhaustive experiments confirm DreamClear's superior performance, underlining the efficacy of our dual strategy for real-world image restoration. Code and pre-trained models are available at: https://github.com/shallowdream204/DreamClear.
DreamClear: High-Capacity Real-World Image Restoration with Privacy-Safe Dataset Curation
[ "Yuang Ai", "Xiaoqiang Zhou", "Huaibo Huang", "Xiaotian Han", "Zhengyu Chen", "Quanzeng You", "Hongxia Yang" ]
NeurIPS.cc/2024/Conference
2410.18666
[ "https://github.com/shallowdream204/dreamclear" ]
https://huggingface.co/papers/2410.18666
3
18
3
7
[ "shallowdream204/DreamClear", "camenduru/DreamClear" ]
[]
[]
[ "shallowdream204/DreamClear", "camenduru/DreamClear" ]
[]
[]
1
poster
null
https://openreview.net/forum?id=6emETARnWi
@inproceedings{ ouyang2024transfer, title={Transfer Learning for Diffusion Models}, author={Yidong Ouyang and Liyan Xie and Hongyuan Zha and Guang Cheng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6emETARnWi} }
Diffusion models, a specific type of generative model, have achieved unprecedented performance in recent years and consistently produce high-quality synthetic samples. A critical prerequisite for their notable success lies in the presence of a substantial number of training samples, which can be impractical in real-world applications due to high collection costs or associated risks. Consequently, various finetuning and regularization approaches have been proposed to transfer knowledge from existing pre-trained models to specific target domains with limited data. This paper introduces the Transfer Guided Diffusion Process (TGDP), a novel approach distinct from conventional finetuning and regularization methods. We prove that the optimal diffusion model for the target domain integrates pre-trained diffusion models on the source domain with additional guidance from a domain classifier. We further extend TGDP to a conditional version for modeling the joint distribution of data and its corresponding labels, together with two additional regularization terms to enhance the model performance. We validate the effectiveness of TGDP on both simulated and real-world datasets.
Transfer Learning for Diffusion Models
[ "Yidong Ouyang", "Liyan Xie", "Hongyuan Zha", "Guang Cheng" ]
NeurIPS.cc/2024/Conference
2405.16876
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
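The structure described in the TGDP record above, a frozen source diffusion model plus domain-classifier guidance, is consistent with the standard density-ratio identity, given here as background rather than as the paper's exact formulation: if $d(x) \approx P(\text{target} \mid x)$ is a classifier trained to distinguish target from source data, then $\nabla_x \log p_T(x) = \nabla_x \log p_S(x) + \nabla_x \log \frac{d(x)}{1 - d(x)}$, so the target score needed for sampling is the pre-trained source score corrected by the classifier's log-odds gradient.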
null
https://openreview.net/forum?id=6ejpSVIiIl
@inproceedings{ chen2024classifier, title={Classifier Clustering and Feature Alignment for Federated Learning under Distributed Concept Drift}, author={Junbao Chen and Jingfeng Xue and Yong Wang and Zhenyan Liu and Lu Huang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6ejpSVIiIl} }
Data heterogeneity is one of the key challenges in federated learning, and many efforts have been devoted to tackling this problem. However, distributed concept drift with data heterogeneity, where clients may additionally experience different concept drifts, is a largely unexplored area. In this work, we focus on real drift, where the conditional distribution $P(\mathcal{Y}|\mathcal{X})$ changes. We first study how distributed concept drift affects the model training and find that local classifier plays a critical role in drift adaptation. Moreover, to address data heterogeneity, we study the feature alignment under distributed concept drift, and find two factors that are crucial for feature alignment: the conditional distribution $P(\mathcal{Y}|\mathcal{X})$ and the degree of data heterogeneity. Motivated by the above findings, we propose FedCCFA, a federated learning framework with classifier clustering and feature alignment. To enhance collaboration under distributed concept drift, FedCCFA clusters local classifiers at class-level and generates clustered feature anchors according to the clustering results. Assisted by these anchors, FedCCFA adaptively aligns clients' feature spaces based on the entropy of label distribution $P(\mathcal{Y})$, alleviating the inconsistency in feature space. Our results demonstrate that FedCCFA significantly outperforms existing methods under various concept drift settings. Code is available at https://github.com/Chen-Junbao/FedCCFA.
Classifier Clustering and Feature Alignment for Federated Learning under Distributed Concept Drift
[ "Junbao Chen", "Jingfeng Xue", "Yong Wang", "Zhenyan Liu", "Lu Huang" ]
NeurIPS.cc/2024/Conference
2410.18478
[ "https://github.com/chen-junbao/fedccfa" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6dYBP3BIwx
@inproceedings{ wang2024cogvlm, title={Cog{VLM}: Visual Expert for Pretrained Language Models}, author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Song XiXuan and Jiazheng Xu and Keqin Chen and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6dYBP3BIwx} }
We introduce CogVLM, a powerful open-source visual language foundation model. Unlike the popular \emph{shallow alignment} method, which maps image features into the input space of the language model, CogVLM bridges the gap between the frozen pretrained language model and the image encoder with a trainable visual expert module in the attention and FFN layers. As a result, CogVLM enables deep fusion of vision and language features without sacrificing any performance on NLP tasks. CogVLM-17B achieves state-of-the-art performance on 17 classic cross-modal benchmarks, including 1) image captioning datasets: NoCaps, Flickr30k; 2) VQA datasets: OKVQA, TextVQA, OCRVQA, ScienceQA; 3) LVLM benchmarks: MM-Vet, MMBench, SEED-Bench, LLaVA-Bench, POPE, MMMU, MathVista; and 4) visual grounding datasets: RefCOCO, RefCOCO+, RefCOCOg, Visual7W. Code and checkpoints are available on GitHub.
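The visual expert idea, parallel trainable projections for image tokens next to the frozen language-model projections, can be sketched as below. This is a simplified stand-in for the released code: the real module applies such routing inside the QKV and FFN layers, and all names here are illustrative.

```python
import torch
import torch.nn as nn

class VisualExpertLinear(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.text_proj = nn.Linear(d_model, d_model)   # frozen LM projection
        self.image_proj = nn.Linear(d_model, d_model)  # trainable visual expert
        for p in self.text_proj.parameters():
            p.requires_grad = False

    def forward(self, x, image_mask):
        # x: [batch, seq, d_model]; image_mask: bool [batch, seq], True on
        # image tokens. Text tokens keep the frozen path, preserving the
        # base LM's NLP behavior; image tokens get dedicated capacity.
        return torch.where(image_mask.unsqueeze(-1),
                           self.image_proj(x), self.text_proj(x))
```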
CogVLM: Visual Expert for Pretrained Language Models
[ "Weihan Wang", "Qingsong Lv", "Wenmeng Yu", "Wenyi Hong", "Ji Qi", "Yan Wang", "Junhui Ji", "Zhuoyi Yang", "Lei Zhao", "Song XiXuan", "Jiazheng Xu", "Keqin Chen", "Bin Xu", "Juanzi Li", "Yuxiao Dong", "Ming Ding", "Jie Tang" ]
NeurIPS.cc/2024/Conference
2311.03079
[ "https://github.com/thudm/cogvlm" ]
https://huggingface.co/papers/2311.03079
7
23
2
16
[ "THUDM/glm-4v-9b", "THUDM/visualglm-6b", "THUDM/cogvlm2-llama3-chat-19B", "THUDM/cogvlm-chat-hf", "THUDM/cogvlm2-llama3-chinese-chat-19B", "THUDM/cogagent-chat-hf", "THUDM/cogagent-vqa-hf", "THUDM/cogvlm2-llama3-chat-19B-int4", "THUDM/cogvlm-grounding-generalist-hf", "THUDM/cogvlm2-llama3-chinese-chat-19B-int4", "Rodeszones/CogVLM-grounding-generalist-hf-quant4", "THUDM/cogvlm-base-490-hf", "THUDM/cogvlm-base-224-hf", "vcadillo/glm-4v-9b-4-bits", "THUDM/cogvlm2-llama3-chat-19B-tgi", "Sundogs/image_to_text", "THUDM/cogvlm-grounding-base-hf", "THUDM/cogvlm2-llama3-chinese-chat-19B-tgi", "grim3000/cogvlm-chat-hf", "Starbourne/cogvlm-chat-hf", "Starbourne/cogvlm-grounding-generalist-hf", "TusharGoel/cogvlm2-19b-english-chat", "baiall/1" ]
[ "Salesforce/blip3-kale", "foundation-multimodal-models/DetailCaps-4870" ]
[ "vilarin/VL-Chatbox", "ShuoChen20/DimensionX", "1aurent/cogvlm_captionner", "Jimhugging/GLM-4-DOC", "thwri/CogFlorence-2", "Shinguitar/kohya_ss", "zengxi123/kohya_ss", "ABCCCYYY/kohya_ss", "frappuccino/GPT4V-Image-Captioner", "mrbeliever/Captain", "Jimhugging/CogVLM2-4-Doc", "humblemikey/thwri-CogFlorence-2", "Bonnie422/Glm-4v-9b_for_report_analyze", "Jar2023/basic_demo", "GrahamY/Chatbot_ChatGLM4", "jackbond2024/glm4", "muxingyin/VisualGLM-6B", "gangbosi/QYChatBot", "Havi999/FORAI", "gangbosi/ChatGLM-6B" ]
[ "THUDM/glm-4v-9b", "THUDM/visualglm-6b", "THUDM/cogvlm2-llama3-chat-19B", "THUDM/cogvlm-chat-hf", "THUDM/cogvlm2-llama3-chinese-chat-19B", "THUDM/cogagent-chat-hf", "THUDM/cogagent-vqa-hf", "THUDM/cogvlm2-llama3-chat-19B-int4", "THUDM/cogvlm-grounding-generalist-hf", "THUDM/cogvlm2-llama3-chinese-chat-19B-int4", "Rodeszones/CogVLM-grounding-generalist-hf-quant4", "THUDM/cogvlm-base-490-hf", "THUDM/cogvlm-base-224-hf", "vcadillo/glm-4v-9b-4-bits", "THUDM/cogvlm2-llama3-chat-19B-tgi", "Sundogs/image_to_text", "THUDM/cogvlm-grounding-base-hf", "THUDM/cogvlm2-llama3-chinese-chat-19B-tgi", "grim3000/cogvlm-chat-hf", "Starbourne/cogvlm-chat-hf", "Starbourne/cogvlm-grounding-generalist-hf", "TusharGoel/cogvlm2-19b-english-chat", "baiall/1" ]
[ "Salesforce/blip3-kale", "foundation-multimodal-models/DetailCaps-4870" ]
[ "vilarin/VL-Chatbox", "ShuoChen20/DimensionX", "1aurent/cogvlm_captionner", "Jimhugging/GLM-4-DOC", "thwri/CogFlorence-2", "Shinguitar/kohya_ss", "zengxi123/kohya_ss", "ABCCCYYY/kohya_ss", "frappuccino/GPT4V-Image-Captioner", "mrbeliever/Captain", "Jimhugging/CogVLM2-4-Doc", "humblemikey/thwri-CogFlorence-2", "Bonnie422/Glm-4v-9b_for_report_analyze", "Jar2023/basic_demo", "GrahamY/Chatbot_ChatGLM4", "jackbond2024/glm4", "muxingyin/VisualGLM-6B", "gangbosi/QYChatBot", "Havi999/FORAI", "gangbosi/ChatGLM-6B" ]
1
poster
null
https://openreview.net/forum?id=6cdYMkxxNt
@inproceedings{ mehra2024understanding, title={Understanding the Transferability of Representations via Task-Relatedness}, author={Akshay Mehra and Yunbei Zhang and Jihun Hamm}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6cdYMkxxNt} }
The growing popularity of transfer learning, due to the availability of models pre-trained on vast amounts of data, makes it imperative to understand when the knowledge of these pre-trained models can be transferred to obtain high-performing models on downstream target tasks. However, the exact conditions under which transfer learning succeeds in a cross-domain cross-task setting are still poorly understood. To bridge this gap, we propose a novel analysis of the transferability of the representations of pre-trained models to downstream tasks in terms of their relatedness to a given reference task. Our analysis leads to an upper bound on transferability in terms of task-relatedness, quantified using the difference between the class priors, label sets, and features of the two tasks. Our experiments using state-of-the-art pre-trained models show the effectiveness of task-relatedness in explaining transferability on various vision and language tasks. The efficient computability of task-relatedness, even without labels of the target task, and its high correlation with the model's accuracy after end-to-end fine-tuning on the target task make it a useful metric for transferability estimation. Our empirical results on selecting the best pre-trained model from a model zoo for a target task highlight its utility for practical problems.
Understanding the Transferability of Representations via Task-Relatedness
[ "Akshay Mehra", "Yunbei Zhang", "Jihun Hamm" ]
NeurIPS.cc/2024/Conference
2307.00823
[ "https://github.com/akshaymehra24/TaskTransferAnalysis" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6cWDg9t3z5
@inproceedings{ hanneke2024universal, title={Universal Rates of Empirical Risk Minimization}, author={Steve Hanneke and Mingyue Xu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6cWDg9t3z5} }
The well-known $\textit{empirical risk minimization}$ (ERM) principle is the basis of many widely used machine learning algorithms, and plays an essential role in the classical PAC theory. A common description of a learning algorithm's performance is its so-called “learning curve”, that is, the decay of the expected error as a function of the input sample size. As the PAC model fails to explain the behavior of learning curves, recent research has explored an alternative universal learning model and has ultimately revealed a distinction between optimal universal and uniform learning rates (Bousquet et al., 2021). However, a basic understanding of such differences with a particular focus on the ERM principle has yet to be developed. In this paper, we consider the problem of universal learning by ERM in the realizable case and study the possible universal rates. Our main result is a fundamental $\textit{tetrachotomy}$: there are only four possible universal learning rates by ERM, namely, the learning curves of any concept class learnable by ERM decay either at $e^{-n}$, $1/n$, $\log{(n)}/n$, or arbitrarily slow rates. Moreover, we provide a complete characterization of which concept classes fall into each of these categories, via new complexity structures. We also develop new combinatorial dimensions which supply sharp asymptotically-valid constant factors for these rates, whenever possible.
Universal Rates of Empirical Risk Minimization
[ "Steve Hanneke", "Mingyue Xu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6b6TfDBDOO
@inproceedings{ huang2024diffusion, title={Diffusion Imitation from Observation}, author={Bo-Ruei Huang and Chun-Kai Yang and Chun-Mao Lai and Dai-Jie Wu and Shao-Hua Sun}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6b6TfDBDOO} }
Learning from Observation (LfO) aims to imitate experts by learning from state-only demonstrations without requiring action labels. Existing adversarial imitation learning approaches learn a generator agent policy to produce state transitions that a discriminator, trained to classify agent and expert state transitions, cannot distinguish from the expert's. Despite their simple formulation, these methods are often sensitive to hyperparameters and brittle to train. Motivated by the recent success of diffusion models in generative modeling, we propose to integrate a diffusion model into the adversarial imitation learning from observation framework. Specifically, we employ a diffusion model to capture expert and agent transitions by generating the next state, given the current state. Then, we reformulate the learning objective to train the diffusion model as a binary classifier and use it to provide ``realness'' rewards for policy learning. Our proposed framework, Diffusion Imitation from Observation (DIFO), demonstrates superior performance in various continuous control domains, including navigation, locomotion, manipulation, and games.
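One plausible reading of the ``realness'' reward is sketched below: the conditional diffusion model denoises the next state given the current state, and a small per-sample denoising error marks the transition as expert-like. The noising step and the logit parameterization are simplified assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def diffusion_realness_reward(diffusion_model, s, s_next, t, noise):
    # Stand-in for the forward process q(x_t | x_0); a real DDPM would
    # scale by sqrt(alpha_bar_t) and sqrt(1 - alpha_bar_t).
    s_next_noisy = s_next + noise
    pred_noise = diffusion_model(s_next_noisy, s, t)  # conditioned on s
    err = F.mse_loss(pred_noise, noise, reduction="none").mean(dim=-1)
    logit = -err  # low denoising error -> "expert-like" transition
    return F.logsigmoid(logit)  # reward = log D(s, s')
```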
Diffusion Imitation from Observation
[ "Bo-Ruei Huang", "Chun-Kai Yang", "Chun-Mao Lai", "Dai-Jie Wu", "Shao-Hua Sun" ]
NeurIPS.cc/2024/Conference
2410.05429
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6aJrEC28hR
@inproceedings{ velasco2024graph, title={Graph neural networks and non-commuting operators}, author={Mauricio Velasco and Kaiying O'Hare and Bernardo Rychtenberg and Soledad Villar}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6aJrEC28hR} }
Graph neural networks (GNNs) provide state-of-the-art results in a wide variety of tasks which typically involve predicting features at the vertices of a graph. They are built from layers of graph convolutions, which serve as a powerful inductive bias for describing the flow of information among the vertices. Often, more than one data modality is available. This work considers a setting in which several graphs have the same vertex set and a common vertex-level learning task. This generalizes standard GNN models to GNNs with several graph operators that do not commute. We call this model a graph-tuple neural network (GtNN). In this work, we develop the mathematical theory to address the stability and transferability of GtNNs using properties of non-commuting non-expansive operators. We develop a limit theory of graphon-tuple neural networks and use it to prove a universal transferability theorem that guarantees that all graph-tuple neural networks are transferable on convergent graph-tuple sequences. In particular, there is no non-transferable energy under the convergence we consider here. Our theoretical results extend well-known transferability theorems for GNNs to the case of several simultaneous graphs (GtNNs) and provide a strict improvement on what is currently known even in the GNN case. We illustrate our theoretical results with simple experiments on synthetic and real-world data. To this end, we derive a training procedure that provably enforces the stability of the resulting model.
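A single GtNN layer can be sketched as a sum of per-operator graph convolutions over the shared vertex set. The parameterization below, one linear weight per operator plus a self term, is an assumption for illustration; for the stability results the operators should be normalized to be non-expansive.

```python
import torch
import torch.nn as nn

class GraphTupleConv(nn.Module):
    def __init__(self, in_dim, out_dim, num_operators):
        super().__init__()
        self.op_weights = nn.ModuleList(
            nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_operators)
        )
        self.self_weight = nn.Linear(in_dim, out_dim)

    def forward(self, x, operators):
        # x: [num_vertices, in_dim]; operators: list of [n, n] shift
        # matrices (e.g., normalized adjacencies) on the same vertex set.
        out = self.self_weight(x)
        for S, W in zip(operators, self.op_weights):
            out = out + W(S @ x)  # the S_k need not commute
        return torch.relu(out)
```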
Graph neural networks and non-commuting operators
[ "Mauricio Velasco", "Kaiying O'Hare", "Bernardo Rychtenberg", "Soledad Villar" ]
NeurIPS.cc/2024/Conference
2411.04265
[ "https://github.com/kkylie/gtnn_weighted_circulant_graphs" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6ZwJSk2kvU
@inproceedings{ li2024dreammeshd, title={DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation}, author={Zhiqi Li and Yiming Chen and Peidong Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6ZwJSk2kvU} }
Recent advancements in 2D/3D generative techniques have facilitated the generation of dynamic 3D objects from monocular videos. Previous methods mainly rely on implicit neural radiance fields (NeRF) or explicit Gaussian Splatting as the underlying representation, and struggle to achieve satisfactory spatial-temporal consistency and surface appearance. Drawing inspiration from modern 3D animation pipelines, we introduce DreamMesh4D, a novel framework combining a mesh representation with geometric skinning to generate high-quality 4D objects from a monocular video. Instead of utilizing a classical texture map for appearance, we bind Gaussian splats to the triangle faces of the mesh for differentiable optimization of both the texture and mesh vertices. In particular, DreamMesh4D begins with a coarse mesh obtained through an image-to-3D generation procedure. Sparse points are then uniformly sampled across the mesh surface and used to build a deformation graph that drives the motion of the 3D object, for the sake of computational efficiency and to provide additional constraints. At each step, transformations of the sparse control points are predicted using a deformation network, and the mesh vertices as well as the surface Gaussians are deformed via a novel geometric skinning algorithm. The skinning algorithm is a hybrid approach combining LBS (linear blending skinning) and DQS (dual-quaternion skinning), mitigating drawbacks associated with both approaches. The static surface Gaussians and mesh vertices, as well as the dynamic deformation network, are learned via a reference-view photometric loss, a score distillation loss, and other regularization losses in a two-stage manner. Extensive experiments demonstrate the superior performance of our method in terms of both rendering quality and spatial-temporal consistency. Furthermore, our method is compatible with modern graphics pipelines, showcasing its potential in the 3D gaming and film industries.
DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation
[ "Zhiqi Li", "Yiming Chen", "Peidong Liu" ]
NeurIPS.cc/2024/Conference
2410.06756
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6ZXrvoIox1
@inproceedings{ liu2024beating, title={Beating Adversarial Low-Rank {MDP}s with Unknown Transition and Bandit Feedback}, author={Haolin Liu and Zakaria Mhammedi and Chen-Yu Wei and Julian Zimmert}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6ZXrvoIox1} }
We consider regret minimization in low-rank MDPs with fixed transition and adversarial losses. Previous work has investigated this problem under either full-information loss feedback with unknown transitions (Zhao et al., 2024), or bandit loss feedback with known transitions (Foster et al., 2022). First, we improve the $poly(d, A, H)T^{5/6}$ regret bound of Zhao et al. (2024) to $poly(d, A, H)T^{2/3}$ for the full-information unknown transition setting, where $d$ is the rank of the transitions, $A$ is the number of actions, $H$ is the horizon length, and $T$ is the number of episodes. Next, we initiate the study on the setting with bandit loss feedback and unknown transitions. Assuming that the loss has a linear structure, we propose both model-based and model-free algorithms achieving $poly(d, A, H)T^{2/3}$ regret, though they are computationally inefficient. We also propose oracle-efficient model-free algorithms with $poly(d, A, H)T^{4/5}$ regret. We show that the linear structure is necessary for the bandit case—without structure on the reward function, the regret has to scale polynomially with the number of states. This is contrary to the full-information case (Zhao et al., 2024), where the regret can be independent of the number of states even for unstructured reward functions.
Beating Adversarial Low-Rank MDPs with Unknown Transition and Bandit Feedback
[ "Haolin Liu", "Zakaria Mhammedi", "Chen-Yu Wei", "Julian Zimmert" ]
NeurIPS.cc/2024/Conference
2411.06739
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6ZBHIEtdP4
@inproceedings{ meng2024pissa, title={Pi{SSA}: Principal Singular Values and Singular Vectors Adaptation of Large Language Models}, author={Fanxu Meng and Zhaohui Wang and Muhan Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6ZBHIEtdP4} }
To parameter-efficiently fine-tune (PEFT) large language models (LLMs), the low-rank adaptation (LoRA) method approximates the model changes $\Delta W \in \mathbb{R}^{m \times n}$ through the product of two matrices $A \in \mathbb{R}^{m \times r}$ and $B \in \mathbb{R}^{r \times n}$, where $r \ll \min(m, n)$, $A$ is initialized with Gaussian noise, and $B$ with zeros. LoRA **freezes the original model $W$** and **updates the "Noise \& Zero" adapter**, which may lead to slow convergence. To overcome this limitation, we introduce **P**r**i**ncipal **S**ingular values and **S**ingular vectors **A**daptation (PiSSA). PiSSA shares the same architecture as LoRA, but initializes the adapter matrices $A$ and $B$ with the principal components of the original matrix $W$, and puts the remaining components into a residual matrix $W^{res} \in \mathbb{R}^{m \times n}$ which is frozen during fine-tuning. Compared to LoRA, PiSSA **updates the principal components** while **freezing the "residual" parts**, allowing faster convergence and enhanced performance. Comparative experiments of PiSSA and LoRA across 11 different models, ranging from 184M to 70B and encompassing 5 NLG and 8 NLU tasks, reveal that PiSSA consistently outperforms LoRA under identical experimental setups. On the GSM8K benchmark, Gemma-7B fine-tuned with PiSSA achieves an accuracy of 77.7\%, surpassing LoRA's 74.53\% by 3.25\%. Due to the same architecture, PiSSA is also compatible with quantization to further reduce the memory requirement of fine-tuning. Compared to QLoRA, QPiSSA (PiSSA with 4-bit quantization) exhibits smaller quantization errors in the initial stages. Fine-tuning LLaMA-3-70B on GSM8K, QPiSSA attains an accuracy of 86.05\%, exceeding QLoRA's 81.73\%. Leveraging a fast SVD technique, PiSSA can be initialized in only a few seconds, presenting a negligible cost for transitioning from LoRA to PiSSA.
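The initialization is simple enough to state directly; the sketch below follows the construction in the abstract. The fast SVD the abstract mentions could be, e.g., torch.svd_lowrank, but a full SVD is used here for clarity.

```python
import torch

def pissa_init(W, r):
    # Split a pre-trained weight W (m x n) into a trainable principal
    # low-rank pair (A, B) and a frozen residual W_res = W - A @ B.
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    sqrt_S = S[:r].sqrt()
    A = U[:, :r] * sqrt_S              # (m, r): scaled left singular vectors
    B = sqrt_S.unsqueeze(1) * Vh[:r]   # (r, n): scaled right singular vectors
    W_res = W - A @ B                  # frozen during fine-tuning
    return A, B, W_res
```

Because the forward pass keeps the LoRA form $W^{res} + AB$, quantizing $W^{res}$ (as in QPiSSA) slots in exactly where QLoRA quantizes $W$.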
PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models
[ "Fanxu Meng", "Zhaohui Wang", "Muhan Zhang" ]
NeurIPS.cc/2024/Conference
2404.02948
[ "https://github.com/graphpku/pissa" ]
https://huggingface.co/papers/2404.02948
1
2
0
3
[ "sunatte/txt2sql", "MachoMaheen/devdock4bit" ]
[ "fxmeng/CodeFeedback-Python105K", "fxmeng/MetaMath-GSM240K", "fxmeng/MetaMath-MATH155K" ]
[ "Justinrune/LLaMA-Factory", "smarttang/blingsec" ]
[ "sunatte/txt2sql", "MachoMaheen/devdock4bit" ]
[ "fxmeng/CodeFeedback-Python105K", "fxmeng/MetaMath-GSM240K", "fxmeng/MetaMath-MATH155K" ]
[ "Justinrune/LLaMA-Factory", "smarttang/blingsec" ]
1
oral
null
https://openreview.net/forum?id=6YKMBUiIsG
@inproceedings{ hu2024inevitable, title={Inevitable Trade-off between Watermark Strength and Speculative Sampling Efficiency for Language Models}, author={Zhengmian Hu and Heng Huang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6YKMBUiIsG} }
Large language models are probabilistic models, and the process of generating content is essentially sampling from the output distribution of the language model. Existing watermarking techniques inject watermarks into the generated content without altering the output quality. On the other hand, existing acceleration techniques, specifically speculative sampling, leverage a draft model to speed up the sampling process while preserving the output distribution. However, there is no known method to simultaneously accelerate the sampling process and inject watermarks into the generated content. In this paper, we investigate this direction and find that the integration of watermarking and acceleration is non-trivial. We prove a no-go theorem, which states that it is impossible to simultaneously maintain the highest watermark strength and the highest sampling efficiency. Furthermore, we propose two methods that maintain either the sampling efficiency or the watermark strength, but not both. Our work provides a rigorous theoretical foundation for understanding the inherent trade-off between watermark strength and sampling efficiency in accelerating the generation of watermarked tokens for large language models. We also conduct numerical experiments to validate our theoretical findings and demonstrate the effectiveness of the proposed methods.
Inevitable Trade-off between Watermark Strength and Speculative Sampling Efficiency for Language Models
[ "Zhengmian Hu", "Heng Huang" ]
NeurIPS.cc/2024/Conference
2410.20418
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6YIpvnkjUK
@inproceedings{ salgia2024the, title={The Sample-Communication Complexity Trade-off in Federated Q-Learning}, author={Sudeep Salgia and Yuejie Chi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6YIpvnkjUK} }
We consider the problem of Federated Q-learning, where $M$ agents aim to collaboratively learn the optimal Q-function of an unknown infinite-horizon Markov Decision Process with finite state and action spaces. We investigate the trade-off between sample and communication complexity for the widely used class of intermittent communication algorithms. We first establish the converse result, showing that any Federated Q-learning algorithm that offers a linear speedup with respect to the number of agents in sample complexity needs to incur a communication cost of at least $\Omega(\frac{1}{1-\gamma})$, where $\gamma$ is the discount factor. We also propose a new Federated Q-learning algorithm, called Fed-DVR-Q, which is the first Federated Q-learning algorithm to simultaneously achieve order-optimal sample and communication complexities. Together, these results provide a complete characterization of the sample-communication complexity trade-off in Federated Q-learning.
The Sample-Communication Complexity Trade-off in Federated Q-Learning
[ "Sudeep Salgia", "Yuejie Chi" ]
NeurIPS.cc/2024/Conference
2408.16981
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=6W3LbkKriL
@inproceedings{ jin2024lighting, title={Lighting Every Darkness with 3{DGS}: Fast Training and Real-Time Rendering for {HDR} View Synthesis}, author={Xin Jin and Pengyi Jiao and Zheng-Peng Duan and Xingchao Yang and Chongyi Li and Chun-Le Guo and Bo Ren}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6W3LbkKriL} }
Volumetric rendering-based methods, like NeRF, excel in HDR view synthesis from RAW images, especially for nighttime scenes. However, they suffer from long training times and cannot perform real-time rendering due to dense sampling requirements. The advent of 3D Gaussian Splatting (3DGS) enables real-time rendering and faster training. However, implementing RAW image-based view synthesis directly using 3DGS is challenging due to its inherent drawbacks: 1) in nighttime scenes, extremely low SNR leads to poor structure-from-motion (SfM) estimation in distant views; 2) the limited representation capacity of the spherical harmonics (SH) function is unsuitable for RAW linear color space; and 3) inaccurate scene structure hampers downstream tasks such as refocusing. To address these issues, we propose LE3D (Lighting Every darkness with 3DGS). Our method introduces Cone Scatter Initialization to enrich the estimation of SfM and replaces SH with a Color MLP to represent the RAW linear color space. Additionally, we introduce depth distortion and near-far regularizations to improve the accuracy of scene structure for downstream tasks. These designs enable LE3D to perform real-time novel view synthesis, HDR rendering, refocusing, and tone-mapping changes. Compared to previous volumetric rendering-based methods, LE3D reduces training time to 1% and improves rendering speed by up to 4,000 times for 2K resolution images in terms of FPS. Code and viewer can be found at https://srameo.github.io/projects/le3d.
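Of the three designs, the Color MLP is the easiest to sketch: each Gaussian carries a small feature vector that a shared MLP decodes, together with the view direction, into linear-space RAW color. Layer sizes and the non-negativity activation below are illustrative guesses, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ColorMLP(nn.Module):
    def __init__(self, feat_dim=16, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),  # RAW color stays non-negative
        )

    def forward(self, gaussian_feats, view_dirs):
        # Per-Gaussian features plus unit view directions -> linear RAW color,
        # replacing the low-order spherical harmonics of vanilla 3DGS.
        return self.mlp(torch.cat([gaussian_feats, view_dirs], dim=-1))
```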
Lighting Every Darkness with 3DGS: Fast Training and Real-Time Rendering for HDR View Synthesis
[ "Xin Jin", "Pengyi Jiao", "Zheng-Peng Duan", "Xingchao Yang", "Chongyi Li", "Chun-Le Guo", "Bo Ren" ]
NeurIPS.cc/2024/Conference
2406.06216
[ "https://github.com/srameo/le3d" ]
https://huggingface.co/papers/2406.06216
2
19
4
7
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=6VVgAgVfxW
@inproceedings{ d{\"o}nmez2024teamfictitious, title={Team-Fictitious Play for Reaching Team-Nash Equilibrium in Multi-team Games}, author={Ahmed Said D{\"o}nmez and Y{\"u}ksel Arslanta{\c{s}} and Muhammed O. Sayin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6VVgAgVfxW} }
Multi-team games, prevalent in robotics and resource management, involve team members striving for a joint best response against other teams. Team-Nash equilibrium (TNE) predicts the outcomes of such coordinated interactions. However, can teams of self-interested agents reach TNE? We introduce Team-Fictitious Play (Team-FP), a new variant of fictitious play where agents respond to the last actions of team members and the beliefs formed about other teams, with some inertia in action updates. This design is essential for team coordination beyond classical fictitious play dynamics. We focus on zero-sum potential team games (ZSPTGs), where teams can interact pairwise while the team members do not necessarily have identical payoffs. We show that Team-FP converges to a near-TNE in ZSPTGs with a quantifiable error bound. We extend Team-FP dynamics to multi-team Markov games for model-based and model-free cases. The convergence analysis tackles the challenge of non-stationarity induced by evolving opponent strategies based on the optimal coupling lemma and stochastic differential inclusion approximation methods. Our work strengthens the foundation for using TNE to predict the behavior of decentralized teams and offers a practical rule for team learning in multi-team environments. We provide extensive simulations of Team-FP dynamics and compare its performance with other widely studied dynamics such as smooth fictitious play and multiplicative weights update. We further explore how different parameters impact the speed of convergence.
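A single-agent Team-FP step can be sketched as: nudge the empirical belief toward the last observed opposing-team action, best-respond to that belief, and keep the previous action with some inertia probability. The Q-value oracle and step sizes are placeholders, and the sketch omits the conditioning on teammates' last actions that the full dynamics use.

```python
import numpy as np

def team_fp_step(belief, prev_action, opp_action, q_values, rng,
                 step_size=0.1, inertia=0.8):
    # belief: empirical frequencies over the opposing team's joint actions.
    one_hot = np.zeros_like(belief)
    one_hot[opp_action] = 1.0
    belief = (1 - step_size) * belief + step_size * one_hot
    # Best response to the belief; q_values: [num_actions, num_opp_actions].
    best = int(np.argmax(q_values @ belief))
    # Inertia: with high probability, repeat the previous action.
    action = prev_action if rng.random() < inertia else best
    return belief, action
```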
Team-Fictitious Play for Reaching Team-Nash Equilibrium in Multi-team Games
[ "Ahmed Said Dönmez", "Yüksel Arslantaş", "Muhammed O. Sayin" ]
NeurIPS.cc/2024/Conference
2402.02147
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6U8iV9HVpS
@inproceedings{ qi2024robust, title={Robust Neural Contextual Bandit against Adversarial Corruptions}, author={Yunzhe Qi and Yikun Ban and Arindam Banerjee and Jingrui He}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6U8iV9HVpS} }
Contextual bandit algorithms aim to identify the optimal arm with the highest reward among a set of candidates, based on the accessible contextual information. Among these algorithms, neural contextual bandit methods have generally shown superior performance to their linear and kernel counterparts, due to the representation power of neural networks. However, similar to other neural network applications, neural bandit algorithms can be vulnerable to adversarial attacks or corruptions on the received labels (i.e., arm rewards), which can lead to unexpected performance degradation without proper treatment. As a result, it is necessary to improve the robustness of neural bandit models against potential reward corruptions. In this work, we propose a novel neural contextual bandit algorithm named R-NeuralUCB, which utilizes a novel context-aware Gradient Descent (GD) training strategy to improve robustness against adversarial reward corruptions. Under over-parameterized neural network settings, we provide a regret analysis for R-NeuralUCB to quantify reward corruption impacts, without the commonly adopted arm separateness assumption in existing neural bandit works. We also conduct experiments against baselines on real data sets under different scenarios, in order to demonstrate the effectiveness of our proposed R-NeuralUCB.
Robust Neural Contextual Bandit against Adversarial Corruptions
[ "Yunzhe Qi", "Yikun Ban", "Arindam Banerjee", "Jingrui He" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6U5fCHIWOC
@inproceedings{ andreeva2024topological, title={Topological Generalization Bounds for Discrete-Time Stochastic Optimization Algorithms}, author={Rayna Andreeva and Benjamin Dupuis and Rik Sarkar and Tolga Birdal and Umut Simsekli}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6U5fCHIWOC} }
We present a novel set of rigorous and computationally efficient topology-based complexity notions that exhibit a strong correlation with the generalization gap in modern deep neural networks (DNNs). DNNs show remarkable generalization properties, yet the source of these capabilities remains elusive, defying the established statistical learning theory. Recent studies have revealed that properties of training trajectories can be indicative of generalization. Building on this insight, state-of-the-art methods have leveraged the topology of these trajectories, particularly their fractal dimension, to quantify generalization. Most existing works compute this quantity by assuming continuous- or infinite-time training dynamics, complicating the development of practical estimators capable of accurately predicting generalization without access to test data. In this paper, we respect the discrete-time nature of training trajectories and investigate the underlying topological quantities that can be amenable to topological data analysis tools. This leads to a new family of reliable topological complexity measures that provably bound the generalization error, eliminating the need for restrictive geometric assumptions. These measures are computationally friendly, enabling us to propose simple yet effective algorithms for computing generalization indices. Moreover, our flexible framework can be extended to different domains, tasks, and architectures. Our experimental results demonstrate that our new complexity measures exhibit a strong correlation with generalization error in industry-standard architectures such as transformers and deep graph networks. Our approach consistently outperforms existing topological bounds across a wide range of datasets, models, and optimizers, highlighting the practical relevance and effectiveness of our complexity measures.
Topological Generalization Bounds for Discrete-Time Stochastic Optimization Algorithms
[ "Rayna Andreeva", "Benjamin Dupuis", "Rik Sarkar", "Tolga Birdal", "Umut Simsekli" ]
NeurIPS.cc/2024/Conference
2407.08723
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6SSzMq3WTn
@inproceedings{ lee2024improved, title={Improved Regret of Linear Ensemble Sampling}, author={Harin Lee and Min-hwan Oh}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6SSzMq3WTn} }
In this work, we close the fundamental gap between theory and practice by providing an improved regret bound for linear ensemble sampling. We prove that with an ensemble size logarithmic in $T$, linear ensemble sampling can achieve a frequentist regret bound of $\tilde{\mathcal{O}}(d^{3/2}\sqrt{T})$, matching state-of-the-art results for randomized linear bandit algorithms, where $d$ and $T$ are the dimension of the parameter and the time horizon, respectively. Our approach introduces a general regret analysis framework for linear bandit algorithms. Additionally, we reveal a significant relationship between linear ensemble sampling and Linear Perturbed-History Exploration (LinPHE), showing that LinPHE is a special case of linear ensemble sampling when the ensemble size equals $T$. This insight allows us to derive a new regret bound of $\tilde{\mathcal{O}}(d^{3/2}\sqrt{T})$ for LinPHE, independent of the number of arms. Our contributions advance the theoretical foundation of ensemble sampling, bringing its regret bounds in line with the best known bounds for other randomized exploration algorithms.
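For concreteness, a generic linear ensemble sampling loop is sketched below: maintain $m$ perturbed regularized least-squares estimates, sample one uniformly each round, and play the arm it rates highest. The Gaussian perturbation shown is one common construction and may differ from the paper's exact algorithm; the result above says an ensemble size logarithmic in $T$ already suffices.

```python
import numpy as np

class LinearEnsembleSampling:
    def __init__(self, d, m, lam=1.0, noise_sd=1.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.V = lam * np.eye(d)      # regularized Gram matrix (shared)
        self.b = np.zeros((m, d))     # one perturbed target per member
        self.m = m
        self.noise_sd = noise_sd

    def select(self, arms):
        # arms: [K, d]; act greedily w.r.t. a uniformly sampled member.
        theta = np.linalg.solve(self.V, self.b[self.rng.integers(self.m)])
        return int(np.argmax(arms @ theta))

    def update(self, x, reward):
        self.V += np.outer(x, x)
        # Each member sees the reward plus its own Gaussian perturbation.
        z = self.rng.normal(0.0, self.noise_sd, size=self.m)
        self.b += (reward + z)[:, None] * x[None, :]
```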
Improved Regret of Linear Ensemble Sampling
[ "Harin Lee", "Min-hwan Oh" ]
NeurIPS.cc/2024/Conference
2411.03932
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6SRPizFuaE
@inproceedings{ wang2024taming, title={Taming Cross-Domain Representation Variance in Federated Prototype Learning with Heterogeneous Data Domains}, author={Lei Wang and Jieming Bian and Letian Zhang and Chen Chen and Jie Xu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=6SRPizFuaE} }
Federated learning (FL) allows collaborative machine learning training without sharing private data. While most FL methods assume identical data domains across clients, real-world scenarios often involve heterogeneous data domains. Federated Prototype Learning (FedPL) addresses this issue, using mean feature vectors as prototypes to enhance model generalization. However, existing FedPL methods create the same number of prototypes for each client, leading to cross-domain performance gaps and disparities for clients with varied data distributions. To mitigate cross-domain feature representation variance, we introduce FedPLVM, which establishes variance-aware dual-level prototype clustering and employs a novel $\alpha$-sparsity prototype loss. The dual-level prototype clustering strategy creates local clustered prototypes based on private data features, then performs global prototype clustering to reduce communication complexity and preserve local data privacy. The $\alpha$-sparsity prototype loss aligns samples from underrepresented domains, enhancing intra-class similarity and reducing inter-class similarity. Evaluations on the Digit-5, Office-10, and DomainNet datasets demonstrate our method's superiority over existing approaches.
Taming Cross-Domain Representation Variance in Federated Prototype Learning with Heterogeneous Data Domains
[ "Lei Wang", "Jieming Bian", "Letian Zhang", "Chen Chen", "Jie Xu" ]
NeurIPS.cc/2024/Conference
2403.09048
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster