bibtex_url (null) | proceedings (string, 42) | bibtext (string, 197–848) | abstract (string, 303–3.45k) | title (string, 10–159) | authors (sequence, 1–34, nullable) | id (44 classes) | arxiv_id (string, 0–10) | GitHub (sequence, 1) | paper_page (899 classes) | n_linked_authors (int64, -1 to 13) | upvotes (int64, -1 to 109) | num_comments (int64, -1 to 13) | n_authors (int64, -1 to 92) | Models (sequence, 0–100) | Datasets (sequence, 0–19) | Spaces (sequence, 0–100) | old_Models (sequence, 0–100) | old_Datasets (sequence, 0–19) | old_Spaces (sequence, 0–100) | paper_page_exists_pre_conf (int64, 0–1) | type (2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=NbFOrcwqbR | @inproceedings{
tu2024taming,
title={Taming Generative Diffusion Prior for Universal Blind Image Restoration},
author={Siwei Tu and Weidong Yang and Ben Fei},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NbFOrcwqbR}
} | Diffusion models have been widely utilized for image restoration. However, previous blind image restoration methods still need to assume the type of degradation model while leaving the parameters to be optimized, limiting their real-world applications. Therefore, we aim to tame the generative diffusion prior for universal blind image restoration, dubbed BIR-D, which utilizes an optimizable convolutional kernel to simulate the degradation model and dynamically updates the parameters of the kernel in the diffusion steps, enabling it to achieve blind image restoration results even in various complex situations. Besides, based on mathematical reasoning, we have provided an empirical formula for the choice of the adaptive guidance scale, eliminating the need for a grid search for the optimal parameter. Experimentally, our BIR-D has demonstrated superior practicality and versatility compared to off-the-shelf unsupervised methods across various tasks on both real-world and synthetic datasets, qualitatively and quantitatively. BIR-D is able to fulfill multi-guidance blind image restoration. Moreover, BIR-D can also restore images that undergo multiple and complicated degradations, demonstrating its practical applications. The code is available at https://github.com/Tusiwei/BIR-D. | Taming Generative Diffusion Prior for Universal Blind Image Restoration | [
"Siwei Tu",
"Weidong Yang",
"Ben Fei"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Nb5xlelV0C | @inproceedings{
he2024aid,
title={{AID}: Attention Interpolation of Text-to-Image Diffusion},
author={Qiyuan He and Jinghao Wang and Ziwei Liu and Angela Yao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Nb5xlelV0C}
} | Conditional diffusion models can create unseen images in various settings, aiding image interpolation. Interpolation in latent spaces is well-studied, but interpolation with specific conditions like text or image is less understood. Common approaches interpolate linearly in the conditioning space but tend to result in inconsistent images with poor fidelity. This work introduces a novel training-free technique named \textbf{Attention Interpolation via Diffusion (AID)}. AID has two key contributions: \textbf{1)} a fused inner/outer interpolated attention layer to boost image consistency and fidelity; and \textbf{2)} selection of interpolation coefficients via a beta distribution to increase smoothness. Additionally, we present an AID variant called \textbf{Prompt-guided Attention Interpolation via Diffusion (PAID)}, which \textbf{3)} treats interpolation as a condition-dependent generative process. Experiments demonstrate that our method achieves greater consistency, smoothness, and efficiency in condition-based interpolation, aligning closely with human preferences. Furthermore, PAID offers substantial benefits for compositional generation, controlled image editing, image morphing and image-controlled generation, all while remaining training-free. | AID: Attention Interpolation of Text-to-Image Diffusion | [
"Qiyuan He",
"Jinghao Wang",
"Ziwei Liu",
"Angela Yao"
] | NeurIPS.cc/2024/Conference | 2403.17924 | [
"https://github.com/qy-h00/attention-interpolation-diffusion"
] | https://huggingface.co/papers/2403.17924 | 1 | 0 | 0 | 4 | [] | [] | [
"king159/PAID",
"qyoo/AID-v2"
] | [] | [] | [
"king159/PAID",
"qyoo/AID-v2"
] | 1 | poster |
null | https://openreview.net/forum?id=NadTwTODgC | @inproceedings{
alonso2024diffusion,
title={Diffusion for World Modeling: Visual Details Matter in Atari},
author={Eloi Alonso and Adam Jelley and Vincent Micheli and Anssi Kanervisto and Amos Storkey and Tim Pearce and Fran{\c{c}}ois Fleuret},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NadTwTODgC}
} | World models constitute a promising approach for training reinforcement learning agents in a safe and sample-efficient manner. Recent world models predominantly operate on sequences of discrete latent variables to model environment dynamics. However, this compression into a compact discrete representation may ignore visual details that are important for reinforcement learning. Concurrently, diffusion models have become a dominant approach for image generation, challenging well-established methods modeling discrete latents. Motivated by this paradigm shift, we introduce DIAMOND (DIffusion As a Model Of eNvironment Dreams), a reinforcement learning agent trained in a diffusion world model. We analyze the key design choices that are required to make diffusion suitable for world modeling, and demonstrate how improved visual details can lead to improved agent performance. DIAMOND achieves a mean human normalized score of 1.46 on the competitive Atari 100k benchmark; a new best for agents trained entirely within a world model. We further demonstrate that DIAMOND's diffusion world model can stand alone as an interactive neural game engine by training on static *Counter-Strike: Global Offensive* gameplay. To foster future research on diffusion for world modeling, we release our code, agents, videos and playable world models at https://diamond-wm.github.io. | Diffusion for World Modeling: Visual Details Matter in Atari | [
"Eloi Alonso",
"Adam Jelley",
"Vincent Micheli",
"Anssi Kanervisto",
"Amos Storkey",
"Tim Pearce",
"François Fleuret"
] | NeurIPS.cc/2024/Conference | 2405.12399 | [
"https://github.com/eloialonso/diamond"
] | https://huggingface.co/papers/2405.12399 | 5 | 27 | 3 | 7 | [
"eloialonso/diamond"
] | [
"TeaPearce/CounterStrike_Deathmatch"
] | [] | [
"eloialonso/diamond"
] | [
"TeaPearce/CounterStrike_Deathmatch"
] | [] | 1 | oral |
null | https://openreview.net/forum?id=NaCXcUKihH | @inproceedings{
cagnetta2024towards,
title={Towards a theory of how the structure of language is acquired by deep neural networks},
author={Francesco Cagnetta and Matthieu Wyart},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NaCXcUKihH}
} | How much data is required to learn the structure of a language via next-token prediction? We study this question for synthetic datasets generated via a Probabilistic Context-Free Grammar (PCFG)---a hierarchical generative model that captures the tree-like structure of natural languages. We determine token-token correlations analytically in our model and show that they can be used to build a representation of the grammar's hidden variables, the longer the range the deeper the variable. In addition, a finite training set limits the resolution of correlations to an effective range, whose size grows with that of the training set. As a result, a Language Model trained with increasingly many examples can build a deeper representation of the grammar's structure, thus reaching good performance despite the high dimensionality of the problem. We conjecture that the relationship between training set size and effective range of correlations holds beyond our synthetic datasets, and we test it in a collection of lines from Shakespeare's plays. In particular, we show that reducing the input size leads to saturation of the test loss decay at a characteristic training set size that can be predicted in our framework. | Towards a theory of how the structure of language is acquired by deep neural networks | [
"Francesco Cagnetta",
"Matthieu Wyart"
] | NeurIPS.cc/2024/Conference | 2406.00048 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NWctqX77b3 | @inproceedings{
luo2024melloc,
title={Me{LL}oC: Lossless Compression with High-order Mechanism Learning},
author={Xinyue Luo and Jin Cheng and Yu Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NWctqX77b3}
} | Lossless compression of large-scale scientific floating-point data is critical yet challenging due to the presence of high-order information and noise that arises from model truncation and discretization errors. Existing entropy coding techniques fail to effectively leverage the mechanisms underlying the data generation process. This paper introduces MeLLoC (Mechanism Learning for Lossless Compression), a novel approach that combines high-order mechanism learning with classical encoding to enhance lossless compression for scientific data. The key idea is to treat the data as discrete samples from an underlying physical field described by differential equations and solve an inverse problem to identify the governing equation coefficients exhibiting more compressible numeric representations. Periodic extension techniques are employed to accelerate the decompression. Through extensive experiments on various scientific datasets, MeLLoC consistently outperforms state-of-the-art lossless compressors while offering compelling trade-offs between compression ratios and computational costs. This work opens up new avenues for exploiting domain knowledge and high-order information to improve data compression in scientific computing. | MeLLoC: Lossless Compression with High-order Mechanism Learning | [
"Xinyue Luo",
"Jin Cheng",
"Yu Chen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=NVl4SAmz5c | @inproceedings{
kalra2024why,
title={Why Warmup the Learning Rate? Underlying Mechanisms and Improvements},
author={Dayal Singh Kalra and Maissam Barkeshli},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NVl4SAmz5c}
} | In modern deep learning, it is common to warm up the learning rate $\eta$, often by a linear schedule between $\eta_{\text{init}} = 0$ and a predetermined target $\eta_{\text{trgt}}$. In this paper, we show through systematic experiments with SGD and Adam that the overwhelming benefit of warmup arises from allowing the network to tolerate larger $\eta_{\text{trgt}}$ by forcing the network to more well-conditioned areas of the loss landscape. The ability to handle larger target learning rates in turn makes hyperparameter tuning more robust while improving the final performance of the network. We uncover different regimes of operation during the warmup period, depending on whether the network training starts off in a progressive sharpening or sharpness reduction phase, which in turn depends on the initialization and parameterization. Using these insights, we show how $\eta_{\text{init}}$ can be properly chosen by utilizing the loss catapult mechanism, which saves on the number of warmup steps, in some cases completely eliminating the need for warmup. We also suggest an initialization for the variance in Adam, which provides benefits similar to warmup. | Why Warmup the Learning Rate? Underlying Mechanisms and Improvements | [
"Dayal Singh Kalra",
"Maissam Barkeshli"
] | NeurIPS.cc/2024/Conference | 2406.09405 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NVDYgEFXCy | @inproceedings{
jiang2024adaptive,
title={Adaptive and Optimal Second-order Optimistic Methods for Minimax Optimization},
author={Ruichen Jiang and Ali Kavis and Qiujiang Jin and sujay sanghavi and Aryan Mokhtari},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NVDYgEFXCy}
} | We propose adaptive, line-search-free second-order methods with optimal rate of convergence for solving convex-concave min-max problems. By means of an adaptive step size, our algorithms feature a simple update rule that requires solving only one linear system per iteration, eliminating the need for line-search or backtracking mechanisms. Specifically, we base our algorithms on the optimistic method and appropriately combine it with second-order information. Moreover, distinct from common adaptive schemes, we define the step size recursively as a function of the gradient norm and the prediction error in the optimistic update. We first analyze a variant where the step size requires knowledge of the Lipschitz constant of the Hessian. Under the additional assumption of Lipschitz continuous gradients, we further design a parameter-free version by tracking the Hessian Lipschitz constant locally and ensuring the iterates remain bounded. We also evaluate the practical performance of our algorithm by comparing it to existing second-order algorithms for minimax optimization. | Adaptive and Optimal Second-order Optimistic Methods for Minimax Optimization | [
"Ruichen Jiang",
"Ali Kavis",
"Qiujiang Jin",
"sujay sanghavi",
"Aryan Mokhtari"
] | NeurIPS.cc/2024/Conference | 2406.02016 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NU54MoKWlA | @inproceedings{
yoo2024neural,
title={Neural Pose Representation Learning for Generating and Transferring Non-Rigid Object Poses},
author={Seungwoo Yoo and Juil Koo and Kyeongmin Yeo and Minhyuk Sung},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NU54MoKWlA}
} | We propose a novel method for learning representations of poses for 3D deformable objects, which specializes in 1) disentangling pose information from the object's identity, 2) facilitating the learning of pose variations, and 3) transferring pose information to other object identities. Based on these properties, our method enables the generation of 3D deformable objects with diversity in both identities and poses, using variations of a single object. It does not require explicit shape parameterization such as skeletons or joints, point-level or shape-level correspondence supervision, or variations of the target object for pose transfer.
To achieve pose disentanglement, compactness for generative models, and transferability, we first design the pose extractor to represent the pose as a keypoint-based hybrid representation and the pose applier to learn an implicit deformation field. To better distill pose information from the object's geometry, we propose the implicit pose applier to output an intrinsic mesh property, the face Jacobian. Once the extracted pose information is transferred to the target object, the pose applier is fine-tuned in a self-supervised manner to better describe the target object's shapes with pose variations. The extracted poses are also used to train a cascaded diffusion model to enable the generation of novel poses.
Our experiments with the DeformThings4D and Human datasets demonstrate state-of-the-art performance in pose transfer and the ability to generate diverse deformed shapes with various objects and poses. | Neural Pose Representation Learning for Generating and Transferring Non-Rigid Object Poses | [
"Seungwoo Yoo",
"Juil Koo",
"Kyeongmin Yeo",
"Minhyuk Sung"
] | NeurIPS.cc/2024/Conference | 2406.09728 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NU3tE3lIqf | @inproceedings{
kulhanek2024wildgaussians,
title={WildGaussians: 3D Gaussian Splatting In the Wild},
author={Jonas Kulhanek and Songyou Peng and Zuzana Kukelova and Marc Pollefeys and Torsten Sattler},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NU3tE3lIqf}
} | While the field of 3D scene reconstruction is dominated by NeRFs due to their photorealistic quality, 3D Gaussian Splatting (3DGS) has recently emerged, offering similar quality with real-time rendering speeds. However, both methods primarily excel with well-controlled 3D scenes, while in-the-wild data - characterized by occlusions, dynamic objects, and varying illumination - remains challenging. NeRFs can adapt to such conditions easily through per-image embedding vectors, but 3DGS struggles due to its explicit representation and lack of shared parameters. To address this, we introduce WildGaussians, a novel approach to handle occlusions and appearance changes with 3DGS. By leveraging robust DINO features and integrating an appearance modeling module within 3DGS, our method achieves state-of-the-art results. We demonstrate that WildGaussians matches the real-time rendering speed of 3DGS while surpassing both 3DGS and NeRF baselines in handling in-the-wild data, all within a simple architectural framework. | WildGaussians: 3D Gaussian Splatting In the Wild | [
"Jonas Kulhanek",
"Songyou Peng",
"Zuzana Kukelova",
"Marc Pollefeys",
"Torsten Sattler"
] | NeurIPS.cc/2024/Conference | 2407.08447 | [
""
] | https://huggingface.co/papers/2407.08447 | 3 | 8 | 2 | 5 | [
"jkulhanek/wild-gaussians"
] | [
"jkulhanek/nerfonthego-undistorted"
] | [] | [
"jkulhanek/wild-gaussians"
] | [
"jkulhanek/nerfonthego-undistorted"
] | [] | 1 | poster |
null | https://openreview.net/forum?id=NTkYSWnVjl | @inproceedings{
song2024amnesia,
title={Amnesia as a Catalyst for Enhancing Black Box Pixel Attacks in Image Classification and Object Detection},
author={Dongsu Song and Daehwa Ko and Jay Hoon Jung},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NTkYSWnVjl}
} | It is well known that query-based attacks tend to have relatively higher success rates in adversarial black-box attacks. While research on black-box attacks is actively being conducted, relatively few studies have focused on pixel attacks that target only a limited number of pixels. In image classification, query-based pixel attacks often rely on patches, which heavily depend on randomness and neglect the fact that scattered pixels are more suitable for adversarial attacks. Moreover, to the best of our knowledge, query-based pixel attacks have not been explored in the field of object detection. To address these issues, we propose a novel pixel-based black-box attack called Remember and Forget Pixel Attack using Reinforcement Learning (RFPAR), consisting of two main components: the Remember and Forget processes. RFPAR mitigates randomness and avoids patch dependency by leveraging rewards generated through a one-step RL algorithm to perturb pixels. RFPAR effectively creates perturbed images that minimize the confidence scores while adhering to limited pixel constraints. Furthermore, we advance our proposed attack beyond image classification to object detection, where RFPAR reduces the confidence scores of detected objects to avoid detection. Experiments on the ImageNet-1K dataset for classification show that RFPAR outperformed state-of-the-art query-based pixel attacks. For object detection, using the MSCOCO dataset with YOLOv8 and DDQ, RFPAR demonstrates comparable mAP reduction to state-of-the-art query-based attacks while requiring fewer queries. Further experiments on the Argoverse dataset using YOLOv8 confirm that RFPAR effectively removed objects on a larger-scale dataset. Our code is available at https://github.com/KAU-QuantumAILab/RFPAR. | Amnesia as a Catalyst for Enhancing Black Box Pixel Attacks in Image Classification and Object Detection | [
"Dongsu Song",
"Daehwa Ko",
"Jay Hoon Jung"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=NTWXVvIXJM | @inproceedings{
chuang2024metadiffub,
title={Meta-Diffu\$B\$: A Contextualized Sequence-to-Sequence Text Diffusion Model with Meta-Exploration},
author={Yunyen Chuang and Hung-Min Hsu and Kevin Lin and Chen-Sheng Gu and Ling Zhen Li and Ray-I Chang and Hung-yi Lee},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NTWXVvIXJM}
} | The diffusion model, a new generative modeling paradigm, has achieved significant success in generating images, audio, video, and text. It has been adapted for sequence-to-sequence text generation (Seq2Seq) through DiffuSeq, termed the S2S-Diffusion model. Existing S2S-Diffusion models predominantly rely on fixed or hand-crafted rules to schedule noise during the diffusion and denoising processes. However, these models are limited by non-contextualized noise, which fails to fully consider the characteristics of Seq2Seq tasks. In this paper, we propose the Meta-Diffu$B$ framework—a novel scheduler-exploiter S2S-Diffusion paradigm designed to overcome the limitations of existing S2S-Diffusion models. We employ Meta-Exploration to train an additional scheduler model dedicated to scheduling contextualized noise for each sentence. Our exploiter model, an S2S-Diffusion model, leverages the noise scheduled by our scheduler model for updating and generation. Meta-Diffu$B$ achieves state-of-the-art performance compared to previous S2S-Diffusion models and fine-tuned pre-trained language models (PLMs) across four Seq2Seq benchmark datasets. We further investigate and visualize the impact of Meta-Diffu$B$'s noise scheduling on the generation of sentences with varying difficulties. Additionally, our scheduler model can function as a "plug-and-play" model to enhance DiffuSeq without the need for fine-tuning during the inference stage. | Meta-DiffuB: A Contextualized Sequence-to-Sequence Text Diffusion Model with Meta-Exploration | [
"Yunyen Chuang",
"Hung-Min Hsu",
"Kevin Lin",
"Chen-Sheng Gu",
"Ling Zhen Li",
"Ray-I Chang",
"Hung-yi Lee"
] | NeurIPS.cc/2024/Conference | 2410.13201 | [
"https://github.com/meta-diffub/meta-diffub"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NT8Z5NjwxF | @inproceedings{
wan2024dualdiffusion,
title={Dual-Diffusion for Binocular 3D Human Pose Estimation},
author={Xiaoyue Wan and Zhuo Chen and Bingzhi Duan and Xu Zhao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NT8Z5NjwxF}
} | Binocular 3D human pose estimation (HPE), reconstructing a 3D pose from 2D poses of two views, offers practical advantages by combining multiview geometry with the convenience of a monocular setup. However, compared to a multiview setup, the reduction in the number of cameras increases uncertainty in 3D reconstruction. To address this issue, we leverage the diffusion model, which has shown success in monocular 3D HPE by recovering 3D poses from noisy data with high uncertainty. Yet, the uncertainty distribution of initial 3D poses remains unknown. Considering that 3D errors stem from 2D errors within geometric constraints, we recognize that the uncertainties of 3D and 2D are integrated in a binocular configuration, with the initial 2D uncertainty being well-defined. Based on this insight, we propose Dual-Diffusion specifically for Binocular 3D HPE, simultaneously denoising the uncertainties in 2D and 3D, and recovering plausible and accurate results. Additionally, we introduce Z-embedding as an additional condition for denoising and implement baseline-width-related pose normalization to enhance the model flexibility for various baseline settings. This is crucial as 3D error influence factors encompass depth and baseline width. Extensive experiments validate the effectiveness of our Dual-Diffusion in 2D refinement and 3D estimation. The code and models are available at https://github.com/sherrywan/Dual-Diffusion. | Dual-Diffusion for Binocular 3D Human Pose Estimation | [
"Xiaoyue Wan",
"Zhuo Chen",
"Bingzhi Duan",
"Xu Zhao"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=NQCkNM6TES | @inproceedings{
lou2024harmonizing,
title={Harmonizing Stochasticity and Determinism: Scene-responsive Diverse Human Motion Prediction},
author={Zhenyu Lou and Qiongjie Cui and Tuo Wang and Zhenbo Song and Luoming Zhang and Cheng Cheng and Haofan Wang and Xu Tang and Huaxia Li and Hong Zhou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NQCkNM6TES}
} | Diverse human motion prediction (HMP) is a fundamental application in computer vision that has recently attracted considerable interest. Prior methods primarily focus on the stochastic nature of human motion, while neglecting the specific impact of the external environment, leading to pronounced artifacts in prediction when applied to real-world scenarios. To fill this gap, this work introduces a novel task: predicting diverse human motion within real-world 3D scenes. In contrast to prior works, it requires harmonizing the deterministic constraints imposed by the surrounding 3D scenes with the stochastic aspect of human motion. For this purpose, we propose DiMoP3D, a diverse motion prediction framework with 3D scene awareness, which leverages the 3D point cloud and observed sequence to generate diverse and high-fidelity predictions. DiMoP3D is able to comprehend the 3D scene, and determines the probable target objects and their desired interactive pose based on the historical motion. Then, it plans the obstacle-free trajectory towards these objects of interest, and generates diverse and physically-consistent future motions. On top of that, DiMoP3D identifies deterministic factors in the scene and integrates them into the stochastic modeling, making diverse HMP in realistic scenes a controllable stochastic generation process. On two real-captured benchmarks, DiMoP3D has demonstrated significant improvements over state-of-the-art methods, showcasing its effectiveness in generating diverse and physically-consistent motion predictions within real-world 3D environments. | Harmonizing Stochasticity and Determinism: Scene-responsive Diverse Human Motion Prediction | [
"Zhenyu Lou",
"Qiongjie Cui",
"Tuo Wang",
"Zhenbo Song",
"Luoming Zhang",
"Cheng Cheng",
"Haofan Wang",
"Xu Tang",
"Huaxia Li",
"Hong Zhou"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=NQB9myZksw | @inproceedings{
perugachi-diaz2024robustly,
title={Robustly overfitting latents for flexible neural image compression},
author={Yura Perugachi-Diaz and Arwin Gansekoele and Sandjai Bhulai},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NQB9myZksw}
} | Neural image compression has made a great deal of progress. State-of-the-art models are based on variational autoencoders and are outperforming classical models. Neural compression models learn to encode an image into a quantized latent representation that can be efficiently sent to the decoder, which decodes the quantized latent into a reconstructed image. While these models have proven successful in practice, they lead to sub-optimal results due to imperfect optimization and limitations in the encoder and decoder capacity. Recent work shows how to use stochastic Gumbel annealing (SGA) to refine the latents of pre-trained neural image compression models.
We extend this idea by introducing SGA+, which contains three different methods that build upon SGA.
We show how our method improves the overall compression performance in terms of the R-D trade-off, compared to its predecessors. Additionally, we show how refinement of the latents with our best-performing method improves the compression performance on both the Tecnick and CLIC datasets. Our method is deployed for a pre-trained hyperprior and for a more flexible model.
Further, we give a detailed analysis of our proposed methods and show that they are less sensitive to hyperparameter choices. Finally, we show how each method can be extended to three- instead of two-class rounding. | Robustly overfitting latents for flexible neural image compression | [
"Yura Perugachi-Diaz",
"Arwin Gansekoele",
"Sandjai Bhulai"
] | NeurIPS.cc/2024/Conference | 2401.17789 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NPu7Cdk2f9 | @inproceedings{
kang2024adaptive,
title={Adaptive Depth Networks with Skippable Sub-Paths},
author={Woochul Kang and Hyungseop Lee},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NPu7Cdk2f9}
} | Predictable adaptation of network depths can be an effective way to control inference latency and meet the resource conditions of various devices. However, previous adaptive depth networks do not provide general principles and a formal explanation of why and which layers can be skipped, and, hence, their approaches are hard to generalize and require long and complex training steps. In this paper, we present a practical approach to adaptive depth networks that is applicable to various networks with minimal training effort. In our approach, every hierarchical residual stage is divided into two sub-paths, and they are trained to acquire different properties through a simple self-distillation strategy. While the first sub-path is essential for hierarchical feature learning, the second one is trained to refine the learned features and minimize performance degradation if it is skipped. Unlike prior adaptive networks, our approach does not train every target sub-network in an iterative manner. At test time, however, we can connect these sub-paths in a combinatorial manner to select sub-networks of various accuracy-efficiency trade-offs from a single network. We provide a formal rationale for why the proposed training method can reduce overall prediction errors while minimizing the impact of skipping sub-paths. We demonstrate the generality and effectiveness of our approach with convolutional neural networks and transformers. | Adaptive Depth Networks with Skippable Sub-Paths | [
"Woochul Kang",
"Hyungseop Lee"
] | NeurIPS.cc/2024/Conference | 2312.16392 | [
"https://github.com/wchkang/depth"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NPKZF1WDjZ | @inproceedings{
xue2024decompose,
title={Decompose, Analyze and Rethink: Solving Intricate Problems with Human-like Reasoning Cycle},
author={Shangzi Xue and Zhenya Huang and Jiayu Liu and Xin Lin and Yuting Ning and Binbin Jin and Xin Li and Qi Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NPKZF1WDjZ}
} | In this paper, we introduce DeAR (_Decompose-Analyze-Rethink_), a framework that iteratively builds a reasoning tree to tackle intricate problems within a single large language model (LLM). Unlike approaches that extend or search for rationales, DeAR is characterized by 1) adopting a tree-based question decomposition manner to plan the organization of rationales, which mimics the logical planning inherent in human cognition; 2) globally updating the rationales at each reasoning step through natural language feedback. Specifically, the _Decompose_ stage decomposes the question into simpler sub-questions, storing them as new nodes; the _Analyze_ stage generates and self-checks rationales for sub-questions at each node level; and the _Rethink_ stage updates parent-node rationales based on feedback from their child nodes. By generating and updating the reasoning process from a more global perspective, DeAR constructs more adaptive and accurate logical structures for complex problems, facilitating timely error correction compared to rationale-extension and search-based approaches such as Tree-of-Thoughts (ToT) and Graph-of-Thoughts (GoT). We conduct extensive experiments on three reasoning benchmarks, including ScienceQA, StrategyQA, and GSM8K, which cover a variety of reasoning tasks, demonstrating that our approach significantly reduces logical errors and enhances performance across various LLMs. Furthermore, we validate that DeAR is an efficient method that achieves a superior trade-off between accuracy and reasoning time compared to ToT and GoT. | Decompose, Analyze and Rethink: Solving Intricate Problems with Human-like Reasoning Cycle | [
"Shangzi Xue",
"Zhenya Huang",
"Jiayu Liu",
"Xin Lin",
"Yuting Ning",
"Binbin Jin",
"Xin Li",
"Qi Liu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=NO9MSeZs6g | @inproceedings{
raman2024smoothed,
title={Smoothed Online Classification can be Harder than Batch Classification},
author={Vinod Raman and Unique Subedi and Ambuj Tewari},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NO9MSeZs6g}
} | We study online classification under smoothed adversaries. In this setting, at each time point, the adversary draws an example from a distribution that has a bounded density with respect to a fixed base measure, which is known apriori to the learner. For binary classification and scalar-valued regression, previous works [Haghtalab et al., 2020, Block et al., 2022] have shown that smoothed online learning is as easy as learning in the iid batch setting under PAC model. However, we show that smoothed online classification can be harder than the iid batch classification when the label space is unbounded. In particular, we construct a hypothesis class that is learnable in the iid batch setting under the PAC model but is not learnable under the smoothed online model. Finally, we identify a condition that ensures that the PAC learnability of a hypothesis class is sufficient for its smoothed online learnability. | Smoothed Online Classification can be Harder than Batch Classification | [
"Vinod Raman",
"Unique Subedi",
"Ambuj Tewari"
] | NeurIPS.cc/2024/Conference | 2405.15424 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NN9U0lEcAn | @inproceedings{
gong2024actfusion,
title={ActFusion: a Unified Diffusion Model for Action Segmentation and Anticipation},
author={Dayoung Gong and Suha Kwak and Minsu Cho},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NN9U0lEcAn}
} | Temporal action segmentation and long-term action anticipation are two popular vision tasks for the temporal analysis of actions in videos.
Despite their apparent relevance and potential complementarity, these two problems have been investigated as separate and distinct tasks. In this work, we tackle these two problems, action segmentation and action anticipation, jointly using a unified diffusion model dubbed ActFusion.
The key idea to unification is to train the model to effectively handle both visible and invisible parts of the sequence in an integrated manner;
the visible part is for temporal segmentation, and the invisible part is for future anticipation.
To this end, we introduce a new anticipative masking strategy during training in which a late part of the video frames is masked as invisible, and learnable tokens replace these frames to learn to predict the invisible future.
Experimental results demonstrate the bi-directional benefits between action segmentation and anticipation.
ActFusion achieves state-of-the-art performance across the standard benchmarks of 50 Salads, Breakfast, and GTEA, outperforming task-specific models on both tasks with a single unified model through joint learning. | ActFusion: a Unified Diffusion Model for Action Segmentation and Anticipation | [
"Dayoung Gong",
"Suha Kwak",
"Minsu Cho"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=NLqdudgBfy | @inproceedings{
wang2024understanding,
title={Understanding the Role of Equivariance in Self-supervised Learning},
author={Yifei Wang and Kaiwen Hu and Sharut Gupta and Ziyu Ye and Yisen Wang and Stefanie Jegelka},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NLqdudgBfy}
} | Contrastive learning has been a leading paradigm for self-supervised learning, but it is widely observed that it comes at the price of sacrificing useful features (\eg colors) by being invariant to data augmentations. Given this limitation, there has been a surge of interest in equivariant self-supervised learning (E-SSL) that learns features to be augmentation-aware. However, even for the simplest rotation prediction method, there is a lack of rigorous understanding of why, when, and how E-SSL learns useful features for downstream tasks. To bridge this gap between practice and theory, we establish an information-theoretic perspective to understand the generalization ability of E-SSL. In particular, we identify a critical explaining-away effect in E-SSL that creates a synergy between the equivariant and classification tasks. This synergy effect encourages models to extract class-relevant features to improve its equivariant prediction, which, in turn, benefits downstream tasks requiring semantic features. Based on this perspective, we theoretically analyze the influence of data transformations and reveal several principles for practical designs of E-SSL. Our theory not only aligns well with existing E-SSL methods but also sheds light on new directions by exploring the benefits of model equivariance. We believe that a theoretically grounded understanding on the role of equivariance would inspire more principled and advanced designs in this field. Code is available at
https://github.com/kaotty/Understanding-ESSL. | Understanding the Role of Equivariance in Self-supervised Learning | [
"Yifei Wang",
"Kaiwen Hu",
"Sharut Gupta",
"Ziyu Ye",
"Yisen Wang",
"Stefanie Jegelka"
] | NeurIPS.cc/2024/Conference | 2411.06508 | [
"https://github.com/kaotty/understanding-essl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NLmAGkN6nn | @inproceedings{
wu2024ptqdit,
title={{PTQ}4DiT: Post-training Quantization for Diffusion Transformers},
author={Junyi Wu and Haoxuan Wang and Yuzhang Shang and Mubarak Shah and Yan Yan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NLmAGkN6nn}
} | The recent introduction of Diffusion Transformers (DiTs) has demonstrated exceptional capabilities in image generation by using a different backbone architecture, departing from traditional U-Nets and embracing the scalable nature of transformers. Despite their advanced capabilities, the wide deployment of DiTs, particularly for real-time applications, is currently hampered by considerable computational demands at the inference stage. Post-training Quantization (PTQ) has emerged as a fast and data-efficient solution that can significantly reduce computation and memory footprint by using low-bit weights and activations. However, its applicability to DiTs has not yet been explored and faces non-trivial difficulties due to the unique design of DiTs. In this paper, we propose PTQ4DiT, a specifically designed PTQ method for DiTs. We discover two primary quantization challenges inherent in DiTs, notably the presence of salient channels with extreme magnitudes and the temporal variability in distributions of salient activation over multiple timesteps. To tackle these challenges, we propose Channel-wise Salience Balancing (CSB) and Spearman's $\rho$-guided Salience Calibration (SSC). CSB leverages the complementarity property of channel magnitudes to redistribute the extremes, alleviating quantization errors for both activations and weights. SSC extends this approach by dynamically adjusting the balanced salience to capture the temporal variations in activation. Additionally, to eliminate extra computational costs caused by PTQ4DiT during inference, we design an offline re-parameterization strategy for DiTs. Experiments demonstrate that our PTQ4DiT successfully quantizes DiTs to 8-bit precision (W8A8) while preserving comparable generation ability and further enables effective quantization to 4-bit weight precision (W4A8) for the first time. | PTQ4DiT: Post-training Quantization for Diffusion Transformers | [
"Junyi Wu",
"Haoxuan Wang",
"Yuzhang Shang",
"Mubarak Shah",
"Yan Yan"
] | NeurIPS.cc/2024/Conference | 2405.16005 | [
"https://github.com/adreamwu/ptq4dit"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NLUYZ4ZqNq | @inproceedings{
colombo2024saullmb,
title={Saul{LM}-54B \& Saul{LM}-141B: Scaling Up Domain Adaptation for the Legal Domain},
author={Pierre Colombo and Telmo Pires and Malik Boudiaf and Rui Filipe Coimbra Pereira de Melo and Gabriel Hautreux and Etienne Malaboeuf and Johanne Charpentier and Dominic Culver and Michael Desa},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NLUYZ4ZqNq}
} | In this paper, we introduce SaulLM-medium and SaulLM-large, two large language model (LLM) families tailored for the legal sector. These models, which feature architectures of 54 billion and 140 billion parameters, respectively, are based on the Mixtral architecture. The development of SaulLM-54B and SaulLM-140B is guided by large-scale domain adaptation, divided into three strategies: (1) the exploitation of continued pretraining involving a legal corpus that includes over $400$ billion tokens, (2) the implementation of a specialized legal instruction-following protocol, and (3) the alignment of model outputs with human preferences in legal interpretations. The integration of synthetically generated data in the second and third steps enhances the models' capabilities in interpreting and processing legal texts, effectively reaching state-of-the-art performance and outperforming all previous open-source models on LegalBench Instruct. This research thoroughly explores the trade-offs involved in domain-specific adaptation at this scale, offering insights that may inform future studies on domain adaptation using strong decoder models. Building upon SaulLM-7B, this study refines the approach to produce an LLM better equipped for legal tasks and domains. Additionally, we release base, instruct and aligned versions on top of SaulLM-medium and SaulLM-large under the MIT License to facilitate reuse and collaborative research. | SaulLM-54B & SaulLM-141B: Scaling Up Domain Adaptation for the Legal Domain | [
"Pierre Colombo",
"Telmo Pires",
"Malik Boudiaf",
"Rui Filipe Coimbra Pereira de Melo",
"Gabriel Hautreux",
"Etienne Malaboeuf",
"Johanne Charpentier",
"Dominic Culver",
"Michael Desa"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=NKzLqRgG45 | @inproceedings{
zhu2024parameterinverted,
title={Parameter-Inverted Image Pyramid Networks},
author={Xizhou Zhu and Xue Yang and Zhaokai Wang and Hao Li and Wenhan Dou and Junqi Ge and Lewei Lu and Yu Qiao and Jifeng Dai},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NKzLqRgG45}
} | Image pyramids are commonly used in modern computer vision tasks to obtain multi-scale features for precise understanding of images. However, image pyramids process multiple resolutions of images using the same large-scale model, which requires significant computational cost. To overcome this issue, we propose a novel network architecture known as the Parameter-Inverted Image Pyramid Networks (PIIP). Our core idea is to use models with different parameter sizes to process different resolution levels of the image pyramid, thereby balancing computational efficiency and performance. Specifically, the input to PIIP is a set of multi-scale images, where higher resolution images are processed by smaller networks. We further propose a feature interaction mechanism to allow features of different resolutions to complement each other and effectively integrate information from different spatial scales. Extensive experiments demonstrate that the PIIP achieves superior performance in tasks such as object detection, segmentation, and image classification, compared to traditional image pyramid methods and single-branch networks, while reducing computational cost. Notably, when applying our method on a large-scale vision foundation model InternViT-6B, we improve its performance by 1\%-2\% on detection and segmentation with only 40\%-60\% of the original computation. These results validate the effectiveness of the PIIP approach and provide a new technical direction for future vision computing tasks. | Parameter-Inverted Image Pyramid Networks | [
"Xizhou Zhu",
"Xue Yang",
"Zhaokai Wang",
"Hao Li",
"Wenhan Dou",
"Junqi Ge",
"Lewei Lu",
"Yu Qiao",
"Jifeng Dai"
] | NeurIPS.cc/2024/Conference | 2406.04330 | [
"https://github.com/opengvlab/piip"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=NKpPnb3YNg | @inproceedings{
zouitine2024timeconstrained,
title={Time-Constrained Robust {MDP}s},
author={Adil Zouitine and David Bertoin and Pierre Clavier and Matthieu Geist and Emmanuel Rachelson},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NKpPnb3YNg}
} | Robust reinforcement learning is essential for deploying reinforcement learning algorithms in real-world scenarios where environmental uncertainty predominates.
Traditional robust reinforcement learning often depends on rectangularity assumptions, where adverse probability measures of outcome states are assumed to be independent across different states and actions.
This assumption, rarely fulfilled in practice, leads to overly conservative policies.
To address this problem, we introduce a new time-constrained robust MDP (TC-RMDP) formulation that considers multifactorial, correlated, and time-dependent disturbances, thus more accurately reflecting real-world dynamics. This formulation goes beyond the conventional rectangularity paradigm, offering new perspectives and expanding the analytical framework for robust RL.
We propose three distinct algorithms, each using varying levels of environmental information, and evaluate them extensively on continuous control benchmarks.
Our results demonstrate that these algorithms yield an efficient tradeoff between performance and robustness, outperforming traditional deep robust RL methods in time-constrained environments while preserving robustness in classical benchmarks.
This study revisits the prevailing assumptions in robust RL and opens new avenues for developing more practical and realistic RL applications. | Time-Constrained Robust MDPs | [
"Adil Zouitine",
"David Bertoin",
"Pierre Clavier",
"Matthieu Geist",
"Emmanuel Rachelson"
] | NeurIPS.cc/2024/Conference | 2406.08395 | [
""
] | https://huggingface.co/papers/2406.08395 | 1 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=NKPXHzYusG | @inproceedings{
wu2024videollmmod,
title={Video{LLM}-MoD: Efficient Video-Language Streaming with Mixture-of-Depths Vision Computation},
author={Shiwei Wu and Joya Chen and Kevin Qinghong Lin and Qimeng Wang and Yan Gao and Qianli Xu and Tong Xu and Yao Hu and Enhong Chen and Mike Zheng Shou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NKPXHzYusG}
} | A well-known dilemma in large vision-language models (e.g., GPT-4, LLaVA) is that while increasing the number of vision tokens generally enhances visual understanding, it also significantly raises memory and computational costs, especially in long-term, dense video frame streaming scenarios. Although learnable approaches like Q-Former and Perceiver Resampler have been developed to reduce the vision token burden, they overlook the context causally modeled by LLMs (i.e., key-value cache), potentially leading to missed visual cues when addressing user queries. In this paper, we introduce a novel approach to reduce vision compute by having redundant vision tokens ``skip layers'' rather than decreasing the number of vision tokens. Our method, VideoLLM-MoD, is inspired by mixture-of-depths LLMs and addresses the challenge of numerous vision tokens in long-term or streaming video. Specifically, for certain transformer layers, we learn to skip the computation for a high proportion (e.g., 80\%) of vision tokens, passing them directly to the next layer. This approach significantly enhances model efficiency, achieving approximately 42% time and 30% memory savings for the entire training. Moreover, our method reduces the computation in the context and avoids decreasing the number of vision tokens, thus preserving or even improving performance compared to the vanilla model. We conduct extensive experiments to demonstrate the effectiveness of VideoLLM-MoD, showing its state-of-the-art results on multiple benchmarks, including narration, forecasting, and summarization tasks in the COIN, Ego4D, and Ego-Exo4D datasets. The code and checkpoints will be made available at github.com/showlab/VideoLLM-online. | VideoLLM-MoD: Efficient Video-Language Streaming with Mixture-of-Depths Vision Computation | [
"Shiwei Wu",
"Joya Chen",
"Kevin Qinghong Lin",
"Qimeng Wang",
"Yan Gao",
"Qianli Xu",
"Tong Xu",
"Yao Hu",
"Enhong Chen",
"Mike Zheng Shou"
] | NeurIPS.cc/2024/Conference | 2408.16730 | [
""
] | https://huggingface.co/papers/2408.16730 | 0 | 0 | 0 | 10 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=NKGuLthW80 | @inproceedings{
hadji-kyriacou2024would,
title={Would I Lie To You? Inference Time Alignment of Language Models using Direct Preference Heads},
author={Avelina Asada Hadji-Kyriacou and Ognjen Arandjelovic},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NKGuLthW80}
} | Pre-trained Language Models (LMs) exhibit strong zero-shot and in-context learning capabilities; however, their behaviors are often difficult to control. By utilizing Reinforcement Learning from Human Feedback (RLHF), it is possible to fine-tune unsupervised LMs to follow instructions and produce outputs that reflect human preferences. Despite its benefits, RLHF has been shown to potentially harm a language model's reasoning capabilities and introduce artifacts such as hallucinations where the model may fabricate facts. To address this issue we introduce Direct Preference Heads (DPH), a fine-tuning framework that enables LMs to learn human preference signals through an auxiliary reward head without directly affecting the output distribution of the language modeling head. We perform a theoretical analysis of our objective function and find strong ties to Conservative Direct Preference Optimization (cDPO). Finally we evaluate our models on GLUE, RACE, and the GPT4All evaluation suite and demonstrate that our method produces models which achieve higher scores than those fine-tuned with Supervised Fine-Tuning (SFT) or Direct Preference Optimization (DPO) alone. | Would I Lie To You? Inference Time Alignment of Language Models using Direct Preference Heads | [
"Avelina Asada Hadji-Kyriacou",
"Ognjen Arandjelovic"
] | NeurIPS.cc/2024/Conference | 2405.20053 | [
"https://github.com/Avelina9X/direct-preference-heads"
] | https://huggingface.co/papers/2405.20053 | 1 | 2 | 0 | 2 | [
"Avelina/lovelace-medium-alpha1"
] | [] | [] | [
"Avelina/lovelace-medium-alpha1"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=NJUClFbosX | @inproceedings{
park2024discrete,
title={Discrete Dictionary-based Decomposition Layer for Structured Representation Learning},
author={Taewon Park and Hyun-Chul Kim and Minho Lee},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NJUClFbosX}
} | Neuro-symbolic neural networks have been extensively studied to integrate symbolic operations with neural networks, thereby improving systematic generalization. Specifically, Tensor Product Representation (TPR) framework enables neural networks to perform differentiable symbolic operations by encoding the symbolic structure of data within vector spaces. However, TPR-based neural networks often struggle to decompose unseen data into structured TPR representations, undermining their symbolic operations. To address this decomposition problem, we propose a Discrete Dictionary-based Decomposition (D3) layer designed to enhance the decomposition capabilities of TPR-based models. D3 employs discrete, learnable key-value dictionaries trained to capture symbolic features essential for decomposition operations. It leverages the prior knowledge acquired during training to generate structured TPR representations by mapping input data to pre-learned symbolic features within these dictionaries. D3 is a straightforward drop-in layer that can be seamlessly integrated into any TPR-based model without modifications. Our experimental results demonstrate that D3 significantly improves the systematic generalization of various TPR-based models while requiring fewer additional parameters. Notably, D3 outperforms baseline models on the synthetic task that demands the systematic decomposition of unseen combinatorial data. | Discrete Dictionary-based Decomposition Layer for Structured Representation Learning | [
"Taewon Park",
"Hyun-Chul Kim",
"Minho Lee"
] | NeurIPS.cc/2024/Conference | 2406.06976 | [
"https://github.com/taewonpark/d3"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NIcIdhyfQX | @inproceedings{
zhang2024qdistribution,
title={Q-Distribution guided Q-learning for offline reinforcement learning: Uncertainty penalized Q-value via consistency model},
author={Jing Zhang and Linjiajie Fang and Kexin Shi and Wenjia Wang and Bingyi Jing},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NIcIdhyfQX}
} | ``Distribution shift'' is the primary obstacle to the success of offline reinforcement learning. As a learning policy may take actions beyond the knowledge of the behavior policy (referred to as Out-of-Distribution (OOD) actions), the Q-values of these OOD actions can be easily overestimated. Consequently, the learning policy becomes biasedly optimized using the incorrectly recovered Q-value function. One commonly used idea to avoid the overestimation of Q-value is to make a pessimistic adjustment. Our key idea is to penalize the Q-values of OOD actions that correspond to high uncertainty. In this work, we propose Q-Distribution guided Q-learning (QDQ), which penalizes Q-values in OOD regions based on uncertainty estimation. The uncertainty measure is based on the conditional Q-value distribution, which is learned via a high-fidelity and efficient consistency model. On the other hand, to avoid the overly conservative problem, we introduce an uncertainty-aware optimization objective to update the Q-value function. The proposed QDQ demonstrates solid theoretical guarantees for the accuracy of Q-value distribution learning and uncertainty measurement, as well as the performance of the learning policy. QDQ consistently exhibits strong performance in the D4RL benchmark and shows significant improvements for many tasks. Our code can be found at <code link>. | Q-Distribution guided Q-learning for offline reinforcement learning: Uncertainty penalized Q-value via consistency model | [
"Jing Zhang",
"Linjiajie Fang",
"Kexin Shi",
"Wenjia Wang",
"Bingyi Jing"
] | NeurIPS.cc/2024/Conference | 2410.20312 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NGuGVT7ar2 | @inproceedings{
xiao2024enhancing,
title={Enhancing {LLM} Reasoning via Vision-Augmented Prompting},
author={Ziyang Xiao and Dongxiang Zhang and Xiongwei Han and Xiaojin Fu and Wing Yin YU and Tao Zhong and Sai Wu and Yuan Jessica Wang and Jianwei Yin and Gang Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NGuGVT7ar2}
} | Verbal and visual-spatial information processing are two critical subsystems that activate different brain regions and often collaborate together for cognitive reasoning. Despite the rapid advancement of LLM-based reasoning, the mainstream frameworks, such as Chain-of-Thought (CoT) and its variants, primarily focus on the verbal dimension, resulting in limitations in tackling reasoning problems with visual and spatial clues. To bridge the gap, we propose a novel dual-modality reasoning framework called Vision-Augmented Prompting (VAP). Upon receiving a textual problem description, VAP automatically synthesizes an image from the visual and spatial clues by utilizing external drawing tools. Subsequently, VAP formulates a chain of thought in both modalities and iteratively refines the synthesized image. Finally, a conclusive reasoning scheme based on self-alignment is proposed for final result generation. Extensive experiments are conducted across four versatile tasks, including solving geometry problems, Sudoku, time series prediction, and travelling salesman problem. The results validated the superiority of VAP over existing LLMs-based reasoning frameworks. | Enhancing LLM Reasoning via Vision-Augmented Prompting | [
"Ziyang Xiao",
"Dongxiang Zhang",
"Xiongwei Han",
"Xiaojin Fu",
"Wing Yin YU",
"Tao Zhong",
"Sai Wu",
"Yuan Jessica Wang",
"Jianwei Yin",
"Gang Chen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=NGrINZyZKk | @inproceedings{
yang2024uniaudio,
title={UniAudio 1.5: Large Language Model-Driven Audio Codec is A Few-Shot Audio Task Learner},
author={Dongchao Yang and Haohan Guo and Yuanyuan Wang and Rongjie Huang and Xiang Li and Xu Tan and Xixin Wu and Helen M. Meng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NGrINZyZKk}
} | Large Language models (LLMs) have demonstrated supreme capabilities in textual understanding and generation, but cannot be directly applied to cross-modal tasks without fine-tuning. This paper proposes a cross-modal in-context learning approach, empowering the frozen LLMs to achieve multiple audio tasks in a few-shot style without any parameter update.
Specifically, we propose a novel LLM-driven audio codec model, LLM-Codec, which transfers the audio modality into textual space by representing audio tokens with words or sub-words from the LLM vocabulary, while maintaining high audio reconstruction quality.
The key idea is to reduce the modality heterogeneity between text and audio by compressing the audio modality into the well-trained textual space of LLMs. Thus, the audio representation can be viewed as a new \textit{foreign language}, and LLMs can learn the new \textit{foreign language} with several demonstrations. In experiments, we investigate the performance of the proposed approach across multiple audio understanding and generation tasks, \textit{e.g.} speech emotion classification, audio classification, text-to-speech generation, speech enhancement, etc. Experimental results show that LLMs equipped with the LLM-Codec, named as UniAudio 1.5, prompted by only a few examples, can perform effectively in simple scenarios, validating our cross-modal in-context learning approach.
To facilitate research on few-shot audio task learning and multi-modal LLMs, we have open-sourced the LLM-Codec model. | UniAudio 1.5: Large Language Model-Driven Audio Codec is A Few-Shot Audio Task Learner | [
"Dongchao Yang",
"Haohan Guo",
"Yuanyuan Wang",
"Rongjie Huang",
"Xiang Li",
"Xu Tan",
"Xixin Wu",
"Helen M. Meng"
] | NeurIPS.cc/2024/Conference | 2406.10056 | [
"https://github.com/yangdongchao/llm-codec"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NGpMCH5q7Y | @inproceedings{
liu2024integrating,
title={Integrating Suboptimal Human Knowledge with Hierarchical Reinforcement Learning for Large-Scale Multiagent Systems},
author={Dingbang Liu and Shohei Kato and Wen Gu and Fenghui Ren and Jun Yan and Guoxin Su},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NGpMCH5q7Y}
} | Due to the exponential growth of agent interactions and the curse of dimensionality, learning efficient coordination from scratch is inherently challenging in large-scale multi-agent systems. While agents' learning is data-driven, sampling from millions of steps, human learning processes are quite different. Inspired by the concept of Human-on-the-Loop and the daily human hierarchical control, we propose a novel knowledge-guided multi-agent reinforcement learning framework (hhk-MARL), which combines human abstract knowledge with hierarchical reinforcement learning to address the learning difficulties among a large number of agents. In this work, fuzzy logic is applied to represent human suboptimal knowledge, and agents are allowed to freely decide how to leverage the proposed prior knowledge. Additionally, a graph-based group controller is built to enhance agent coordination. The proposed framework is end-to-end and compatible with various existing algorithms. We conduct experiments in challenging domains of the StarCraft Multi-agent Challenge combined with three famous algorithms: IQL, QMIX, and Qatten. The results show that our approach can greatly accelerate the training process and improve the final performance, even based on low-performance human prior knowledge. | Integrating Suboptimal Human Knowledge with Hierarchical Reinforcement Learning for Large-Scale Multiagent Systems | [
"Dingbang Liu",
"Shohei Kato",
"Wen Gu",
"Fenghui Ren",
"Jun Yan",
"Guoxin Su"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=NGIIHlAEBt | @inproceedings{
zeng2024understanding,
title={Understanding Bias in Large-Scale Visual Datasets},
author={Boya Zeng and Yida Yin and Zhuang Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NGIIHlAEBt}
} | A recent study has shown that large-scale pretraining datasets are very biased: they can be easily classified by modern neural networks. However, the concrete forms of bias among these datasets remain unclear. In this study, we propose a framework to identify the unique visual attributes distinguishing these datasets. Our approach applies various transformations to extract semantic, structural, boundary, color, and frequency information from datasets and assess how much each type of information contributes to their bias. We further decompose their semantic bias with object-level queries, and leverage natural language methods to generate detailed, open-ended descriptions of each dataset's characteristics. Our work aims to help researchers understand the bias in existing large-scale datasets and build more diverse and representative ones in the future. Our project page and code are available at boyazeng.github.io/understand_bias | Understanding Bias in Large-Scale Visual Datasets | [
"Boya Zeng",
"Yida Yin",
"Zhuang Liu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=NG16csOmcA | @inproceedings{
ma2024neural,
title={Neural Residual Diffusion Models for Deep Scalable Vision Generation},
author={Zhiyuan Ma and Liangliang Zhao and Biqing Qi and Bowen Zhou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NG16csOmcA}
} | The most advanced diffusion models have recently adopted increasingly deep stacked networks (e.g., U-Net or Transformer) to promote the generative emergence capabilities of vision generation models similar to large language models (LLMs). However, progressively deeper stacked networks intuitively cause numerical propagation errors and degrade noise-prediction capabilities on generative data, which hinders massively deep scalable training of vision generation models. In this paper, we first show that the ability of neural networks to perform generative denoising effectively stems from the fact that the intrinsic residual unit has dynamics consistent with the input signal's reverse diffusion process, thus supporting excellent generative abilities.
Afterwards, we stand on the shoulders of two common types of deep stacked networks to propose a unified and massively scalable Neural Residual Diffusion Models framework (Neural-RDM for short), which is a simple yet meaningful change to the common architecture of deep generative networks by introducing a series of learnable gated residual parameters that conform to the generative dynamics. Experimental results on various generative tasks show that the proposed neural residual models obtain state-of-the-art scores on image's and video's generative benchmarks. Rigorous theoretical proofs and extensive experiments also demonstrate the advantages of this simple gated residual mechanism consistent with dynamic modeling in improving the fidelity and consistency of generated content and supporting large-scale scalable training. | Neural Residual Diffusion Models for Deep Scalable Vision Generation | [
"Zhiyuan Ma",
"Liangliang Zhao",
"Biqing Qi",
"Bowen Zhou"
] | NeurIPS.cc/2024/Conference | 2406.13215 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NDs9Ejz4Pe | @inproceedings{
lim2024dipex,
title={Di{PE}x: Dispersing Prompt Expansion for Class-Agnostic Object Detection},
author={Jia Syuen Lim and Zhuoxiao Chen and Zhi Chen and Mahsa Baktashmotlagh and Xin Yu and Zi Huang and Yadan Luo},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NDs9Ejz4Pe}
} | Class-agnostic object detection (OD) can be a cornerstone or a bottleneck for many downstream vision tasks. Despite considerable advancements in bottom-up and multi-object discovery methods that leverage basic visual cues to identify salient objects, consistently achieving a high recall rate remains difficult due to the diversity of object types and their contextual complexity. In this work, we investigate using vision-language models (VLMs) to enhance object detection via a self-supervised prompt learning strategy. Our initial findings indicate that manually crafted text queries often result in undetected objects, primarily because detection confidence diminishes when the query words exhibit semantic overlap. To address this, we propose a Dispersing Prompt Expansion (DiPEx) approach. DiPEx progressively learns to expand a set of distinct, non-overlapping hyperspherical prompts to enhance recall rates, thereby improving performance in downstream tasks such as out-of-distribution OD. Specifically, DiPEx initiates the process by self-training generic parent prompts and selecting the one with the highest semantic uncertainty for further expansion. The resulting child prompts are expected to inherit semantics from their parent prompts while capturing more fine-grained semantics. We apply dispersion losses to ensure high inter-class discrepancy among child prompts while preserving semantic consistency between parent-child prompt pairs. To prevent excessive growth of the prompt sets, we utilize the maximum angular coverage (MAC) of the semantic space as a criterion for early termination. We demonstrate the effectiveness of DiPEx through extensive class-agnostic OD and OOD-OD experiments on MS-COCO and LVIS, surpassing other prompting methods by up to 20.1% in AR and achieving a 21.3% AP improvement over SAM. | DiPEx: Dispersing Prompt Expansion for Class-Agnostic Object Detection | [
"Jia Syuen Lim",
"Zhuoxiao Chen",
"Zhi Chen",
"Mahsa Baktashmotlagh",
"Xin Yu",
"Zi Huang",
"Yadan Luo"
] | NeurIPS.cc/2024/Conference | 2406.14924 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NCX3Kgb1nh | @inproceedings{
rioux2024multivariate,
title={Multivariate Stochastic Dominance via Optimal Transport and Applications to Models Benchmarking},
author={Gabriel Rioux and Apoorva Nitsure and Mattia Rigotti and Kristjan Greenewald and Youssef Mroueh},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NCX3Kgb1nh}
} | Stochastic dominance is an important concept in probability theory, econometrics and social choice theory for robustly modeling agents' preferences between random outcomes. While many works have been dedicated to the univariate case, little has been done in the multivariate scenario, wherein an agent has to decide between different multivariate outcomes. By exploiting a characterization of multivariate first stochastic dominance in terms of couplings, we introduce a statistic that assesses multivariate almost stochastic dominance under the framework of Optimal Transport with a smooth cost. Further, we introduce an entropic regularization of this statistic, and establish a central limit theorem (CLT) and consistency of the bootstrap procedure for the empirical statistic. Armed with this CLT, we propose a hypothesis testing framework as well as an efficient implementation using the Sinkhorn algorithm. We showcase our method in comparing and benchmarking Large Language Models that are evaluated on multiple metrics. Our multivariate stochastic dominance test allows us to capture the dependencies between the metrics in order to make an informed and statistically significant decision on the relative performance of the models. | Multivariate Stochastic Dominance via Optimal Transport and Applications to Models Benchmarking | [
"Gabriel Rioux",
"Apoorva Nitsure",
"Mattia Rigotti",
"Kristjan Greenewald",
"Youssef Mroueh"
] | NeurIPS.cc/2024/Conference | 2406.06425 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NBq1vmfP4X | @inproceedings{
bergstr{\"a}{\ss}er2024the,
title={The Power of Hard Attention Transformers on Data Sequences: A formal language theoretic perspective},
author={Pascal Bergstr{\"a}{\ss}er and Chris K{\"o}cher and Anthony Widjaja Lin and Georg Zetzsche},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NBq1vmfP4X}
} | Formal language theory has recently been successfully employed to unravel the power of transformer encoders. This setting is primarily applicable in Natural Language Processing (NLP), as a token embedding function (where a bounded number of tokens is admitted) is first applied before feeding the input to the transformer. On certain kinds of data (e.g. time series), we want our transformers to be able to handle arbitrary input sequences of numbers (or tuples thereof) without a priori limiting the values of these numbers. In this paper, we initiate the study of the expressive power of transformer encoders on sequences of data (i.e. tuples of numbers). Our results indicate an increase in expressive power of hard attention transformers over data sequences, in stark contrast to the case of strings. In particular, we prove that Unique Hard Attention Transformers (UHAT) over inputs as data sequences no longer lie within the circuit complexity class AC0 (even without positional encodings), unlike the case of string inputs, but are still within the complexity class TC0 (even with positional encodings). Over strings, UHAT without positional encodings capture only regular languages. In contrast, we show that over data sequences UHAT can capture non-regular properties. Finally, we show that UHAT capture languages definable in an extension of linear temporal logic with unary numeric predicates and arithmetics. | The Power of Hard Attention Transformers on Data Sequences: A formal language theoretic perspective | [
"Pascal Bergsträßer",
"Chris Köcher",
"Anthony Widjaja Lin",
"Georg Zetzsche"
] | NeurIPS.cc/2024/Conference | 2405.16166 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=NAcHv7vtL2 | @inproceedings{
jain2024scaling,
title={Scaling laws for learning with real and surrogate data},
author={Ayush Jain and Andrea Montanari and Eren Sasoglu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=NAcHv7vtL2}
} | Collecting large quantities of high-quality data can be prohibitively expensive or impractical, and a bottleneck in machine learning. One may instead augment a small set of $n$ data points from the target distribution with data from more accessible sources, e.g. data collected under different circumstances or synthesized by generative models. We refer to such data as `surrogate data'. We study a weighted empirical risk minimization (ERM) approach for integrating surrogate data into training. We analyze mathematically this method under several classical statistical models, and validate our findings empirically on datasets from different domains. Our main findings are: $(i)$ Integrating surrogate data can significantly reduce the test error on the original distribution. Surprisingly, this can happen even when the surrogate data is unrelated to the original ones. We trace back this behavior to the classical Stein's paradox. $(ii)$ In order to reap the benefit of surrogate data, it is crucial to use optimally weighted ERM. $(iii)$ The test error of models trained on mixtures of real and surrogate data is approximately described by a scaling law. This scaling law can be used to predict the optimal weighting scheme, and to choose the amount of surrogate data to add. | Scaling laws for learning with real and surrogate data | [
"Ayush Jain",
"Andrea Montanari",
"Eren Sasoglu"
] | NeurIPS.cc/2024/Conference | 2402.04376 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=N8YbGX98vc | @inproceedings{
ye2024tfg,
title={{TFG}: Unified Training-Free Guidance for Diffusion Models},
author={Haotian Ye and Haowei Lin and Jiaqi Han and Minkai Xu and Sheng Liu and Yitao Liang and Jianzhu Ma and James Zou and Stefano Ermon},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=N8YbGX98vc}
} | Given an unconditional diffusion model and a predictor for a target property of interest (e.g., a classifier), the goal of training-free guidance is to generate samples with desirable target properties without additional training. Existing methods, though effective in various individual applications, often lack theoretical grounding and rigorous testing on extensive benchmarks. As a result, they could even fail on simple tasks, and applying them to a new problem becomes unavoidably difficult. This paper introduces a novel algorithmic framework encompassing existing methods as special cases, unifying the study of training-free guidance into the analysis of an algorithm-agnostic design space. Via theoretical and empirical investigation, we propose an efficient and effective hyper-parameter searching strategy that can be readily applied to any downstream task. We systematically benchmark across 7 diffusion models on 16 tasks with 40 targets, and improve performance by 8.5% on average. Our framework and benchmark offer a solid foundation for conditional generation in a training-free manner. | TFG: Unified Training-Free Guidance for Diffusion Models | [
"Haotian Ye",
"Haowei Lin",
"Jiaqi Han",
"Minkai Xu",
"Sheng Liu",
"Yitao Liang",
"Jianzhu Ma",
"James Zou",
"Stefano Ermon"
] | NeurIPS.cc/2024/Conference | 2409.15761 | [
""
] | https://huggingface.co/papers/2409.15761 | 0 | 0 | 0 | 9 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=N6zJ8DclC2 | @inproceedings{
hao2024natural,
title={Natural Counterfactuals With Necessary Backtracking},
author={Guang-Yuan Hao and Jiji Zhang and Biwei Huang and Hao Wang and Kun Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=N6zJ8DclC2}
} | Counterfactual reasoning is pivotal in human cognition and especially important for providing explanations and making decisions. While Judea Pearl's influential approach is theoretically elegant, its generation of a counterfactual scenario often requires too much deviation from the observed scenarios to be feasible, as we show using simple examples. To mitigate this difficulty, we propose a framework of natural counterfactuals and a method for generating counterfactuals that are more feasible with respect to the actual data distribution. Our methodology incorporates a certain amount of backtracking when needed, allowing changes in causally preceding variables to minimize deviations from realistic scenarios. Specifically, we introduce a novel optimization framework that permits but also controls the extent of backtracking with a "naturalness'' criterion. Empirical experiments demonstrate the effectiveness of our method. The code is available at https://github.com/GuangyuanHao/natural_counterfactuals. | Natural Counterfactuals With Necessary Backtracking | [
"Guang-Yuan Hao",
"Jiji Zhang",
"Biwei Huang",
"Hao Wang",
"Kun Zhang"
] | NeurIPS.cc/2024/Conference | 2402.01607 | [
"https://github.com/GuangyuanHao/natural_counterfactuals"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=N5H4z0Pzvn | @inproceedings{
silva2024on,
title={On Divergence Measures for Training {GF}lowNets},
author={Tiago Silva and Eliezer de Souza da Silva and Diego Mesquita},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=N5H4z0Pzvn}
} | Generative Flow Networks (GFlowNets) are amortized samplers of unnormalized distributions over compositional objects with applications to causal discovery, NLP, and drug design. Recently, it was shown that GFlowNets can be framed as a hierarchical variational inference (HVI) method for discrete distributions. Despite this equivalence, attempts to train GFlowNets using traditional divergence measures as learning objectives were unsuccessful. Instead, current approaches for training these models rely on minimizing the log-squared difference between a proposal (forward policy) and a target (backward policy) distributions. In this work, we first formally extend the relationship between GFlowNets and HVI to distributions on arbitrary measurable topological spaces. Then, we empirically show that the ineffectiveness of divergence-based learning of GFlowNets is due to large gradient variance of the corresponding stochastic objectives. To address this issue, we devise a collection of provably variance-reducing control variates for gradient estimation based on the REINFORCE leave-one-out estimator. Our experimental results suggest that the resulting algorithms often accelerate training convergence when compared against previous approaches. All in all, our work contributes by narrowing the gap between GFlowNet training and HVI, paving the way for algorithmic advancements inspired by the divergence minimization viewpoint. | On Divergence Measures for Training GFlowNets | [
"Tiago Silva",
"Eliezer de Souza da Silva",
"Diego Mesquita"
] | NeurIPS.cc/2024/Conference | 2410.09355 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=N4quRxE19p | @inproceedings{
wu2024avatar,
title={AvaTaR: Optimizing {LLM} Agents for Tool Usage via Contrastive Reasoning},
author={Shirley Wu and Shiyu Zhao and Qian Huang and Kexin Huang and Michihiro Yasunaga and Kaidi Cao and Vassilis N. Ioannidis and Karthik Subbian and Jure Leskovec and James Zou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=N4quRxE19p}
} | Large language model (LLM) agents have demonstrated impressive capabilities in utilizing external tools and knowledge to boost accuracy and reduce hallucinations. However, developing prompting techniques that enable LLM agents to effectively use these tools and knowledge remains a heuristic and labor-intensive task. Here, we introduce AvaTaR, a novel and automated framework that optimizes an LLM agent to effectively leverage provided tools, improving performance on a given task. During optimization, we design a comparator module to iteratively deliver insightful and comprehensive prompts to the LLM agent by contrastively reasoning between positive and negative examples sampled from training data. We demonstrate AvaTaR on four complex multimodal retrieval datasets featuring textual, visual, and relational information, and three general question-answering (QA) datasets. We find AvaTaR consistently outperforms state-of-the-art approaches across all seven tasks, exhibiting strong generalization ability when applied to novel cases and achieving an average relative improvement of 14% on the Hit@1 metric for the retrieval datasets and 13% for the QA datasets. Code and dataset are available at https://github.com/zou-group/avatar. | AvaTaR: Optimizing LLM Agents for Tool Usage via Contrastive Reasoning | [
"Shirley Wu",
"Shiyu Zhao",
"Qian Huang",
"Kexin Huang",
"Michihiro Yasunaga",
"Kaidi Cao",
"Vassilis N. Ioannidis",
"Karthik Subbian",
"Jure Leskovec",
"James Zou"
] | NeurIPS.cc/2024/Conference | 2406.11200 | [
"https://github.com/zou-group/avatar"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=N2wYPMpifA | @inproceedings{
havrilla2024understanding,
title={Understanding Scaling Laws with Statistical and Approximation Theory for Transformer Neural Networks on Intrinsically Low-dimensional Data},
author={Alexander Havrilla and Wenjing Liao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=N2wYPMpifA}
} | When training deep neural networks, a model's generalization error is often observed to follow a power scaling law dependent both on the model size and the data size. Perhaps the best known example of such scaling laws are for transformer-based large language models (**LLMs**), where networks with billions of parameters are trained on trillions of tokens of text. Yet, despite sustained widespread interest, a rigorous understanding of why transformer scaling laws exist is still missing. To answer this question, we establish novel statistical estimation and mathematical approximation theories for transformers when the input data are concentrated on a low-dimensional manifold. Our theory predicts a power law between the generalization error and both the training data size and the network size for transformers, where the power depends on the intrinsic dimension $d$ of the training data. Notably, the constructed model architecture is shallow, requiring only logarithmic depth in $d$. By leveraging low-dimensional data structures under a manifold hypothesis, we are able to explain transformer scaling laws in a way which respects the data geometry. Moreover, we test our theory with empirical observation by training LLMs on natural language datasets. We find the observed empirical scaling laws closely agree with our theoretical predictions. Taken together, these results rigorously show the intrinsic dimension of data to be a crucial quantity affecting transformer scaling laws in both theory and practice. | Understanding Scaling Laws with Statistical and Approximation Theory for Transformer Neural Networks on Intrinsically Low-dimensional Data | [
"Alexander Havrilla",
"Wenjing Liao"
] | NeurIPS.cc/2024/Conference | 2411.06646 | [
"https://github.com/dahoas/transformer_manifolds_learning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=N2RaC7LO6k | @inproceedings{
lei2024geometry,
title={Geometry of naturalistic object representations in recurrent neural network models of working memory},
author={Xiaoxuan Lei and Takuya Ito and Pouya Bashivan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=N2RaC7LO6k}
} | Working memory is a central cognitive ability crucial for intelligent decision-making. Recent experimental and computational work studying working memory has primarily used categorical (i.e., one-hot) inputs, rather than ecologically-relevant, multidimensional naturalistic ones. Moreover, studies have primarily investigated working memory during a single task or a small number of cognitive tasks. As a result, an understanding of how naturalistic object information is maintained in working memory in neural networks is still lacking. To bridge this gap, we developed sensory-cognitive models, comprising a convolutional neural network (CNN) coupled with a recurrent neural network (RNN), and trained them on nine distinct N-back tasks using naturalistic stimuli. By examining the RNN’s latent space, we found that: 1) Multi-task RNNs represent both task-relevant and irrelevant information simultaneously while performing tasks; 2) While the latent subspaces used to maintain specific object properties in vanilla RNNs are largely shared across tasks, they are highly task-specific in gated RNNs such as GRU and LSTM; 3) Surprisingly, RNNs embed objects in new representational spaces in which individual object features are less orthogonalized relative to the perceptual space; 4) Interestingly, the transformation of WM encodings (i.e., embedding of visual inputs in the RNN latent space) into memory was shared across stimuli, yet the transformations governing the retention of a memory in the face of incoming distractor stimuli were distinct across time. Our findings indicate that goal-driven RNNs employ chronological memory subspaces to track information over short time spans, enabling testable predictions with neural data. | Geometry of naturalistic object representations in recurrent neural network models of working memory | [
"Xiaoxuan Lei",
"Takuya Ito",
"Pouya Bashivan"
] | NeurIPS.cc/2024/Conference | 2411.02685 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=N2PwbxJ3o6 | @inproceedings{
xu2024towards,
title={Towards Global Optimal Visual In-Context Learning Prompt Selection},
author={Chengming Xu and Chen Liu and Yikai Wang and Yuan Yao and Yanwei Fu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=N2PwbxJ3o6}
} | Visual In-Context Learning (VICL) is a prevailing way to transfer visual foundation models to new tasks by leveraging contextual information contained in in-context examples to enhance learning and prediction of the query sample. The fundamental problem in VICL is how to select the best prompt to activate its power as much as possible, which is equivalent to the ranking problem of testing the in-context behavior of each candidate in the alternative set and selecting the best one. To utilize a more appropriate ranking metric and leverage more comprehensive information among the alternative set, we propose a novel in-context example selection framework to approximately identify the globally optimal prompt, i.e. choosing the best-performing in-context examples from all alternatives for each query sample. Our method, dubbed Partial2Global, adopts a transformer-based list-wise ranker to provide a more comprehensive comparison within several alternatives, and a consistency-aware ranking aggregator to generate a globally consistent ranking. The effectiveness of Partial2Global is validated through experiments on foreground segmentation, single object detection and image colorization, demonstrating that Partial2Global selects consistently better in-context examples compared with other methods, and thus establishes a new state of the art. | Towards Global Optimal Visual In-Context Learning Prompt Selection | [
"Chengming Xu",
"Chen Liu",
"Yikai Wang",
"Yuan Yao",
"Yanwei Fu"
] | NeurIPS.cc/2024/Conference | 2405.15279 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=N12B6wvA55 | @inproceedings{
bonet2024mirror,
title={Mirror and Preconditioned Gradient Descent in Wasserstein Space},
author={Cl{\'e}ment Bonet and Th{\'e}o Uscidda and Adam David and Pierre-Cyril Aubin-Frankowski and Anna Korba},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=N12B6wvA55}
} | As the problem of minimizing functionals on the Wasserstein space encompasses many applications in machine learning, different optimization algorithms on $\mathbb{R}^d$ have received their counterpart analog on the Wasserstein space. We focus here on lifting two explicit algorithms: mirror descent and preconditioned gradient descent. These algorithms have been introduced to better capture the geometry of the function to minimize and are provably convergent under appropriate (namely relative) smoothness and convexity conditions. Adapting these notions to the Wasserstein space, we prove guarantees of convergence of some Wasserstein-gradient-based discrete-time schemes for new pairings of objective functionals and regularizers. The difficulty here is to carefully select along which curves the functionals should be smooth and convex. We illustrate the advantages of adapting the geometry induced by the regularizer on ill conditioned optimization tasks, and showcase the improvement of choosing different discrepancies and geometries in a computational biology task of aligning single-cells. | Mirror and Preconditioned Gradient Descent in Wasserstein Space | [
"Clément Bonet",
"Théo Uscidda",
"Adam David",
"Pierre-Cyril Aubin-Frankowski",
"Anna Korba"
] | NeurIPS.cc/2024/Conference | 2406.08938 | [
"https://github.com/clbonet/Mirror_and_Preconditioned_Gradient_Descent_in_Wasserstein_Space"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=N0xNf9Qqmc | @inproceedings{
couairon2024diffcut,
title={DiffCut: Catalyzing Zero-Shot Semantic Segmentation with Diffusion Features and Recursive Normalized Cut},
author={Paul Couairon and Mustafa Shukor and Jean-Emmanuel HAUGEARD and Matthieu Cord and Nicolas THOME},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=N0xNf9Qqmc}
} | Foundation models have emerged as powerful tools across various domains including language, vision, and multimodal tasks. While prior works have addressed unsupervised semantic segmentation, they significantly lag behind supervised models. In this paper, we use a diffusion UNet encoder as a foundation vision encoder and introduce DiffCut, an unsupervised zero-shot segmentation method that solely harnesses the output features from the final self-attention block. Through extensive experimentation, we demonstrate that using these diffusion features in a graph-based segmentation algorithm significantly outperforms previous state-of-the-art methods on zero-shot segmentation. Specifically, we leverage a recursive Normalized Cut algorithm that regulates the granularity of detected objects and produces well-defined segmentation maps that precisely capture intricate image details. Our work highlights the remarkably accurate semantic knowledge embedded within diffusion UNet encoders that could then serve as foundation vision encoders for downstream tasks. | DiffCut: Catalyzing Zero-Shot Semantic Segmentation with Diffusion Features and Recursive Normalized Cut | [
"Paul Couairon",
"Mustafa Shukor",
"Jean-Emmanuel HAUGEARD",
"Matthieu Cord",
"Nicolas THOME"
] | NeurIPS.cc/2024/Conference | 2406.02842 | [
"https://github.com/paulcouairon/diffcut"
] | https://huggingface.co/papers/2406.02842 | 0 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=MzTdZhMjeC | @inproceedings{
wang2024moddn,
title={{MO}-{DDN}: A Coarse-to-Fine Attribute-based Exploration Agent for Multi-Object Demand-driven Navigation},
author={Hongcheng Wang and Peiqi Liu and Wenzhe Cai and Mingdong Wu and Zhengyu Qian and Hao Dong},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MzTdZhMjeC}
} | The process of satisfying daily demands is a fundamental aspect of humans' daily lives. With the advancement of embodied AI, robots are increasingly capable of satisfying human demands. Demand-driven navigation (DDN) is a task in which an agent must locate an object to satisfy a specified demand instruction, such as "I am thirsty." The previous study typically assumes that each demand instruction requires only one object to be fulfilled and does not consider individual preferences. However, the realistic human demand may involve multiple objects. In this paper, we introduce the Multi-object Demand-driven Navigation (MO-DDN) benchmark, which addresses these nuanced aspects, including multi-object search and personal preferences, thus making the MO-DDN task more reflective of real-life scenarios compared to DDN. Building upon previous work, we employ the concept of ``attribute'' to tackle this new task. However, instead of solely relying on attribute features in an end-to-end manner like DDN, we propose a modular method that involves constructing a coarse-to-fine attribute-based exploration agent (C2FAgent). Our experimental results illustrate that this coarse-to-fine exploration strategy capitalizes on the advantages of attributes at various decision-making levels, resulting in superior performance compared to baseline methods. Code and video can be found at https://sites.google.com/view/moddn. | MO-DDN: A Coarse-to-Fine Attribute-based Exploration Agent for Multi-Object Demand-driven Navigation | [
"Hongcheng Wang",
"Peiqi Liu",
"Wenzhe Cai",
"Mingdong Wu",
"Zhengyu Qian",
"Hao Dong"
] | NeurIPS.cc/2024/Conference | 2410.03488 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MzNjnbgcPN | @inproceedings{
shu2024optex,
title={OptEx: Expediting First-Order Optimization with Approximately Parallelized Iterations},
author={Yao Shu and Jiongfeng Fang and Ying Tiffany He and Fei Richard Yu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MzNjnbgcPN}
} | First-order optimization (FOO) algorithms are pivotal in numerous computational domains, such as reinforcement learning and deep learning. However, their application to complex tasks often entails significant optimization inefficiency due to their need of many sequential iterations for convergence. In response, we introduce first-order optimization expedited with approximately parallelized iterations (OptEx), the first general framework that enhances the time efficiency of FOO by leveraging parallel computing to directly mitigate its requirement of many sequential iterations for convergence. To achieve this, OptEx utilizes a kernelized gradient estimation that is based on the history of evaluated gradients to predict the gradients required by the next few sequential iterations in FOO, which helps to break the inherent iterative dependency and hence enables the approximate parallelization of iterations in FOO. We further establish theoretical guarantees for the estimation error of our kernelized gradient estimation and the iteration complexity of SGD-based OptEx, confirming that the estimation error diminishes to zero as the history of gradients accumulates and that our SGD-based OptEx enjoys an effective acceleration rate of Θ(√N ) over standard SGD given parallelism of N, in terms of the sequential iterations required for convergence. Finally, we provide extensive empirical studies, including synthetic functions, reinforcement learning tasks, and neural network training on various datasets, to underscore the substantial efficiency improvements achieved by our OptEx in practice. | OptEx: Expediting First-Order Optimization with Approximately Parallelized Iterations | [
"Yao Shu",
"Jiongfeng Fang",
"Ying Tiffany He",
"Fei Richard Yu"
] | NeurIPS.cc/2024/Conference | 2402.11427 | [
"https://github.com/youyve/OptEx"
] | https://huggingface.co/papers/2402.11427 | 1 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=MzM99vV5Rx | @inproceedings{
li2024iqaeval,
title={{IQA}-{EVAL}: Automatic Evaluation of Human-Model Interactive Question Answering},
author={Ruosen Li and Ruochen Li and Barry Wang and Xinya Du},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MzM99vV5Rx}
} | To evaluate Large Language Models (LLMs) for question answering (QA), traditional methods typically focus on directly assessing the immediate responses generated by the models based on the given question and context. In the common use case of humans seeking AI assistant’s help in finding information, these non-interactive evaluations do not account for the dynamic nature of human-model conversations, and interaction-aware evaluations have shown that accurate models are not necessarily preferred by humans Lee et al. Recent works in human-computer interaction (HCI) have employed human evaluators to conduct interactions and evaluations, but they are often prohibitively expensive and time-consuming to scale. In this work, we introduce an automated evaluation framework IQA-EVAL to Interactive Question Answering Evaluations, more specifically, we introduce LLM-based Evaluation Agent (LEA) that can: (1) simulate human behaviors to generate interactions with IQA models; (2) automatically evaluate the generated interactions. Moreover, we propose assigning personas to LEAs to better simulate groups of real human evaluators. We show that: (1) our evaluation framework with GPT-4 (or Claude) as the backbone model achieves a high correlation with human evaluations on the IQA task; (2) assigning personas to LEA to better represent the crowd further significantly improves correlations. Finally, we use our automated metric to evaluate five recent LLMs with over 1000 questions from complex and ambiguous question answering tasks, which would cost $5k if evaluated by humans. | IQA-EVAL: Automatic Evaluation of Human-Model Interactive Question Answering | [
"Ruosen Li",
"Ruochen Li",
"Barry Wang",
"Xinya Du"
] | NeurIPS.cc/2024/Conference | 2408.13545 | [
""
] | https://huggingface.co/papers/2408.13545 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=MyVyH5Jo1l | @inproceedings{
charikar2024quantifying,
title={Quantifying the Gain in Weak-to-Strong Generalization},
author={Moses Charikar and Chirag Pabbaraju and Kirankumar Shiragur},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MyVyH5Jo1l}
} | Recent advances in large language models have shown capabilities that are extraordinary and near-superhuman. These models operate with such complexity that reliably evaluating and aligning them proves challenging for humans. This leads to the natural question: can guidance from weak models (like humans) adequately direct the capabilities of strong models? In a recent and somewhat surprising work, Burns et al. (2023) empirically demonstrated that when strong models (like GPT-4) are finetuned using labels generated by weak supervisors (like GPT-2), the strong models outperform their weaker counterparts---a phenomenon they term *weak-to-strong generalization*.
In this work, we present a theoretical framework for understanding weak-to-strong generalization. Specifically, we show that the improvement in performance achieved by strong models over their weaker counterparts is quantified by the *misfit error* incurred by the strong model on labels generated by the weaker model. Our theory reveals several curious algorithmic insights. For instance, we can predict the amount by which the strong model will improve over the weak model, and also choose among different weak models to train the strong model, based on its misfit error. We validate our theoretical findings through various empirical assessments. | Quantifying the Gain in Weak-to-Strong Generalization | [
"Moses Charikar",
"Chirag Pabbaraju",
"Kirankumar Shiragur"
] | NeurIPS.cc/2024/Conference | 2405.15116 | [
"https://github.com/chogba/wtsg-regression"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MxdyGXoK9h | @inproceedings{
yang2024boosting,
title={Boosting Weakly Supervised Referring Image Segmentation via Progressive Comprehension},
author={Zaiquan Yang and Yuhao LIU and Jiaying Lin and Gerhard Petrus Hancke and Rynson W. H. Lau},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MxdyGXoK9h}
} | This paper explores the weakly-supervised referring image segmentation (WRIS) problem, and focuses on a challenging setup where target localization is learned directly from image-text pairs.
We note that the input text description typically already contains detailed information on how to localize the target object, and we also observe that humans often follow a step-by-step comprehension process (\ie, progressively utilizing target-related attributes and relations as cues) to identify the target object.
Hence, we propose a novel Progressive Comprehension Network (PCNet) to leverage target-related textual cues from the input description for progressively localizing the target object.
Specifically, we first use a Large Language Model (LLM) to decompose the input text description into short phrases. These short phrases are taken as target-related cues and fed into a Conditional Referring Module (CRM) in multiple stages, to allow updating the referring text embedding and enhance the response map for target localization in a multi-stage manner.
Based on the CRM, we then propose a Region-aware Shrinking (RaS) loss to constrain the visual localization to be conducted progressively in a coarse-to-fine manner across different stages.
Finally, we introduce an Instance-aware Disambiguation (IaD) loss to suppress instance localization ambiguity by differentiating overlapping response maps generated by different referring texts on the same image.
Extensive experiments show that our method outperforms SOTA methods on three common benchmarks. | Boosting Weakly Supervised Referring Image Segmentation via Progressive Comprehension | [
"Zaiquan Yang",
"Yuhao LIU",
"Jiaying Lin",
"Gerhard Petrus Hancke",
"Rynson W. H. Lau"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MxWpCherzD | @inproceedings{
elaldi2024equivariant,
title={Equivariant spatio-hemispherical networks for diffusion {MRI} deconvolution},
author={Axel Elaldi and Guido Gerig and Neel Dey},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MxWpCherzD}
} | Each voxel in a diffusion MRI (dMRI) image contains a spherical signal corresponding to the direction and strength of water diffusion in the brain. This paper advances the analysis of such spatio-spherical data by developing convolutional network layers that are equivariant to the $\mathbf{E(3) \times SO(3)}$ group and account for the physical symmetries of dMRI including rotations, translations, and reflections of space alongside voxel-wise rotations. Further, neuronal fibers are typically antipodally symmetric, a fact we leverage to construct highly efficient spatio-*hemispherical* graph convolutions to accelerate the analysis of high-dimensional dMRI data. In the context of sparse spherical fiber deconvolution to recover white matter microstructure, our proposed equivariant network layers yield substantial performance and efficiency gains, leading to better and more practical resolution of crossing neuronal fibers and fiber tractography. These gains are experimentally consistent across both simulation and in vivo human datasets. | Equivariant spatio-hemispherical networks for diffusion MRI deconvolution | [
"Axel Elaldi",
"Guido Gerig",
"Neel Dey"
] | NeurIPS.cc/2024/Conference | 2411.11819 | [
"https://github.com/axelelaldi/fast-equivariant-deconv"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MxF0IKJtKW | @inproceedings{
ling2024slimgpt,
title={Slim{GPT}: Layer-wise Structured Pruning for Large Language Models},
author={Gui Ling and Ziyang Wang and YuliangYan and Qingwen Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MxF0IKJtKW}
} | Large language models (LLMs) have garnered significant attention for their remarkable capabilities across various domains, whose vast parameter scales present challenges for practical deployment. Structured pruning is an effective method to balance model performance with efficiency, but performance restoration under computational resource constraints is a principal challenge in pruning LLMs. Therefore, we present a low-cost and fast structured pruning method for LLMs named SlimGPT based on the Optimal Brain Surgeon framework. We propose Batched Greedy Pruning for rapid and near-optimal pruning, which enhances the accuracy of head-wise pruning error estimation through grouped Cholesky decomposition and improves the pruning efficiency of FFN via Dynamic Group Size, thereby achieving approximate local optimal pruning results within one hour. Besides, we explore the limitations of layer-wise pruning from the perspective of error accumulation and propose Incremental Pruning Ratio, a non-uniform pruning strategy to reduce performance degradation. Experimental results on the LLaMA benchmark show that SlimGPT outperforms other methods and achieves state-of-the-art results. | SlimGPT: Layer-wise Structured Pruning for Large Language Models | [
"Gui Ling",
"Ziyang Wang",
"YuliangYan",
"Qingwen Liu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MwmmBg1VYg | @inproceedings{
zhang2024why,
title={Why are Visually-Grounded Language Models Bad at Image Classification?},
author={Yuhui Zhang and Alyssa Unell and Xiaohan Wang and Dhruba Ghosh and Yuchang Su and Ludwig Schmidt and Serena Yeung-Levy},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MwmmBg1VYg}
} | Image classification is one of the most fundamental capabilities of machine vision intelligence. In this work, we revisit the image classification task using visually-grounded language models (VLMs) such as GPT-4V and LLaVA. We find that existing proprietary and public VLMs, despite often using CLIP as a vision encoder and having many more parameters, significantly underperform CLIP on standard image classification benchmarks like ImageNet. To understand the reason, we explore several hypotheses concerning the inference algorithms, training objectives, and data processing in VLMs. Our analysis reveals that the primary cause is data-related: critical information for image classification is encoded in the VLM's latent space but can only be effectively decoded with enough training data. Specifically, there is a strong correlation between the frequency of class exposure during VLM training and instruction-tuning and the VLM's performance in those classes; when trained with sufficient data, VLMs can match the accuracy of state-of-the-art classification models. Based on these findings, we enhance a VLM by integrating classification-focused datasets into its training, and demonstrate that the enhanced classification performance of the VLM transfers to its general capabilities, resulting in an improvement of 11.8% on the newly collected ImageWikiQA dataset. | Why are Visually-Grounded Language Models Bad at Image Classification? | [
"Yuhui Zhang",
"Alyssa Unell",
"Xiaohan Wang",
"Dhruba Ghosh",
"Yuchang Su",
"Ludwig Schmidt",
"Serena Yeung-Levy"
] | NeurIPS.cc/2024/Conference | 2405.18415 | [
"https://github.com/yuhui-zh15/vlmclassifier"
] | https://huggingface.co/papers/2405.18415 | 0 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=Mwj57TcHWX | @inproceedings{
wan2024difftori,
title={Diff{TORI}: Differentiable Trajectory Optimization for Deep Reinforcement and Imitation Learning},
author={Weikang Wan and Ziyu Wang and Yufei Wang and Zackory Erickson and David Held},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Mwj57TcHWX}
} | This paper introduces DiffTORI, which utilizes $\textbf{Diff}$erentiable $\textbf{T}$rajectory $\textbf{O}$ptimization as the policy representation to generate actions for deep $\textbf{R}$einforcement and $\textbf{I}$mitation learning. Trajectory optimization is a powerful and widely used algorithm in control, parameterized by a cost and a dynamics function. The key to our approach is to leverage the recent progress in differentiable trajectory optimization, which enables computing the gradients of the loss with respect to the parameters of trajectory optimization. As a result, the cost and dynamics functions of trajectory optimization can be learned end-to-end. DiffTORI addresses the “objective mismatch” issue of prior model-based RL algorithms, as the dynamics model in DiffTORI is learned to directly maximize task performance by differentiating the policy gradient loss through the trajectory optimization process. We further benchmark DiffTORI for imitation learning on standard robotic manipulation task suites with high-dimensional sensory observations and compare our method to feedforward policy classes as well as Energy-Based Models (EBM) and Diffusion. Across 15 model based RL tasks and 35 imitation learning tasks with high-dimensional image and point cloud inputs, DiffTORI outperforms prior state-of-the-art methods in both domains. | DiffTORI: Differentiable Trajectory Optimization for Deep Reinforcement and Imitation Learning | [
"Weikang Wan",
"Ziyu Wang",
"Yufei Wang",
"Zackory Erickson",
"David Held"
] | NeurIPS.cc/2024/Conference | 2402.05421 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=MwJo3zuiTm | @inproceedings{
chen2024freerider,
title={Free-Rider and Conflict Aware Collaboration Formation for Cross-Silo Federated Learning},
author={Mengmeng Chen and Xiaohu Wu and Xiaoli Tang and Tiantian He and Yew-Soon Ong and QIQI LIU and Qicheng Lao and Han Yu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MwJo3zuiTm}
} | Federated learning (FL) is a machine learning paradigm that allows multiple FL participants (FL-PTs) to collaborate on training models without sharing private data. Due to data heterogeneity, negative transfer may occur in the FL training process. This necessitates FL-PT selection based on their data complementarity. In cross-silo FL, organizations that engage in business activities are key sources of FL-PTs. The resulting FL ecosystem has two features: (i) self-interest, and (ii) competition among FL-PTs. This requires the desirable FL-PT selection strategy to simultaneously mitigate the problems of free riders and conflicts of interest among competitors. To this end, we propose an optimal FL collaboration formation strategy -FedEgoists- which ensures that: (1) a FL-PT can benefit from FL if and only if it benefits the FL ecosystem, and (2) a FL-PT will not contribute to its competitors or their supporters. It provides an efficient clustering solution to group FL-PTs into coalitions, ensuring that within each coalition, FL-PTs share the same interest. We theoretically prove that the FL-PT coalitions formed are optimal since no coalitions can collaborate together to improve the utility of any of their members. Extensive experiments on widely adopted benchmark datasets demonstrate the effectiveness of FedEgoists compared to nine state-of-the-art baseline methods, and its ability to establish efficient collaborative networks in cross-silos FL with FL-PTs that engage in business activities. | Free-Rider and Conflict Aware Collaboration Formation for Cross-Silo Federated Learning | [
"Mengmeng Chen",
"Xiaohu Wu",
"Xiaoli Tang",
"Tiantian He",
"Yew-Soon Ong",
"QIQI LIU",
"Qicheng Lao",
"Han Yu"
] | NeurIPS.cc/2024/Conference | 2410.19321 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MwFeh4RqvA | @inproceedings{
fontanella2024generating,
title={Generating compositional scenes via Text-to-image {RGBA} Instance Generation},
author={Alessandro Fontanella and Petru-Daniel Tudosiu and Yongxin Yang and Shifeng Zhang and Sarah Parisot},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MwFeh4RqvA}
} | Text-to-image diffusion generative models can generate high quality images at the cost of tedious prompt engineering. Controllability can be improved by introducing layout conditioning, however existing methods lack layout editing ability and fine-grained control over object attributes. The concept of multi-layer generation holds great potential to address these limitations, however generating image instances concurrently to scene composition limits control over fine-grained object attributes, relative positioning in 3D space and scene manipulation abilities. In this work, we propose a novel multi-stage generation paradigm that is designed for fine-grained control, flexibility and interactivity. To ensure control over instance attributes, we devise a novel training paradigm to adapt a diffusion model to generate isolated scene components as RGBA images with transparency information. To build complex images, we employ these pre-generated instances and introduce a multi-layer composite generation process that smoothly assembles components in realistic scenes. Our experiments show that our RGBA diffusion model is capable of generating diverse and high quality instances with precise control over object attributes. Through multi-layer composition, we demonstrate that our approach allows to build and manipulate images from highly complex prompts with fine-grained control over object appearance and location, granting a higher degree of control than competing methods. | Generating compositional scenes via Text-to-image RGBA Instance Generation | [
"Alessandro Fontanella",
"Petru-Daniel Tudosiu",
"Yongxin Yang",
"Shifeng Zhang",
"Sarah Parisot"
] | NeurIPS.cc/2024/Conference | 2411.10913 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MvjLRFntW6 | @inproceedings{
parekh2024a,
title={A Concept-Based Explainability Framework for Large Multimodal Models},
author={Jayneel Parekh and Pegah KHAYATAN and Mustafa Shukor and Alasdair Newson and Matthieu Cord},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MvjLRFntW6}
} | Large multimodal models (LMMs) combine unimodal encoders and large language models (LLMs) to perform multimodal tasks. Despite recent advancements towards the interpretability of these models, understanding internal representations of LMMs remains largely a mystery. In this paper, we present a novel framework for the interpretation of LMMs. We propose a dictionary learning based approach, applied to the representation of tokens. The elements of the learned dictionary correspond to our proposed concepts. We show that these concepts are well semantically grounded in both vision and text. Thus we refer to these as ``multi-modal concepts''.
We qualitatively and quantitatively evaluate the results of the learnt concepts. We show that the extracted multimodal concepts are useful to interpret representations of test samples. Finally, we evaluate the disentanglement between different concepts and the quality of grounding concepts visually and textually. Our implementation is publicly available: https://github.com/mshukor/xl-vlms. | A Concept-Based Explainability Framework for Large Multimodal Models | [
"Jayneel Parekh",
"Pegah KHAYATAN",
"Mustafa Shukor",
"Alasdair Newson",
"Matthieu Cord"
] | NeurIPS.cc/2024/Conference | 2406.08074 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MuPlJ9fT4b | @inproceedings{
chen2024dataefficient,
title={Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning},
author={Wuyang Chen and Jialin Song and Pu Ren and Shashank Subramanian and Dmitriy Morozov and Michael W. Mahoney},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MuPlJ9fT4b}
} | Recent years have witnessed the promise of coupling machine learning methods and physical domain-specific insights for solving scientific problems based on partial differential equations (PDEs). However, being data-intensive, these methods still require a large amount of PDE data. This reintroduces the need for expensive numerical PDE solutions, partially undermining the original goal of avoiding these expensive simulations. In this work, seeking data efficiency, we design unsupervised pretraining for PDE operator learning. To reduce the need for training data with heavy simulation costs, we mine unlabeled PDE data without simulated solutions,
and we pretrain neural operators with physics-inspired reconstruction-based proxy tasks. To improve out-of-distribution performance, we further assist neural operators in flexibly leveraging a similarity-based method that learns in-context examples, without incurring extra training costs or designs. Extensive empirical evaluations on a diverse set of PDEs demonstrate that our method is highly data-efficient, more generalizable, and even outperforms conventional vision-pretrained models. We provide our code at https://github.com/delta-lab-ai/data_efficient_nopt. | Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning | [
"Wuyang Chen",
"Jialin Song",
"Pu Ren",
"Shashank Subramanian",
"Dmitriy Morozov",
"Michael W. Mahoney"
] | NeurIPS.cc/2024/Conference | 2402.15734 | [
"https://github.com/delta-lab-ai/data_efficient_nopt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Mtsi1eDdbH | @inproceedings{
charusaie2024a,
title={A Unifying Post-Processing Framework for Multi-Objective Learn-to-Defer Problems},
author={Mohammad-Amin Charusaie and Samira Samadi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Mtsi1eDdbH}
} | Learn-to-Defer is a paradigm that enables learning algorithms to work not in isolation but as a team with human experts. In this paradigm, we permit the system to defer a subset of its tasks to the expert. Although there are currently systems that follow this paradigm and are designed to optimize the accuracy of the final human-AI team, the general methodology for developing such systems under a set of constraints (e.g., algorithmic fairness, expert intervention budget, defer of anomaly, etc.) remains largely unexplored. In this paper, using a d-dimensional generalization to the fundamental lemma of Neyman and Pearson (d-GNP), we obtain the Bayes optimal solution for learn-to-defer systems under various constraints. Furthermore, we design a generalizable algorithm to estimate that solution and apply this algorithm to the COMPAS, Hatespeech, and ACSIncome datasets. Our algorithm shows improvements in terms of constraint violation over a set of learn-to-defer baselines and can control multiple constraint violations at once. The use of d-GNP is beyond learn-to-defer applications and can potentially obtain a solution to decision-making problems with a set of controlled expected performance measures. | A Unifying Post-Processing Framework for Multi-Objective Learn-to-Defer Problems | [
"Mohammad-Amin Charusaie",
"Samira Samadi"
] | NeurIPS.cc/2024/Conference | 2407.12710 | [
"https://github.com/aminchrs/postprocess"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MtRvzJBsBA | @inproceedings{
xie2024lrmzero,
title={{LRM}-Zero: Training Large Reconstruction Models with Synthesized Data},
author={Desai Xie and Sai Bi and Zhixin Shu and Kai Zhang and Zexiang Xu and Yi Zhou and Soren Pirk and Arie Kaufman and Xin Sun and Hao Tan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MtRvzJBsBA}
} | We present LRM-Zero, a Large Reconstruction Model (LRM) trained entirely on synthesized 3D data, achieving high-quality sparse-view 3D reconstruction. The core of LRM-Zero is our procedural 3D dataset, Zeroverse, which is automatically synthesized from simple primitive shapes with random texturing and augmentations (e.g., height fields, boolean differences, and wireframes). Unlike previous 3D datasets (e.g., Objaverse) which are often captured or crafted by humans to approximate real 3D data, Zeroverse completely ignores realistic global semantics but is rich in complex geometric and texture details that are locally similar to or even more intricate than real objects. We demonstrate that our LRM-Zero, trained with our fully synthesized Zeroverse, can achieve high visual quality in the reconstruction of real-world objects, competitive with models trained on Objaverse. We also analyze several critical design choices of Zeroverse that contribute to LRM-Zero's capability and training stability. Our work demonstrates that 3D reconstruction, one of the core tasks in 3D vision, can potentially be addressed without the semantics of real-world objects. The Zeroverse's procedural synthesis code and interactive visualization are available at: https://desaixie.github.io/lrm-zero/. | LRM-Zero: Training Large Reconstruction Models with Synthesized Data | [
"Desai Xie",
"Sai Bi",
"Zhixin Shu",
"Kai Zhang",
"Zexiang Xu",
"Yi Zhou",
"Soren Pirk",
"Arie Kaufman",
"Xin Sun",
"Hao Tan"
] | NeurIPS.cc/2024/Conference | 2406.09371 | [
"https://github.com/desaixie/zeroverse"
] | https://huggingface.co/papers/2406.09371 | 7 | 4 | 1 | 10 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=MsUf8kpKTF | @inproceedings{
juliani2024a,
title={A Study of Plasticity Loss in On-Policy Deep Reinforcement Learning},
author={Arthur Juliani and Jordan T. Ash},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MsUf8kpKTF}
} | Continual learning with deep neural networks presents challenges distinct from both the fixed-dataset and convex continual learning regimes. One such challenge is plasticity loss, wherein a neural network trained in an online fashion displays a degraded ability to fit new tasks. This problem has been extensively studied in both supervised learning and off-policy reinforcement learning (RL), where a number of remedies have been proposed. Still, plasticity loss has received less attention in the on-policy deep RL setting. Here we perform an extensive set of experiments examining plasticity loss and a variety of mitigation methods in on-policy deep RL. We demonstrate that plasticity loss is pervasive under domain shift in this regime, and that a number of methods developed to resolve it in other settings fail, sometimes even performing worse than applying no intervention at all. In contrast, we find that a class of ``regenerative'' methods are able to consistently mitigate plasticity loss in a variety of contexts, including in gridworld tasks and more challenging environments like Montezuma's Revenge and ProcGen. | A Study of Plasticity Loss in On-Policy Deep Reinforcement Learning | [
"Arthur Juliani",
"Jordan T. Ash"
] | NeurIPS.cc/2024/Conference | 2405.19153 | [
"https://github.com/awjuliani/deep-rl-plasticity"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=Mrs9a1XQAp | @inproceedings{
foerster2024beyond,
title={Beyond Slow Signs in High-fidelity Model Extraction},
author={Hanna Foerster and Robert D. Mullins and Ilia Shumailov and Jamie Hayes},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Mrs9a1XQAp}
} | Deep neural networks, costly to train and rich in intellectual property value, are increasingly threatened by model extraction attacks that compromise their confidentiality. Previous attacks have succeeded in reverse-engineering model parameters up to a precision of float64 for models trained on random data with at most three hidden layers using cryptanalytical techniques. However, the process was identified to be very time consuming and not feasible for larger and deeper models trained on standard benchmarks. Our study evaluates the feasibility of parameter extraction methods of Carlini et al. [1] further enhanced by Canales-Martínez et al. [2] for models trained on standard benchmarks. We introduce a unified codebase that integrates previous methods and reveal that computational tools can significantly influence performance. We develop further optimisations to the end-to-end attack and improve the efficiency of extracting weight signs by up to 14.8 times compared to former methods through the identification of easier and harder to extract neurons. Contrary to prior assumptions, we identify extraction of weights, not extraction of weight signs, as the critical bottleneck. With our improvements, a 16,721 parameter model with 2 hidden layers trained on MNIST is extracted within only 98 minutes compared to at least 150 minutes previously. Finally, addressing methodological deficiencies observed in previous studies, we propose new ways of robust benchmarking for future model extraction attacks. | Beyond Slow Signs in High-fidelity Model Extraction | [
"Hanna Foerster",
"Robert D. Mullins",
"Ilia Shumailov",
"Jamie Hayes"
] | NeurIPS.cc/2024/Conference | 2406.10011 | [
"https://github.com/hannafoe/cryptanalytical-extraction"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Mqx2gquLk0 | @inproceedings{
nguyen-tang2024learning,
title={Learning in Markov Games with Adaptive Adversaries: Policy Regret, Fundamental Barriers, and Efficient Algorithms},
author={Thanh Nguyen-Tang and Raman Arora},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Mqx2gquLk0}
} | We study learning in a dynamically evolving environment modeled as a Markov game between a learner and a strategic opponent that can adapt to the learner's strategies. While most existing works in Markov games focus on external regret as the learning objective, external regret becomes inadequate when the adversaries are adaptive. In this work, we focus on \emph{policy regret} -- a counterfactual notion that aims to compete with the return that would have been attained if the learner had followed the best fixed sequence of policy, in hindsight. We show that if the opponent has unbounded memory or if it is non-stationary, then sample-efficient learning is not possible. For memory-bounded and stationary, we show that learning is still statistically hard if the set of feasible strategies for the learner is exponentially large. To guarantee learnability, we introduce a new notion of \emph{consistent} adaptive adversaries, wherein, the adversary responds similarly to similar strategies of the learner. We provide algorithms that achieve $\sqrt{T}$ policy regret against memory-bounded, stationary, and consistent adversaries. | Learning in Markov Games with Adaptive Adversaries: Policy Regret, Fundamental Barriers, and Efficient Algorithms | [
"Thanh Nguyen-Tang",
"Raman Arora"
] | NeurIPS.cc/2024/Conference | 2411.00707 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MqeCU0tXAY | @inproceedings{
yu2024clipceil,
title={{CLIPCEIL}: Domain Generalization through {CLIP} via Channel rEfinement and Image-text aLignment},
author={Xi Yu and Shinjae Yoo and Yuewei Lin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MqeCU0tXAY}
} | Domain generalization (DG) is a fundamental yet challenging topic in machine learning. Recently, the remarkable zero-shot capabilities of the large pre-trained vision-language model (e.g., CLIP) have made it popular for various downstream tasks. However, the effectiveness of this capacity often degrades when there are shifts in data distribution during testing compared to the training data. In this paper, we propose a novel method, known as CLIPCEIL, a model that utilizes Channel rEfinement and Image-text aLignment to facilitate the CLIP to the inaccessible $\textit{out-of-distribution}$ test datasets that exhibit domain shifts. Specifically, we refine the feature channels in the visual domain to ensure they contain domain-invariant and class-relevant features by using a lightweight adapter. This is achieved by minimizing the inter-domain variance while maximizing the inter-class variance. In the meantime, we ensure the image-text alignment by aligning text embeddings of the class descriptions and their corresponding image embedding while further removing the domain-specific features. Moreover, our model integrates multi-scale CLIP features by utilizing a self-attention fusion module, technically implemented through one Transformer layer. Extensive experiments on five widely used benchmark datasets demonstrate that CLIPCEIL outperforms the existing state-of-the-art methods. The source code is available at \url{https://github.com/yuxi120407/CLIPCEIL}. | CLIPCEIL: Domain Generalization through CLIP via Channel rEfinement and Image-text aLignment | [
"Xi Yu",
"Shinjae Yoo",
"Yuewei Lin"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MocRdX0n7B | @inproceedings{
mingbohong2024you,
title={You Only Look Around: Learning Illumination-Invariant Feature for Low-light Object Detection},
author={MingboHong and Shen Cheng and Haibin Huang and Haoqiang Fan and Shuaicheng Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MocRdX0n7B}
} | In this paper, we introduce YOLA, a novel framework for object detection in low-light scenarios. Unlike previous works, we propose to tackle this challenging problem from the perspective of feature learning. Specifically, we propose to learn illumination-invariant features through the Lambertian image formation model. We observe that, under the Lambertian assumption, it is feasible to approximate illumination-invariant feature maps by exploiting the interrelationships between neighboring color channels and spatially adjacent pixels. By incorporating additional constraints, these relationships can be characterized in the form of convolutional kernels, which can be trained in a detection-driven manner within a network. Towards this end, we introduce a novel module dedicated to the extraction of illumination-invariant features from low-light images, which can be easily integrated into existing object detection frameworks. Our empirical findings reveal significant improvements in low-light object detection tasks, as well as promising results in both well-lit and over-lit scenarios. | You Only Look Around: Learning Illumination-Invariant Feature for Low-light Object Detection | [
"MingboHong",
"Shen Cheng",
"Haibin Huang",
"Haoqiang Fan",
"Shuaicheng Liu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MncgmW8b6q | @inproceedings{
shalyt2024unsupervised,
title={Unsupervised Discovery of Formulas for Mathematical Constants},
author={Michael Shalyt and Uri Seligmann and Itay Beit Halachmi and Ofir David and Rotem Elimelech and Ido Kaminer},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MncgmW8b6q}
} | Ongoing efforts that span over decades show a rise of AI methods for accelerating scientific discovery, yet accelerating discovery in mathematics remains a persistent challenge for AI.
Specifically, AI methods were not effective in creation of formulas for mathematical constants because each such formula must be correct for infinite digits of precision, with 'near-true' formulas providing no insight toward the correct ones. Consequently, formula discovery lacks a clear distance metric needed to guide automated discovery in this realm.
In this work, we propose a systematic methodology for categorization, characterization, and pattern identification of such formulas. The key to our methodology is introducing metrics based on the convergence dynamics of the formulas, rather than on the numerical value of the formula. These metrics enable the first automated clustering of mathematical formulas.
We demonstrate this methodology on Polynomial Continued Fraction formulas, which are ubiquitous in their intrinsic connections to mathematical constants, and generalize many mathematical functions and structures.
We test our methodology on a set of 1,768,900 such formulas, identifying many known formulas for mathematical constants, and discover previously unknown formulas for $\pi$, $\ln(2)$, Gauss', and Lemniscate's constants. The uncovered patterns enable a direct generalization of individual formulas to infinite families, unveiling rich mathematical structures.
This success paves the way towards a generative model that creates formulas fulfilling specified mathematical properties, accelerating the rate of discovery of useful formulas. | Unsupervised Discovery of Formulas for Mathematical Constants | [
"Michael Shalyt",
"Uri Seligmann",
"Itay Beit Halachmi",
"Ofir David",
"Rotem Elimelech",
"Ido Kaminer"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Mmcy1p15Hc | @inproceedings{
tang2024intrinsic,
title={Intrinsic Robustness of Prophet Inequality to Strategic Reward Signaling},
author={Wei Tang and Haifeng Xu and Ruimin Zhang and Derek Zhu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Mmcy1p15Hc}
} | Prophet inequality concerns a basic optimal stopping problem and states that simple threshold stopping policies --- i.e., accepting the first reward larger than a certain threshold --- can achieve tight $\frac{1}{2}$-approximation to the optimal prophet value. Motivated by its economic applications, this paper studies the robustness of this approximation to natural strategic manipulations in which each random reward is associated with a self-interested player who may selectively reveal his realized reward to the searcher in order to maximize his probability of being selected.
We say a threshold policy is $\alpha$(-strategically)-robust if it (a) achieves the $\alpha$-approximation to the prophet value for strategic players; and (b) meanwhile remains a $\frac{1}{2}$-approximation in the standard non-strategic setting.
Starting with a characterization of each player's optimal information revealing strategy, we demonstrate the intrinsic robustness of prophet inequalities to strategic reward signaling through the following results:
(1) for arbitrary reward distributions, there is a threshold policy that is $\frac{1-\frac{1}{e}}{2}$-robust, and this ratio is tight;
(2) for i.i.d. reward distributions, there is a threshold policy that is $\frac{1}{2}$-robust, which is tight for the setting;
and (3) for log-concave (but non-identical) reward distributions, the $\frac{1}{2}$-robustness can also be achieved under certain regularity assumptions. | Intrinsic Robustness of Prophet Inequality to Strategic Reward Signaling | [
"Wei Tang",
"Haifeng Xu",
"Ruimin Zhang",
"Derek Zhu"
] | NeurIPS.cc/2024/Conference | 2409.18269 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MlADRQI0Wf | @inproceedings{
wu2024implicit,
title={Implicit Regularization of Decentralized Gradient Descent for Sparse Regression},
author={Tongle Wu and Ying Sun},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MlADRQI0Wf}
} | We consider learning a sparse model from linear measurements taken by a network of agents. Different from existing decentralized methods designed based on the LASSO regression with explicit $\ell_1$ norm regularization, we exploit the implicit regularization of decentralized optimization method applied to an over-parameterized nonconvex least squares formulation without penalization. Our first result shows that despite nonconvexity, if the network connectivity is good, the well-known decentralized gradient descent algorithm (DGD) with small initialization and early stopping can compute the statistically optimal solution. Sufficient conditions on the initialization scale, choice of step size, network connectivity, and stopping time are further provided to achieve convergence. Our result recovers the convergence rate of gradient descent in the centralized setting, showing its tightness.
Based on the analysis of DGD, we further propose a communication-efficient version, termed T-DGD, by truncating the iterates before transmission. In the high signal-to-noise ratio (SNR) regime, we show that T-DGD achieves comparable statistical accuracy to DGD, while the communication cost is logarithmic in the number of parameters. Numerical results are provided to validate the effectiveness of DGD and T-DGD for sparse learning through implicit regularization. | Implicit Regularization of Decentralized Gradient Descent for Sparse Regression | [
"Tongle Wu",
"Ying Sun"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Mktgayam7U | @inproceedings{
long2024scalable,
title={Scalable Kernel Inverse Optimization},
author={Youyuan Long and Tolga Ok and Pedro Zattoni Scroccaro and Peyman Mohajerin Esfahani},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Mktgayam7U}
} | Inverse Optimization (IO) is a framework for learning the unknown objective function of an expert decision-maker from a past dataset.
In this paper, we extend the hypothesis class of IO objective functions to a reproducing kernel Hilbert space (RKHS), thereby enhancing feature representation to an infinite-dimensional space.
We demonstrate that a variant of the representer theorem holds for a specific training loss, allowing the reformulation of the problem as a finite-dimensional convex optimization program.
To address scalability issues commonly associated with kernel methods, we propose the Sequential Selection Optimization (SSO) algorithm to efficiently train the proposed Kernel Inverse Optimization (KIO) model.
Finally, we validate the generalization capabilities of the proposed KIO model and the effectiveness of the SSO algorithm through learning-from-demonstration tasks on the MuJoCo benchmark. | Scalable Kernel Inverse Optimization | [
"Youyuan Long",
"Tolga Ok",
"Pedro Zattoni Scroccaro",
"Peyman Mohajerin Esfahani"
] | NeurIPS.cc/2024/Conference | 2410.23952 | [
"https://github.com/Longyouyuan/Scalable-Kernel-Inverse-Optimization"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MjD9Y05Q6i | @inproceedings{
huang2024lgcav,
title={{LG}-{CAV}: Train Any Concept Activation Vector with Language Guidance},
author={Qihan Huang and Jie Song and Mengqi Xue and Haofei Zhang and Bingde Hu and Huiqiong Wang and Hao Jiang and Xingen Wang and Mingli Song},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MjD9Y05Q6i}
} | Concept activation vector (CAV) has attracted broad research interest in explainable AI, by elegantly attributing model predictions to specific concepts. However, the training of CAV often necessitates a large number of high-quality images, which are expensive to curate and thus limited to a predefined set of concepts. To address this issue, we propose Language-Guided CAV (LG-CAV) to harness the abundant concept knowledge within the certain pre-trained vision-language models (e.g., CLIP). This method allows training any CAV without labeled data, by utilizing the corresponding concept descriptions as guidance. To bridge the gap between vision-language model and the target model, we calculate the activation values of concept descriptions on a common pool of images (probe images) with vision-language model and utilize them as language guidance to train the LG-CAV. Furthermore, after training high-quality LG-CAVs related to all the predicted classes in the target model, we propose the activation sample reweighting (ASR), serving as a model correction technique, to improve the performance of the target model in return. Experiments on four datasets across nine architectures demonstrate that LG-CAV achieves significantly superior quality to previous CAV methods given any concept, and our model correction method achieves state-of-the-art performance compared to existing concept-based methods. Our code is available at https://github.com/hqhQAQ/LG-CAV. | LG-CAV: Train Any Concept Activation Vector with Language Guidance | [
"Qihan Huang",
"Jie Song",
"Mengqi Xue",
"Haofei Zhang",
"Bingde Hu",
"Huiqiong Wang",
"Hao Jiang",
"Xingen Wang",
"Mingli Song"
] | NeurIPS.cc/2024/Conference | 2410.10308 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MihOCXte41 | @inproceedings{
chen2024edt,
title={{EDT}: An Efficient Diffusion Transformer Framework Inspired by Human-like Sketching},
author={Xinwang Chen and Ning Liu and Yichen Zhu and Feifei Feng and Jian Tang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MihOCXte41}
} | Transformer-based Diffusion Probabilistic Models (DPMs) have shown more potential than CNN-based DPMs, yet their extensive computational requirements hinder widespread practical applications. To reduce the computation budget of transformer-based DPMs, this work proposes the Efficient Diffusion Transformer (EDT) framework. This framework includes a lightweight-design diffusion model architecture, and a training-free Attention Modulation Matrix and its alternation arrangement in EDT inspired by human-like sketching. Additionally, we propose a token relation-enhanced masking training strategy tailored explicitly for EDT to augment its token relation learning capability. Our extensive experiments demonstrate the efficacy of EDT. The EDT framework reduces training and inference costs and surpasses existing transformer-based diffusion models in image synthesis performance, thereby achieving a significant overall enhancement. With lower FID, EDT-S, EDT-B, and EDT-XL attained speed-ups of 3.93x, 2.84x, and 1.92x respectively in the training phase, and 2.29x, 2.29x, and 2.22x respectively in inference, compared to the corresponding sizes of MDTv2. Our code is available at https://github.com/xinwangChen/EDT. | EDT: An Efficient Diffusion Transformer Framework Inspired by Human-like Sketching | [
"Xinwang Chen",
"Ning Liu",
"Yichen Zhu",
"Feifei Feng",
"Jian Tang"
] | NeurIPS.cc/2024/Conference | 2410.23788 | [
"https://github.com/xinwangchen/edt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Mi853QaJx6 | @inproceedings{
cao2024on,
title={On the Worst Prompt Performance of Large Language Models},
author={Bowen Cao and Deng Cai and Zhisong Zhang and Yuexian Zou and Wai Lam},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Mi853QaJx6}
} | The performance of large language models (LLMs) is acutely sensitive to the phrasing of prompts, which raises significant concerns about their reliability in real-world scenarios. Existing studies often divide prompts into task-level instructions and case-level inputs and primarily focus on evaluating and improving robustness against variations in tasks-level instructions. However, this setup fails to fully address the diversity of real-world user queries and assumes the existence of task-specific datasets. To address these limitations, we introduce RobustAlpacaEval, a new benchmark that consists of semantically equivalent case-level queries and emphasizes the importance of using the worst prompt performance to gauge the lower bound of model performance. Extensive experiments on RobustAlpacaEval with ChatGPT and six open-source LLMs from the Llama, Mistral, and Gemma families uncover substantial variability in model performance; for instance, a difference of 45.48% between the worst and best performance for the Llama-2-70B-chat model, with its worst performance dipping as low as 9.38%. We further illustrate the difficulty in identifying the worst prompt from both model-agnostic and model-dependent perspectives, emphasizing the absence of a shortcut to characterize the worst prompt. We also attempt to enhance the worst prompt performance using existing prompt engineering and prompt consistency methods, but find that their impact is limited. These findings underscore the need to create more resilient LLMs that can maintain high performance across diverse prompts. | On the Worst Prompt Performance of Large Language Models | [
"Bowen Cao",
"Deng Cai",
"Zhisong Zhang",
"Yuexian Zou",
"Wai Lam"
] | NeurIPS.cc/2024/Conference | 2406.10248 | [
"https://github.com/bwcao/robustalpacaeval"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MhWaMOkoN3 | @inproceedings{
ghane2024universality,
title={Universality in Transfer Learning for Linear Models},
author={Reza Ghane and Danil Akhtiamov and Babak Hassibi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MhWaMOkoN3}
} | Transfer learning is an attractive framework for problems where there is a paucity of data, or where data collection is costly. One common approach to transfer learning is referred to as "model-based", and involves using a model that is pretrained on samples from a source distribution, which is easier to acquire, and then fine-tuning the model on a few samples from the target distribution. The hope is that, if the source and target distributions are "close", then the fine-tuned model will perform well on the target distribution even though it has seen only a few samples from it. In this work, we study the problem of transfer learning in linear models for both regression and binary classification. In particular, we consider the use of stochastic gradient descent (SGD) on a linear model initialized with pretrained weights and using a small training data set from the target distribution. In the asymptotic regime of large models, we provide an exact and rigorous analysis and relate the generalization errors (in regression) and classification errors (in binary classification) for the pretrained and fine-tuned models. In particular, we give conditions under which the fine-tuned model outperforms the pretrained one. An important aspect of our work is that all the results are "universal", in the sense that they depend only on the first and second order statistics of the target distribution. They thus extend well beyond the standard Gaussian assumptions commonly made in the literature. | Universality in Transfer Learning for Linear Models | [
"Reza Ghane",
"Danil Akhtiamov",
"Babak Hassibi"
] | NeurIPS.cc/2024/Conference | 2410.02164 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MfGRUVFtn9 | @inproceedings{
lin2024unveiling,
title={Unveiling and Mitigating Backdoor Vulnerabilities based on Unlearning Weight Changes and Backdoor Activeness},
author={Weilin Lin and Li Liu and Shaokui Wei and Jianze Li and Hui Xiong},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MfGRUVFtn9}
} | The security threat of backdoor attacks is a central concern for deep neural networks (DNNs). Recently, without poisoned data, unlearning models with clean data and then learning a pruning mask have contributed to backdoor defense. Additionally, vanilla fine-tuning with those clean data can help recover the lost clean accuracy. However, the behavior of clean unlearning is still under-explored, and vanilla fine-tuning unintentionally induces back the backdoor effect. In this work, we first investigate model unlearning from the perspective of weight changes and gradient norms, and find two interesting observations in the backdoored model: 1) the weight changes between poison and clean unlearning are positively correlated, making it possible for us to identify the backdoored-related neurons without using poisoned data; 2) the neurons of the backdoored model are more active (larger changes in gradient norm) than those in the clean model, suggesting the need to suppress the gradient norm during fine-tuning. Then, we propose an effective two-stage defense method. In the first stage, an efficient Neuron Weight Change (NWC)-based Backdoor Reinitialization is proposed based on observation 1). In the second stage, based on observation 2), we design an Activeness-Aware Fine-Tuning to replace the vanilla fine-tuning. Extensive experiments, involving eight backdoor attacks on three benchmark datasets, demonstrate the superior performance of our proposed method compared to recent state-of-the-art backdoor defense approaches. | Unveiling and Mitigating Backdoor Vulnerabilities based on Unlearning Weight Changes and Backdoor Activeness | [
"Weilin Lin",
"Li Liu",
"Shaokui Wei",
"Jianze Li",
"Hui Xiong"
] | NeurIPS.cc/2024/Conference | 2405.20291 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MelYGfpy4x | @inproceedings{
yang2024robust,
title={Robust group and simultaneous inferences for high-dimensional single index model},
author={Weichao Yang and Hongwei Shi and Xu Guo and Changliang Zou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MelYGfpy4x}
} | The high-dimensional single index model (SIM), which assumes that the response is independent of the predictors given a linear combination of predictors, has drawn attention due to its flexibility and interpretability, but its efficiency is adversely affected by outlying observations and heavy-tailed distributions. This paper introduces a robust procedure by recasting the SIM into a pseudo-linear model with transformed responses. It relaxes the distributional conditions on random errors from sub-Gaussian to more general distributions and thus it is robust with substantial efficiency gain for heavy-tailed random errors. Under this paradigm, we provide asymptotically honest group inference procedures based on the idea of orthogonalization, which enjoys the feature that it does not require the zero and nonzero coefficients to be well-separated. Asymptotic null distribution and bootstrap implementation are both established. Moreover, we develop a multiple testing procedure for determining if the individual coefficients are relevant simultaneously, and show that it is able to control the false discovery rate asymptotically. Numerical results indicate that the new procedures can be highly competitive among existing methods, especially for heavy-tailed errors. | Robust group and simultaneous inferences for high-dimensional single index model | [
"Weichao Yang",
"Hongwei Shi",
"Xu Guo",
"Changliang Zou"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MeCC0Is5hs | @inproceedings{
zhussip2024a,
title={A Modular Conditional Diffusion Framework for Image Reconstruction},
author={Magauiya Zhussip and Iaroslav Sergeevich Koshelev and Stamatios Lefkimmiatis},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MeCC0Is5hs}
} | Diffusion Probabilistic Models (DPMs) have been recently utilized to deal with various blind image restoration (IR) tasks, where they have demonstrated outstanding performance in terms of perceptual quality. However, the task-specific nature of existing solutions and the excessive computational costs related to their training, make such models impractical and challenging to use for different IR tasks than those that were initially trained for. This hinders their wider adoption especially by those who lack access to powerful computational resources and vast amounts of training data. In this work we aim to address the above issues and enable the successful adoption of DPMs in practical IR-related applications. Towards this goal, we propose a modular diffusion probabilistic IR framework (DP-IR), which allows us to combine the performance benefits of existing pre-trained state-of-the-art IR networks and generative DPMs, while it requires only the additional training of a small module (0.7M params) related to the particular IR task of interest. Moreover, the architecture of our proposed framework allows us to employ a sampling strategy that leads to at least four times reduction of neural function evaluations without any performance loss, while it can also be combined with existing acceleration techniques (e.g. DDIM). We evaluate our model on four benchmarks for the tasks of burst JDD-SR, dynamic scene deblurring, and super-resolution. Our method outperforms existing approaches in terms of perceptual quality while retaining a competitive performance in relation to fidelity metrics. | A Modular Conditional Diffusion Framework for Image Reconstruction | [
"Magauiya Zhussip",
"Iaroslav Sergeevich Koshelev",
"Stamatios Lefkimmiatis"
] | NeurIPS.cc/2024/Conference | 2411.05993 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Me5esZTRqW | @inproceedings{
xu2024covariate,
title={Covariate Shift Corrected Conditional Randomization Test},
author={Bowen Xu and Yiwen Huang and Chuan Hong and Shuangning Li and Molei Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Me5esZTRqW}
} | Conditional independence tests are crucial across various disciplines in determining the independence of an outcome variable $Y$ from a treatment variable $X$, conditioning on a set of confounders $Z$. The Conditional Randomization Test (CRT) offers a powerful framework for such testing by assuming known distributions of $X \mid Z$; it controls the Type-I error exactly, allowing for the use of flexible, black-box test statistics. In practice, testing for conditional independence often involves using data from a source population to draw conclusions about a target population. This can be challenging due to covariate shift---differences in the distribution of $X$, $Z$, and surrogate variables, which can affect the conditional distribution of $Y \mid X, Z$---rendering traditional CRT approaches invalid. To address this issue, we propose a novel Covariate Shift Corrected Pearson Chi-squared Conditional Randomization (csPCR) test. This test adapts to covariate shifts by integrating importance weights and employing the control variates method to reduce variance in the test statistics and thus enhance power. Theoretically, we establish that the csPCR test controls the Type-I error asymptotically. Empirically, through simulation studies, we demonstrate that our method not only maintains control over Type-I errors but also exhibits superior power, confirming its efficacy and practical utility in real-world scenarios where covariate shifts are prevalent. Finally, we apply our methodology to a real-world dataset to assess the impact of a COVID-19 treatment on the 90-day mortality rate among patients. | Covariate Shift Corrected Conditional Randomization Test | [
"Bowen Xu",
"Yiwen Huang",
"Chuan Hong",
"Shuangning Li",
"Molei Liu"
] | NeurIPS.cc/2024/Conference | 2405.19231 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MdmzAezNHq | @inproceedings{
lee2024differential,
title={Differential Privacy in Scalable General Kernel Learning via $K$-means Nystr{\"o}m Random Features},
author={Bonwoo Lee and Jeongyoun Ahn and Cheolwoo Park},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MdmzAezNHq}
} | As the volume of data invested in statistical learning increases and concerns regarding privacy grow, the privacy leakage issue has drawn significant attention. Differential privacy has emerged as a widely accepted concept capable of mitigating privacy concerns, and numerous differentially private (DP) versions of machine learning algorithms have been developed. However, existing works on DP kernel learning algorithms have exhibited practical limitations, including scalability, restricted choice of kernels, or dependence on test data availability. We propose DP scalable kernel empirical risk minimization (ERM) algorithms and a DP kernel mean embedding (KME) release algorithm suitable for general kernels. Our approaches address the shortcomings of previous algorithms by employing Nyström methods, classical techniques in non-private scalable kernel learning. These methods provide data-dependent low-rank approximations of the kernel matrix for general kernels in a DP manner. We present excess empirical risk bounds and computational complexities for the scalable kernel DP ERM, KME algorithms, contrasting them with established methodologies. Furthermore, we develop a private data-generating algorithm capable of learning diverse kernel models. We conduct experiments to demonstrate the performance of our algorithms, comparing them with existing methods to highlight their superiority. | Differential Privacy in Scalable General Kernel Learning via K-means Nyström Random Features | [
"Bonwoo Lee",
"Jeongyoun Ahn",
"Cheolwoo Park"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=McrzOo0hwr | @inproceedings{
lv2024theoretical,
title={Theoretical Investigations and Practical Enhancements on Tail Task Risk Minimization in Meta Learning},
author={Yiqin Lv and Cheems Wang and Dong Liang and Zheng Xie},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=McrzOo0hwr}
} | Meta learning is a promising paradigm in the era of large models, and task distributional robustness has become an indispensable consideration in real-world scenarios.
Recent advances have examined the effectiveness of tail task risk minimization in fast adaptation robustness improvement \citep{wang2023simple}.
This work contributes to more theoretical investigations and practical enhancements in the field.
Specifically, we reduce the distributionally robust strategy to a max-min optimization problem, constitute the Stackelberg equilibrium as the solution concept, and estimate the convergence rate.
In the presence of tail risk, we further derive the generalization bound, establish connections with estimated quantiles, and practically improve the studied strategy.
Accordingly, extensive evaluations demonstrate the significance of our proposal in boosting robustness. | Theoretical Investigations and Practical Enhancements on Tail Task Risk Minimization in Meta Learning | [
"Yiqin Lv",
"Cheems Wang",
"Dong Liang",
"Zheng Xie"
] | NeurIPS.cc/2024/Conference | 2410.22788 | [
"https://github.com/lvyiqin/DRMAML"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MbZuh8L0Xg | @inproceedings{
wei2024diffphycon,
title={DiffPhyCon: A Generative Approach to Control Complex Physical Systems},
author={Long Wei and Peiyan Hu and Ruiqi Feng and Haodong Feng and Yixuan Du and Tao Zhang and Rui Wang and Yue Wang and Zhi-Ming Ma and Tailin Wu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MbZuh8L0Xg}
} | Controlling the evolution of complex physical systems is a fundamental task across science and engineering.
Classical techniques suffer from limited applicability or huge computational costs. On the other hand, recent deep learning and reinforcement learning-based approaches often struggle to optimize long-term control sequences under the constraints of system dynamics. In this work, we introduce Diffusion Physical systems Control (DiffPhyCon), a new class of method to address the physical systems control problem. DiffPhyCon excels by simultaneously minimizing both the learned generative energy function and the predefined control objectives across the entire trajectory and control sequence. Thus, it can explore globally and plan near-optimal control sequences. Moreover, we enhance DiffPhyCon with prior reweighting, enabling the discovery of control sequences that significantly deviate from the training distribution. We test our method on three tasks: 1D Burgers' equation, 2D jellyfish movement control, and 2D high-dimensional smoke control, where our generated jellyfish dataset is released as a benchmark for complex physical system control research. Our method outperforms widely applied classical approaches and state-of-the-art deep learning and reinforcement learning methods. Notably, DiffPhyCon unveils an intriguing fast-close-slow-open pattern observed in the jellyfish, aligning with established findings in the field of fluid dynamics. The project website, jellyfish dataset, and code can be found at https://github.com/AI4Science-WestlakeU/diffphycon. | DiffPhyCon: A Generative Approach to Control Complex Physical Systems | [
"Long Wei",
"Peiyan Hu",
"Ruiqi Feng",
"Haodong Feng",
"Yixuan Du",
"Tao Zhang",
"Rui Wang",
"Yue Wang",
"Zhi-Ming Ma",
"Tailin Wu"
] | NeurIPS.cc/2024/Conference | 2407.06494 | [
"https://github.com/ai4science-westlakeu/diffphycon"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MbEB5aKmMK | @inproceedings{
wang2024online,
title={Online Composite Optimization Between Stochastic and Adversarial Environments},
author={Yibo Wang and Sijia Chen and Wei Jiang and Wenhao Yang and Yuanyu Wan and Lijun Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MbEB5aKmMK}
} | We study online composite optimization under the Stochastically Extended Adversarial (SEA) model. Specifically, each loss function consists of two parts: a fixed non-smooth and convex regularizer, and a time-varying function which can be chosen either stochastically, adversarially, or in a manner that interpolates between the two extremes. In this setting, we show that for smooth and convex time-varying functions, optimistic composite mirror descent (OptCMD) can obtain an $\mathcal{O}(\sqrt{\sigma_{1:T}^2} + \sqrt{\Sigma_{1:T}^2})$ regret bound, where $\sigma_{1:T}^2$ and $\Sigma_{1:T}^2$ denote the cumulative stochastic variance and the cumulative adversarial variation of time-varying functions, respectively. For smooth and strongly convex time-varying functions, we establish an $\mathcal{O}((\sigma_{\max}^2 + \Sigma_{\max}^2)\log(\sigma_{1:T}^2 + \Sigma_{1:T}^2))$ regret bound, where $\sigma_{\max}^2$ and $\Sigma_{\max}^2$ denote the maximal stochastic variance and the maximal adversarial variation, respectively. For smooth and exp-concave time-varying functions, we achieve an $\mathcal{O}(d \log (\sigma_{1:T}^2 + \Sigma_{1:T}^2))$ bound where $d$ denotes the dimensionality. Moreover, to deal with the unknown function type in practical problems, we propose a multi-level \textit{universal} algorithm that is able to achieve the desirable bounds for three types of time-varying functions simultaneously. It should be noticed that all our findings match existing bounds for the SEA model without the regularizer, which implies that there is \textit{no price} in regret bounds for the benefits gained from the regularizer. | Online Composite Optimization Between Stochastic and Adversarial Environments | [
"Yibo Wang",
"Sijia Chen",
"Wei Jiang",
"Wenhao Yang",
"Yuanyu Wan",
"Lijun Zhang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MaDykgj4Ru | @inproceedings{
wang2024blob,
title={{BL}oB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models},
author={Yibin Wang and Haizhou Shi and Ligong Han and Dimitris N. Metaxas and Hao Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MaDykgj4Ru}
} | Large Language Models (LLMs) often suffer from overconfidence during inference, particularly when adapted to downstream domain-specific tasks with limited data. Previous work addresses this issue by employing approximate Bayesian estimation after the LLMs are trained, enabling them to quantify uncertainty. However, such post-training approaches' performance is severely limited by the parameters learned during training. In this paper, we go beyond post-training Bayesianization and propose Bayesian Low-Rank Adaptation by Backpropagation (BLoB), an algorithm that continuously and jointly adjusts both the mean and covariance of LLM parameters throughout the whole fine-tuning process. Our empirical results verify the effectiveness of BLoB in terms of generalization and uncertainty estimation, when evaluated on both in-distribution and out-of-distribution data. | BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models | [
"Yibin Wang",
"Haizhou Shi",
"Ligong Han",
"Dimitris N. Metaxas",
"Hao Wang"
] | NeurIPS.cc/2024/Conference | 2406.11675 | [
"https://github.com/wang-ml-lab/bayesian-peft"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Ma0993KZlq | @inproceedings{
kontonis2024active,
title={Active Classification with Few Queries under Misspecification},
author={Vasilis Kontonis and Mingchen Ma and Christos Tzamos},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Ma0993KZlq}
} | We study pool-based active learning, where a learner has a large pool $S$ of unlabeled examples and can adaptively ask a labeler questions to learn these labels. The goal of the learner is to output a labeling for $S$ that can compete with the best hypothesis from a given hypothesis class $\mathcal{H}$. We focus on halfspace learning, one of the most important problems in active learning.
It is well known that in the standard active learning model, learning the labels of an arbitrary pool of examples labeled by some halfspace up to error $\epsilon$ requires at least $\Omega(1/\epsilon)$ queries. To overcome this difficulty, previous work designs simple but powerful query languages to achieve $O(\log(1/\epsilon))$ query complexity, but only focuses on the realizable setting where data are perfectly labeled by some halfspace.
However, when labels are noisy, such queries are too fragile and lead to high query complexity even under the simple random classification noise model.
In this work, we propose a new query language called threshold statistical queries and study their power for learning under various noise models. Our main algorithmic result is the first query-efficient algorithm for learning halfspaces under the popular Massart noise model. With an arbitrary dataset corrupted with Massart noise at noise rate $\eta$, our algorithm uses only $\mathrm{polylog(1/\epsilon)}$ threshold statistical queries and computes an $(\eta + \epsilon)$-accurate labeling in polynomial time. For the harder case of agnostic noise, we show that it is impossible to beat $O(1/\epsilon)$ query complexity even for the much simpler problem of learning singleton functions (and thus for learning halfspaces) using a reduction from agnostic distributed learning. | Active Classification with Few Queries under Misspecification | [
"Vasilis Kontonis",
"Mingchen Ma",
"Christos Tzamos"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=MZ47wPr6C3 | @inproceedings{
li2024on,
title={On Sparse Canonical Correlation Analysis},
author={Yongchun Li and Santanu Dey and Weijun Xie},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MZ47wPr6C3}
} | The classical Canonical Correlation Analysis (CCA) identifies the correlations between two sets of multivariate variables based on their covariance, which has been widely applied in diverse fields such as computer vision, natural language processing, and speech analysis. Despite its popularity, CCA can encounter challenges in explaining correlations between two variable sets within high-dimensional data contexts. Thus, this paper studies Sparse Canonical Correlation Analysis (SCCA) that enhances the interpretability of CCA. We first show that SCCA generalizes three well-known sparse optimization problems, sparse PCA, sparse SVD, and sparse regression, which are all classified as NP-hard problems. This result motivates us to develop strong formulations and efficient algorithms. Our main contributions include (i) the introduction of a combinatorial formulation that captures the essence of SCCA and allows the development of exact and approximation algorithms; (ii) the establishment of the complexity results for two low-rank special cases of SCCA; and (iii) the derivation of an equivalent mixed-integer semidefinite programming model that facilitates a specialized branch-and-cut algorithm with analytical cuts. The effectiveness of our proposed formulations and algorithms is validated through numerical experiments. | On Sparse Canonical Correlation Analysis | [
"Yongchun Li",
"Santanu Dey",
"Weijun Xie"
] | NeurIPS.cc/2024/Conference | 2401.00308 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MYI443zCvv | @inproceedings{
park2024deprune,
title={{DEP}rune: Depth-wise Separable Convolution Pruning for Maximizing {GPU} Parallelism},
author={Cheonjun Park and Mincheol Park and Hyunchan Moon and Myung Kuk Yoon and Seokjin Go and Suhyun Kim and Won Woo Ro},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MYI443zCvv}
} | Depth-wise Separable Convolution (DSConv) has a powerful representation even with fewer parameters and computation, leading to its adoption by almost all of the state-of-the-art CNN models.
DSConv models are already compact, making it hard to apply pruning, and there are few previous pruning techniques that target depth-wise convolution (DW-conv).
In this paper, we present Depth-wise Separable Convolution Pruning (DEPrune), a novel pruning method applied to both point-wise and depth-wise convolutions.
DEPrune is optimized by analyzing the computation of DSConv on GPUs.
DEPrune employs a fine-grained pruning approach, yet it achieves the structured sparsity typically absent in fine-grained pruning, enabling practical hardware acceleration.
Moreover, this method maintains a high pruning ratio without causing any accuracy drop.
We additionally present techniques that further enhance DEPrune performance: 1) balanced workload tuning (BWT), and 2) hardware-aware sparsity recalibration (HSR).
Experiment results show that DEPrune achieves up to $3.74\times$ practical speedup in DSConv inference on GPUs while maintaining the accuracy of EfficientNet-B0 on ImageNet. | DEPrune: Depth-wise Separable Convolution Pruning for Maximizing GPU Parallelism | [
"Cheonjun Park",
"Mincheol Park",
"Hyunchan Moon",
"Myung Kuk Yoon",
"Seokjin Go",
"Suhyun Kim",
"Won Woo Ro"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MXzr10iX2d | @inproceedings{
fu2024topologic,
title={TopoLogic: An Interpretable Pipeline for Lane Topology Reasoning on Driving Scenes},
author={Yanping Fu and Wenbin Liao and Xinyuan Liu and Hang Xu and Yike Ma and Yucheng Zhang and Feng Dai},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MXzr10iX2d}
} | As an emerging task that integrates perception and reasoning, topology reasoning in autonomous driving scenes has recently garnered widespread attention. However, existing work often emphasizes "perception over reasoning": they typically boost reasoning performance by enhancing the perception of lanes and directly adopt vanilla MLPs to learn lane topology from lane query. This paradigm overlooks the geometric features intrinsic to the lanes themselves and is prone to being influenced by inherent endpoint shifts in lane detection.
To tackle this issue, we propose an interpretable method for lane topology reasoning based on lane geometric distance and lane query similarity, named TopoLogic. This method mitigates the impact of endpoint shifts in geometric space, and introduces explicit similarity calculation in semantic space as a complement. By integrating results from both spaces, our method provides more comprehensive information for lane topology. Ultimately, our approach significantly outperforms the existing state-of-the-art methods on the mainstream benchmark OpenLane-V2 (23.9 vs. 10.9 in TOP$_{ll}$ and 44.1 vs. 39.8 in OLS on subsetA). Additionally, our proposed geometric distance topology reasoning method can be incorporated into well-trained models without re-training, significantly enhancing the performance of lane topology reasoning. The code is released at https://github.com/Franpin/TopoLogic. | TopoLogic: An Interpretable Pipeline for Lane Topology Reasoning on Driving Scenes | [
"Yanping Fu",
"Wenbin Liao",
"Xinyuan Liu",
"Hang Xu",
"Yike Ma",
"Yucheng Zhang",
"Feng Dai"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MXze4H7opg | @inproceedings{
han2024sltrain,
title={{SLT}rain: a sparse plus low rank approach for parameter and memory efficient pretraining},
author={Andi Han and Jiaxiang Li and Wei Huang and Mingyi Hong and Akiko Takeda and Pratik Jawanpuria and Bamdev Mishra},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MXze4H7opg}
} | Large language models (LLMs) have shown impressive capabilities across various tasks. However, training LLMs from scratch requires significant computational power and extensive memory capacity. Recent studies have explored low-rank structures on weights for efficient fine-tuning in terms of parameters and memory, either through low-rank adaptation or factorization. While effective for fine-tuning, low-rank structures are generally less suitable for pretraining because they restrict parameters to a low-dimensional subspace. In this work, we propose to parameterize the weights as a sum of low-rank and sparse matrices for pretraining, which we call SLTrain. The low-rank component is learned via matrix factorization, while for the sparse component, we employ a simple strategy of uniformly selecting the sparsity support at random and learning only the non-zero entries with the fixed support. While being simple, the random fixed-support sparse learning strategy significantly enhances pretraining when combined with low-rank learning. Our results show that SLTrain adds minimal extra parameters and memory costs compared to pretraining with low-rank parameterization, yet achieves substantially better performance, which is comparable to full-rank training. Remarkably, when combined with quantization and per-layer updates, SLTrain can reduce memory requirements by up to 73% when pretraining the LLaMA 7B model. | SLTrain: a sparse plus low rank approach for parameter and memory efficient pretraining | [
"Andi Han",
"Jiaxiang Li",
"Wei Huang",
"Mingyi Hong",
"Akiko Takeda",
"Pratik Jawanpuria",
"Bamdev Mishra"
] | NeurIPS.cc/2024/Conference | [
"https://github.com/andyjm3/SLTrain"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MXY0qsGgeO | @inproceedings{
eyring2024reno,
title={Re{NO}: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization},
author={Luca Eyring and Shyamgopal Karthik and Karsten Roth and Alexey Dosovitskiy and Zeynep Akata},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MXY0qsGgeO}
} | Text-to-Image (T2I) models have made significant advancements in recent years, but they still struggle to accurately capture intricate details specified in complex compositional prompts. While fine-tuning T2I models with reward objectives has shown promise, it suffers from "reward hacking" and may not generalize well to unseen prompt distributions. In this work, we propose Reward-based Noise Optimization (ReNO), a novel approach that enhances T2I models at inference by optimizing the initial noise based on the signal from one or multiple human preference reward models. Remarkably, solving this optimization problem with gradient ascent for 50 iterations yields impressive results on four different one-step models across two competitive benchmarks, T2I-CompBench and GenEval. Within a computational budget of 20-50 seconds, ReNO-enhanced one-step models consistently surpass the performance of all current open-source Text-to-Image models. Extensive user studies demonstrate that our model is preferred nearly twice as often compared to the popular SDXL model and is on par with the proprietary Stable Diffusion 3 with 8B parameters. Moreover, given the same computational resources, a ReNO-optimized one-step model outperforms widely-used open-source models such as SDXL and PixArt-alpha, highlighting the efficiency and effectiveness of ReNO in enhancing T2I model performance at inference time. | ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization | [
"Luca Eyring",
"Shyamgopal Karthik",
"Karsten Roth",
"Alexey Dosovitskiy",
"Zeynep Akata"
] | NeurIPS.cc/2024/Conference | 2406.04312 | [
"https://github.com/explainableml/reno"
] | https://huggingface.co/papers/2406.04312 | 1 | 1 | 0 | 5 | [] | [] | [
"fffiloni/ReNO"
] | [] | [] | [
"fffiloni/ReNO"
] | 1 | poster |
null | https://openreview.net/forum?id=MXRO5kukST | @inproceedings{
hong2024sand,
title={{SAND}: Smooth imputation of sparse and noisy functional data with Transformer networks},
author={Ju-Sheng Hong and Junwen Yao and Jonas Mueller and Jane-Ling Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MXRO5kukST}
} | Although the transformer architecture has come to dominate other models for text and image data, its application to irregularly-spaced longitudinal data has been limited. We introduce a variant of the transformer that enables it to more smoothly impute such functional data. We augment the vanilla transformer with a simple module we call SAND (self-attention on derivatives), which naturally encourages smoothness by modeling the sub-derivative of the imputed curve. On the theoretical front, we prove the number of hidden nodes required by a network with SAND to achieve an $\epsilon$ prediction error bound for functional imputation. Extensive experiments over various types of functional data demonstrate that transformers with SAND produce better imputations than both their standard counterparts as well as transformers augmented with alternative approaches to encode the inductive bias of smoothness. SAND also outperforms standard statistical methods for functional imputation like kernel smoothing and PACE. | SAND: Smooth imputation of sparse and noisy functional data with Transformer networks | [
"Ju-Sheng Hong",
"Junwen Yao",
"Jonas Mueller",
"Jane-Ling Wang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MXOzgjlWDF | @inproceedings{
sehanobish2024structured,
title={Structured Unrestricted-Rank Matrices for Parameter Efficient Finetuning},
author={Arijit Sehanobish and Kumar Avinava Dubey and Krzysztof Marcin Choromanski and Somnath Basu Roy Chowdhury and Deepali Jain and Vikas Sindhwani and Snigdha Chaturvedi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MXOzgjlWDF}
} | Recent efforts to scale Transformer models have demonstrated rapid progress across a wide range of tasks (Wei et al. 2022). However, fine-tuning these models for downstream tasks is quite expensive due to their large parameter counts. Parameter-efficient fine-tuning (PEFT) approaches have emerged as a viable alternative, allowing us to fine-tune models by updating only a small number of parameters.
In this work, we propose a general framework for parameter efficient fine-tuning (PEFT), based on *structured unrestricted-rank matrices* (SURM), which can serve as a drop-in replacement for popular approaches such as Adapters and LoRA. Unlike other methods like LoRA, SURMs give us more flexibility in finding the right balance between compactness and expressiveness. This is achieved by using *low displacement rank matrices* (LDRMs), which have not been used in this context before. SURMs remain competitive with baselines, often providing significant quality improvements while using a smaller parameter budget. SURMs achieve **5**-**7**% accuracy gains on various image classification tasks while replacing low-rank matrices in LoRA, and up to a **12x** reduction of the number of parameters in adapters (with virtually no loss in quality) on the GLUE benchmark. | Structured Unrestricted-Rank Matrices for Parameter Efficient Finetuning | [
"Arijit Sehanobish",
"Kumar Avinava Dubey",
"Krzysztof Marcin Choromanski",
"Somnath Basu Roy Chowdhury",
"Deepali Jain",
"Vikas Sindhwani",
"Snigdha Chaturvedi"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MWHRxKz4mq | @inproceedings{
yao2024marrying,
title={Marrying Causal Representation Learning with Dynamical Systems for Science},
author={Dingling Yao and Caroline Muller and Francesco Locatello},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MWHRxKz4mq}
} | Causal representation learning promises to extend causal models to hidden causal variables from raw entangled measurements. However, most progress has focused on proving identifiability results in different settings, and we are not aware of any successful real-world application. At the same time, the field of dynamical systems benefited from deep learning and scaled to countless applications but does not allow parameter identification. In this paper, we draw a clear connection between the two and their key assumptions, allowing us to apply identifiable methods developed in causal representation learning to dynamical systems. At the same time, we can leverage scalable differentiable solvers developed for differential equations to build models that are both identifiable and practical. Overall, we learn explicitly controllable models that isolate the trajectory-specific parameters for further downstream tasks such as out-of-distribution classification or treatment effect estimation. We experiment with a wind simulator with partially known factors of variation. We also apply the resulting model to real-world climate data and successfully answer downstream causal questions in line with existing literature on climate change. | Marrying Causal Representation Learning with Dynamical Systems for Science | [
"Dingling Yao",
"Caroline Muller",
"Francesco Locatello"
] | NeurIPS.cc/2024/Conference | 2405.13888 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MU27zjHBcW | @inproceedings{
wang2024deplm,
title={De{PLM}: Denoising Protein Language Models for Property Optimization},
author={Zeyuan Wang and Keyan Ding and Ming Qin and Xiaotong Li and Xiang Zhuang and Yu Zhao and Jianhua Yao and Qiang Zhang and Huajun Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MU27zjHBcW}
} | Protein optimization is a fundamental biological task aimed at enhancing the performance of proteins by modifying their sequences. Computational methods primarily rely on evolutionary information (EI) encoded by protein language models (PLMs) to predict fitness landscape for optimization. However, these methods suffer from a few limitations. (1) Evolutionary processes involve the simultaneous consideration of multiple functional properties, often overshadowing the specific property of interest. (2) Measurements of these properties tend to be tailored to experimental conditions, leading to reduced generalizability of trained models to novel proteins. To address these limitations, we introduce Denoising Protein Language Models (DePLM), a novel approach that refines the evolutionary information embodied in PLMs for improved protein optimization. Specifically, we conceptualize EI as comprising both property-relevant and irrelevant information, with the latter acting as “noise” for the optimization task at hand. Our approach involves denoising this EI in PLMs through a diffusion process conducted in the rank space of property values, thereby enhancing model generalization and ensuring dataset-agnostic learning. Extensive experimental results have demonstrated that DePLM not only surpasses the state-of-the-art in mutation effect prediction but also exhibits strong generalization capabilities for novel proteins. | DePLM: Denoising Protein Language Models for Property Optimization | [
"Zeyuan Wang",
"Keyan Ding",
"Ming Qin",
"Xiaotong Li",
"Xiang Zhuang",
"Yu Zhao",
"Jianhua Yao",
"Qiang Zhang",
"Huajun Chen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MTMShU5QaC | @inproceedings{
li2024aligning,
title={Aligning Diffusion Models by Optimizing Human Utility},
author={Shufan Li and Konstantinos Kallidromitis and Akash Gokul and Yusuke Kato and Kazuki Kozuka},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MTMShU5QaC}
} | We present Diffusion-KTO, a novel approach for aligning text-to-image diffusion models by formulating the alignment objective as the maximization of expected human utility. Unlike previous methods, Diffusion-KTO does not require collecting pairwise preference data nor training a complex reward model. Instead, our objective uses per-image binary feedback signals, e.g. likes or dislikes, to align the model with human preferences. After fine-tuning using Diffusion-KTO, text-to-image diffusion models exhibit improved performance compared to existing techniques, including supervised fine-tuning and Diffusion-DPO, both in terms of human judgment and automatic evaluation metrics such as PickScore and ImageReward. Overall, Diffusion-KTO unlocks the potential of leveraging readily available per-image binary preference signals and broadens the applicability of aligning text-to-image diffusion models with human preferences. | Aligning Diffusion Models by Optimizing Human Utility | [
"Shufan Li",
"Konstantinos Kallidromitis",
"Akash Gokul",
"Yusuke Kato",
"Kazuki Kozuka"
] | NeurIPS.cc/2024/Conference | 2404.04465 | [
"https://github.com/jacklishufan/diffusion-kto"
] | https://huggingface.co/papers/2404.04465 | 0 | 13 | 1 | 5 | [
"jacklishufan/diffusion-kto"
] | [] | [] | [
"jacklishufan/diffusion-kto"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=MSsQDWUWpd | @inproceedings{
wang2024analysis,
title={Analysis of Corrected Graph Convolutions},
author={Robert Wang and Aseem Baranwal and Kimon Fountoulakis},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MSsQDWUWpd}
} | Machine learning for node classification on graphs is a prominent area driven by applications such as recommendation systems. State-of-the-art models often use multiple graph convolutions on the data, as empirical evidence suggests they can enhance performance. However, it has been shown, empirically and theoretically, that too many graph convolutions can degrade performance significantly, a phenomenon known as oversmoothing. In this paper, we provide a rigorous theoretical analysis, based on the two-class contextual stochastic block model (CSBM), of the performance of vanilla graph convolution from which we remove the principal eigenvector to avoid oversmoothing. We perform a spectral analysis for $k$ rounds of corrected graph convolutions, and we provide results for partial and exact classification. For partial classification, we show that each round of convolution can reduce the misclassification error exponentially up to a saturation level, after which performance does not worsen. We also extend this analysis to the multi-class setting with features distributed according to a Gaussian mixture model. For exact classification, we show that the separability threshold can be improved exponentially up to $O({\log{n}}/{\log\log{n}})$ corrected convolutions. | Analysis of Corrected Graph Convolutions | [
"Robert Wang",
"Aseem Baranwal",
"Kimon Fountoulakis"
] | NeurIPS.cc/2024/Conference | 2405.13987 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MSSRhxwZP7 | @inproceedings{
shan2024learning,
title={Learning Disentangled Representations for Perceptual Point Cloud Quality Assessment via Mutual Information Minimization},
author={Ziyu Shan and Yujie Zhang and Yipeng Liu and Yiling Xu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MSSRhxwZP7}
} | No-Reference Point Cloud Quality Assessment (NR-PCQA) aims to objectively assess the human perceptual quality of point clouds without relying on pristine-quality point clouds for reference. It is becoming increasingly significant with the rapid advancement of immersive media applications such as virtual reality (VR) and augmented reality (AR). However, current NR-PCQA models attempt to indiscriminately learn point cloud content and distortion representations within a single network, overlooking their distinct contributions to quality information. To address this issue, we propose DisPA, a novel disentangled representation learning framework for NR-PCQA. The framework trains a dual-branch disentanglement network to minimize mutual information (MI) between representations of point cloud content and distortion. Specifically, to fully disentangle representations, the two branches adopt different philosophies: the content-aware encoder is pretrained by a masked auto-encoding strategy, which can allow the encoder to capture semantic information from rendered images of distorted point clouds; the distortion-aware encoder takes a mini-patch map as input, which forces the encoder to focus on low-level distortion patterns. Furthermore, we utilize an MI estimator to estimate the tight upper bound of the actual MI and further minimize it to achieve explicit representation disentanglement. Extensive experimental results demonstrate that DisPA outperforms state-of-the-art methods on multiple PCQA datasets. | Learning Disentangled Representations for Perceptual Point Cloud Quality Assessment via Mutual Information Minimization | [
"Ziyu Shan",
"Yujie Zhang",
"Yipeng Liu",
"Yiling Xu"
] | NeurIPS.cc/2024/Conference | 2411.07936 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MRO2QhydPF | @inproceedings{
tian2024reinforcement,
title={Reinforcement Learning with Adaptive Regularization for Safe Control of Critical Systems},
author={Haozhe Tian and Homayoun Hamedmoghadam and Robert Noel Shorten and Pietro Ferraro},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MRO2QhydPF}
} | Reinforcement Learning (RL) is a powerful method for controlling dynamic systems, but its learning mechanism can lead to unpredictable actions that undermine the safety of critical systems. Here, we propose RL with Adaptive Regularization (RL-AR), an algorithm that enables safe RL exploration by combining the RL policy with a policy regularizer that hard-codes the safety constraints. RL-AR performs policy combination via a "focus module," which determines the appropriate combination depending on the state—relying more on the safe policy regularizer for less-exploited states while allowing unbiased convergence for well-exploited states. In a series of critical control applications, we demonstrate that RL-AR not only ensures safety during training but also achieves a return competitive with the standards of model-free RL that disregards safety. | Reinforcement Learning with Adaptive Regularization for Safe Control of Critical Systems | [
"Haozhe Tian",
"Homayoun Hamedmoghadam",
"Robert Noel Shorten",
"Pietro Ferraro"
] | NeurIPS.cc/2024/Conference | 2404.15199 | [
"https://github.com/haozhetian/rl-ar"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MQIET1VfoV | @inproceedings{
mcclellan2024boosting,
title={Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance},
author={Joshua McClellan and Naveed Haghani and John Winder and Furong Huang and Pratap Tokekar},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MQIET1VfoV}
} | Multi-Agent Reinforcement Learning (MARL) struggles with sample inefficiency and poor generalization [1]. These challenges are partially due to a lack of structure or inductive bias in the neural networks typically used in learning the policy. One such form of structure that is commonly observed in multi-agent scenarios is symmetry. The field of Geometric Deep Learning has developed Equivariant Graph Neural Networks (EGNN) that are equivariant (or symmetric) to rotations, translations, and reflections of nodes. Incorporating equivariance has been shown to improve learning efficiency and decrease error [2]. In this paper, we demonstrate that EGNNs improve the sample efficiency and generalization in MARL. However, we also show that a naive application of EGNNs to MARL results in poor early exploration due to a bias in the EGNN structure. To mitigate this bias, we present Exploration-enhanced Equivariant Graph Neural Networks or E2GN2. We compare E2GN2 to other common function approximators using common MARL benchmarks MPE and SMACv2. E2GN2 demonstrates a significant improvement in sample efficiency, greater final reward convergence, and a 2x-5x gain over standard GNNs in our generalization tests. These results pave the way for more reliable and effective solutions in complex multi-agent systems. | Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance | [
"Joshua McClellan",
"Naveed Haghani",
"John Winder",
"Furong Huang",
"Pratap Tokekar"
] | NeurIPS.cc/2024/Conference | 2410.02581 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MPidsCd9e7 | @inproceedings{
woodruff2024adversarially,
title={Adversarially Robust Dense-Sparse Tradeoffs via Heavy-Hitters},
author={David Woodruff and Samson Zhou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MPidsCd9e7}
} | In the adversarial streaming model, the input is a sequence of adaptive updates that defines an underlying dataset and the goal is to approximate, collect, or compute some statistic while using space sublinear in the size of the dataset. In 2022, Ben-Eliezer, Eden, and Onak showed a dense-sparse trade-off technique that elegantly combined sparse recovery with known techniques using differential privacy and sketch switching to achieve adversarially robust algorithms for $L_p$ estimation and other algorithms on turnstile streams. However, there has been no progress since, either in terms of achievability or impossibility. In this work, we first give improved algorithms for adversarially robust $L_p$-heavy hitters, utilizing deterministic turnstile heavy-hitter algorithms with better tradeoffs. We then utilize our heavy-hitter algorithm to reduce the problem to estimating the frequency moment of the tail vector. We give a new algorithm for this problem in the classical streaming setting, which achieves additive error and uses space independent in the size of the tail. We then leverage these ingredients to give an improved algorithm for adversarially robust $L_p$ estimation on turnstile streams. We believe that our results serve as an important conceptual message, demonstrating that there is no inherent barrier at the previous state-of-the-art. | Adversarially Robust Dense-Sparse Tradeoffs via Heavy-Hitters | [
"David Woodruff",
"Samson Zhou"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MPJ3oXtTZl | @inproceedings{
he2024gretriever,
title={G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering},
author={Xiaoxin He and Yijun Tian and Yifei Sun and Nitesh V Chawla and Thomas Laurent and Yann LeCun and Xavier Bresson and Bryan Hooi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MPJ3oXtTZl}
} | Given a graph with textual attributes, we enable users to `chat with their graph': that is, to ask questions about the graph using a conversational interface. In response to a user's questions, our method provides textual replies and highlights the relevant parts of the graph. While existing works integrate large language models (LLMs) and graph neural networks (GNNs) in various ways, they mostly focus on either conventional graph tasks (such as node, edge, and graph classification), or on answering simple graph queries on small or synthetic graphs. In contrast, we develop a flexible question-answering framework targeting real-world textual graphs, applicable to multiple applications including scene graph understanding, common sense reasoning, and knowledge graph reasoning. Toward this goal, we first develop a Graph Question Answering (GraphQA) benchmark with data collected from different tasks. Then, we propose our \textit{G-Retriever} method, introducing the first retrieval-augmented generation (RAG) approach for general textual graphs, which can be fine-tuned to enhance graph understanding via soft prompting. To resist hallucination and to allow for textual graphs that greatly exceed the LLM's context window size, \textit{G-Retriever} performs RAG over a graph by formulating this task as a Prize-Collecting Steiner Tree optimization problem. Empirical evaluations show that our method outperforms baselines on textual graph tasks from multiple domains, scales well with larger graph sizes, and mitigates hallucination.~\footnote{Our codes and datasets are available at: \url{https://github.com/XiaoxinHe/G-Retriever}} | G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering | [
"Xiaoxin He",
"Yijun Tian",
"Yifei Sun",
"Nitesh V Chawla",
"Thomas Laurent",
"Yann LeCun",
"Xavier Bresson",
"Bryan Hooi"
] | NeurIPS.cc/2024/Conference | 2402.07630 | [
"https://github.com/xiaoxinhe/g-retriever"
] | https://huggingface.co/papers/2402.07630 | 0 | 1 | 0 | 8 | [
"alfiannajih/g-retriever-resume-reviewer"
] | [] | [] | [
"alfiannajih/g-retriever-resume-reviewer"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=MP7j58lbWO | @inproceedings{
ding2024probing,
title={Probing Social Bias in Labor Market Text Generation by Chat{GPT}: A Masked Language Model Approach},
author={Lei Ding and Yang Hu and Nicole Denier and Enze Shi and Junxi Zhang and Qirui Hu and Karen D. Hughes and Linglong Kong and Bei Jiang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=MP7j58lbWO}
} | As generative large language models (LLMs) such as ChatGPT gain widespread adoption in various domains, their potential to propagate and amplify social biases, particularly in high-stakes areas such as the labor market, has become a pressing concern. AI algorithms are not only widely used in the selection of job applicants, individual job seekers may also make use of generative LLMs to help develop their job application materials. Against this backdrop, this research builds on a novel experimental design to examine social biases within ChatGPT-generated job applications in response to real job advertisements. By simulating the process of job application creation, we examine the language patterns and biases that emerge when the model is prompted with diverse job postings. Notably, we present a novel bias evaluation framework based on Masked Language Models to quantitatively assess social bias based on validated inventories of social cues/words, enabling a systematic analysis of the language used. Our findings show that the increasing adoption of generative AI, not only by employers but also increasingly by individual job seekers, can reinforce and exacerbate gender and social inequalities in the labor market through the use of biased and gendered language. | Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach | [
"Lei Ding",
"Yang Hu",
"Nicole Denier",
"Enze Shi",
"Junxi Zhang",
"Qirui Hu",
"Karen D. Hughes",
"Linglong Kong",
"Bei Jiang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |