Column schema of the paper table below:

Column | Type | Observed values
---|---|---
bibtex_url | null | all null
proceedings | string | 42 characters
bibtext | string | 197–848 characters
abstract | string | 303–3.45k characters
title | string | 10–159 characters
authors | sequence | 1–34 entries (some rows null)
id | string | 44 distinct values
arxiv_id | string | 0–10 characters
GitHub | sequence | 1 entry
paper_page | string | 899 distinct values
n_linked_authors | int64 | -1 to 13
upvotes | int64 | -1 to 109
num_comments | int64 | -1 to 13
n_authors | int64 | -1 to 92
Models | sequence | 0–100 entries
Datasets | sequence | 0–19 entries
Spaces | sequence | 0–100 entries
old_Models | sequence | 0–100 entries
old_Datasets | sequence | 0–19 entries
old_Spaces | sequence | 0–100 entries
paper_page_exists_pre_conf | int64 | 0 or 1
type | string | 2 distinct values
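The columns above describe a flat index of NeurIPS 2024 conference papers: one row per paper, with its OpenReview link, BibTeX entry, abstract, authors, arXiv id, GitHub link, and any linked Hugging Face paper page, models, datasets, and Spaces. This page reads like a Hugging Face dataset-viewer export, so below is a minimal sketch of how the table could be loaded and queried with the `datasets` library, assuming the underlying table is published as a dataset on the Hub; the repository id and split name are placeholders, not taken from this page.

```python
# Minimal sketch for loading and querying the paper index above.
# "your-org/neurips-2024-papers" and the "train" split are placeholders;
# substitute the actual repository id, or load the exported parquet/CSV instead.
from datasets import load_dataset

ds = load_dataset("your-org/neurips-2024-papers", split="train")
print(ds.column_names)  # should match the header row of the table below

# Rows without a Hugging Face paper page appear to use -1 as a sentinel
# in the count columns (n_linked_authors, upvotes, num_comments, n_authors).
with_page = ds.filter(lambda row: row["paper_page_exists_pre_conf"] == 1)
print(f"{len(with_page)} of {len(ds)} papers had a paper page before the conference")

# Papers that also link a GitHub repository (empty strings mean no link).
with_code = ds.filter(lambda row: any(url for url in row["GitHub"]))
print(f"{len(with_code)} papers link a GitHub repository")
```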
bibtex_url | proceedings | bibtext | abstract | title | authors | id | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=CiEynTpF28 | @inproceedings{
sun2024distributional,
title={Distributional Reinforcement Learning with Regularized Wasserstein Loss},
author={Ke Sun and Yingnan Zhao and Wulong Liu and Bei Jiang and Linglong Kong},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CiEynTpF28}
} | The empirical success of distributional reinforcement learning (RL) highly relies on the choice of distribution divergence equipped with an appropriate distribution representation. In this paper, we propose \textit{Sinkhorn distributional RL (SinkhornDRL)}, which leverages Sinkhorn divergence—a regularized Wasserstein loss—to minimize the difference between current and target Bellman return distributions. Theoretically, we prove the contraction properties of SinkhornDRL, aligning with the interpolation nature of Sinkhorn divergence between Wasserstein distance and Maximum Mean Discrepancy (MMD). The introduced SinkhornDRL enriches the family of distributional RL algorithms, contributing to interpreting the algorithm behaviors compared with existing approaches by our investigation into their relationships. Empirically, we show that SinkhornDRL consistently outperforms or matches existing algorithms on the Atari games suite and particularly stands out in the multi-dimensional reward setting. \thanks{Code is available in \url{https://github.com/datake/SinkhornDistRL}.}. | Distributional Reinforcement Learning with Regularized Wasserstein Loss | [
"Ke Sun",
"Yingnan Zhao",
"Wulong Liu",
"Bei Jiang",
"Linglong Kong"
] | NeurIPS.cc/2024/Conference | 2202.00769 | [
"https://github.com/datake/sinkhorndistrl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Ci7II4CPwm | @inproceedings{
elahi2024fast,
title={Fast Proxy Experiment Design for Causal Effect Identification},
author={Sepehr Elahi and Sina Akbari and Jalal Etesami and Negar Kiyavash and Patrick Thiran},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Ci7II4CPwm}
} | Identifying causal effects is a key problem of interest across many disciplines. The two long-standing approaches to estimate causal effects are observational and experimental (randomized) studies. Observational studies can suffer from unmeasured confounding, which may render the causal effects unidentifiable. On the other hand, direct experiments on the target variable may be too costly or even infeasible to conduct. A middle ground between these two approaches is to estimate the causal effect of interest through proxy experiments, which are conducted on variables with a lower cost to intervene on compared to the main target. In an earlier work, we studied this setting and demonstrated that the problem of designing the optimal (minimum-cost) experiment for causal effect identification is NP-complete and provided a naive algorithm that may require solving exponentially many NP-hard problems as a sub-routine in the worst case. In this work, we provide a few reformulations of the problem that allow for designing significantly more efficient algorithms to solve it as witnessed by our extensive simulations. Additionally, we study the closely-related problem of designing experiments that enable us to identify a given effect through valid adjustments sets. | Fast Proxy Experiment Design for Causal Effect Identification | [
"Sepehr Elahi",
"Sina Akbari",
"Jalal Etesami",
"Negar Kiyavash",
"Patrick Thiran"
] | NeurIPS.cc/2024/Conference | 2407.05330 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ChnJ3W4HFG | @inproceedings{
stromberg2024enhancing,
title={Enhancing Robustness of Last Layer Two-Stage Fair Model Corrections},
author={Nathan Stromberg and Rohan Ayyagari and Sanmi Koyejo and Richard Nock and Lalitha Sankar},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ChnJ3W4HFG}
} | Last-layer retraining methods have emerged as an efficient framework for correcting existing base models. Within this framework, several methods have been proposed to deal with correcting models for subgroup fairness with and without group membership information. Importantly, prior work has demonstrated that many methods are susceptible to noisy labels. To this end, we propose a drop-in correction for label noise in last-layer retraining, and demonstrate that it achieves state-of-the-art worst-group accuracy for a broad range of symmetric label noise and across a wide variety of datasets exhibiting spurious correlations. Our proposed approach uses label spreading on a latent nearest neighbors graph and has minimal computational overhead compared to existing methods. | Enhancing Robustness of Last Layer Two-Stage Fair Model Corrections | [
"Nathan Stromberg",
"Rohan Ayyagari",
"Sanmi Koyejo",
"Richard Nock",
"Lalitha Sankar"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=CgGjT8EG8A | @inproceedings{
liu2024universal,
title={Universal Exact Compression of Differentially Private Mechanisms},
author={Yanxiao Liu and Wei-Ning Chen and Ayfer Ozgur and Cheuk Ting Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CgGjT8EG8A}
} | To reduce the communication cost of differential privacy mechanisms, we introduce a novel construction, called Poisson private representation (PPR), designed to compress and simulate any local randomizer while ensuring local differential privacy. Unlike previous simulation-based local differential privacy mechanisms, PPR exactly preserves the joint distribution of the data and the output of the original local randomizer. Hence, the PPR-compressed privacy mechanism retains all desirable statistical properties of the original privacy mechanism such as unbiasedness and Gaussianity. Moreover, PPR achieves a compression size within a logarithmic gap from the theoretical lower bound. Using the PPR, we give a new order-wise trade-off between communication, accuracy, central and local differential privacy for distributed mean estimation. Experiment results on distributed mean estimation show that PPR consistently gives a better trade-off between communication, accuracy and central differential privacy compared to the coordinate subsampled Gaussian mechanism, while also providing local differential privacy. | Universal Exact Compression of Differentially Private Mechanisms | [
"Yanxiao Liu",
"Wei-Ning Chen",
"Ayfer Ozgur",
"Cheuk Ting Li"
] | NeurIPS.cc/2024/Conference | 2405.20782 | [
"https://github.com/cheuktingli/poissonprivaterepr"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CehOqpvOxG | @inproceedings{
zhou2024fair,
title={Fair Kernel K-Means: from Single Kernel to Multiple Kernel},
author={Peng Zhou and Rongwen Li and Liang Du},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CehOqpvOxG}
} | Kernel k-means has been widely studied in machine learning. However, existing kernel k-means methods often ignore the \textit{fairness} issue, which may cause discrimination. To address this issue, in this paper, we propose a novel Fair Kernel K-Means (FKKM) framework. In this framework, we first propose a new fairness regularization term that can lead to a fair partition of data. The carefully designed fairness regularization term has a similar form to the kernel k-means which can be seamlessly integrated into the kernel k-means framework. Then, we extend this method to the multiple kernel setting, leading to a Fair Multiple Kernel K-Means (FMKKM) method. We also provide some theoretical analysis of the generalization error bound, and based on this bound we give a strategy to set the hyper-parameter, which makes the proposed methods easy to use. At last, we conduct extensive experiments on both the single kernel and multiple kernel settings to compare the proposed methods with state-of-the-art methods to demonstrate their effectiveness. | Fair Kernel K-Means: from Single Kernel to Multiple Kernel | [
"Peng Zhou",
"Rongwen Li",
"Liang Du"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=CeOwahuQic | @inproceedings{
xie2024can,
title={Can Large Language Model Agents Simulate Human Trust Behavior?},
author={Chengxing Xie and Canyu Chen and Feiran Jia and Ziyu Ye and Shiyang Lai and Kai Shu and Jindong Gu and Adel Bibi and Ziniu Hu and David Jurgens and James Evans and Philip Torr and Bernard Ghanem and Guohao Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CeOwahuQic}
} | Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in social science and role-playing applications. However, one fundamental question remains: can LLM agents really simulate human behavior? In this paper, we focus on one critical and elemental behavior in human interactions, trust, and investigate whether LLM agents can simulate human trust behavior. We first find that LLM agents generally exhibit trust behavior, referred to as agent trust, under the framework of Trust Games, which are widely recognized in behavioral economics. Then, we discover that GPT-4 agents manifest high behavioral alignment with humans in terms of trust behavior, indicating the feasibility of simulating human trust behavior with LLM agents. In addition, we probe the biases of agent trust and differences in agent trust towards other LLM agents and humans. We also explore the intrinsic properties of agent trust under conditions including external manipulations and advanced reasoning strategies. Our study provides new insights into the behaviors of LLM agents and the fundamental analogy between LLMs and humans beyond value alignment. We further illustrate broader implications of our discoveries for applications where trust is paramount. | Can Large Language Model Agents Simulate Human Trust Behavior? | [
"Chengxing Xie",
"Canyu Chen",
"Feiran Jia",
"Ziyu Ye",
"Shiyang Lai",
"Kai Shu",
"Jindong Gu",
"Adel Bibi",
"Ziniu Hu",
"David Jurgens",
"James Evans",
"Philip Torr",
"Bernard Ghanem",
"Guohao Li"
] | NeurIPS.cc/2024/Conference | 2402.04559 | [
"https://github.com/camel-ai/agent-trust"
] | https://huggingface.co/papers/2402.04559 | 2 | 0 | 0 | 10 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=CcmHlE6N6u | @inproceedings{
qu2024lushnerf,
title={LuSh-Ne{RF}: Lighting up and Sharpening Ne{RF}s for Low-light Scenes},
author={Zefan Qu and Ke Xu and Gerhard Petrus Hancke and Rynson W. H. Lau},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CcmHlE6N6u}
} | Neural Radiance Fields (NeRFs) have shown remarkable performances in producing novel-view images from high-quality scene images. However, hand-held low-light photography challenges NeRFs as the captured images may simultaneously suffer from low visibility, noise, and camera shakes.
While existing NeRF methods may handle either low light or motion, directly combining them or incorporating additional image-based enhancement methods does not work as these degradation factors are highly coupled.
We observe that noise in low-light images is always sharp regardless of camera shakes, which implies an implicit order of these degradation factors within the image formation process.
This inspires us to explore such an order to decouple and remove these degradation factors while training the NeRF.
To this end, we propose in this paper a novel model, named LuSh-NeRF, which can reconstruct a clean and sharp NeRF from a group of hand-held low-light images.
The key idea of LuSh-NeRF is to sequentially model noise and blur in the images via multi-view feature consistency and frequency information of NeRF, respectively.
Specifically, LuSh-NeRF includes a novel Scene-Noise Decomposition (SND) module for decoupling the noise from the scene representation and a novel Camera Trajectory Prediction (CTP) module for the estimation of camera motions based on low-frequency scene information.
To facilitate training and evaluations, we construct a new dataset containing both synthetic and real images.
Experiments show that LuSh-NeRF outperforms existing approaches. Our code and dataset can be found here: https://github.com/quzefan/LuSh-NeRF. | LuSh-NeRF: Lighting up and Sharpening NeRFs for Low-light Scenes | [
"Zefan Qu",
"Ke Xu",
"Gerhard Petrus Hancke",
"Rynson W. H. Lau"
] | NeurIPS.cc/2024/Conference | 2411.06757 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CcNw4mVIxo | @inproceedings{
cao2024spiking,
title={Spiking Neural Network as Adaptive Event Stream Slicer},
author={Jiahang Cao and Mingyuan Sun and Ziqing Wang and Hao Cheng and Qiang Zhang and shibo zhou and Renjing Xu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CcNw4mVIxo}
} | Event-based cameras are attracting significant interest as they provide rich edge information, high dynamic range, and high temporal resolution. Many state-of-the-art event-based algorithms rely on splitting the events into fixed groups, resulting in the omission of crucial temporal information, particularly when dealing with diverse motion scenarios (e.g., high/low speed). In this work, we propose SpikeSlicer, a novel-designed event processing framework capable of splitting events stream adaptively. SpikeSlicer utilizes a low-energy spiking neural network (SNN) to trigger event slicing. To guide the SNN to fire spikes at optimal time steps, we propose the Spiking Position-aware Loss (SPA-Loss) to modulate the neuron's state. Additionally, we develop a Feedback-Update training strategy that refines the slicing decisions using feedback from the downstream artificial neural network (ANN). Extensive experiments demonstrate that our method yields significant performance improvements in event-based object tracking and recognition. Notably, SpikeSlicer provides a brand-new SNN-ANN cooperation paradigm, where the SNN acts as an efficient, low-energy data processor to assist the ANN in improving downstream performance, injecting new perspectives and potential avenues of exploration. | Spiking Neural Network as Adaptive Event Stream Slicer | [
"Jiahang Cao",
"Mingyuan Sun",
"Ziqing Wang",
"Hao Cheng",
"Qiang Zhang",
"shibo zhou",
"Renjing Xu"
] | NeurIPS.cc/2024/Conference | 2410.02249 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Cc0ckJlJF2 | @inproceedings{
li2024reward,
title={Reward Machines for Deep {RL} in Noisy and Uncertain Environments},
author={Andrew C Li and Zizhao Chen and Toryn Q. Klassen and Pashootan Vaezipoor and Rodrigo Toro Icarte and Sheila A. McIlraith},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Cc0ckJlJF2}
} | Reward Machines provide an automaton-inspired structure for specifying instructions, safety constraints, and other temporally extended reward-worthy behaviour. By exposing the underlying structure of a reward function, they enable the decomposition of an RL task, leading to impressive gains in sample efficiency. Although Reward Machines and similar formal specifications have a rich history of application towards sequential decision-making problems, they critically rely on a ground-truth interpretation of the domain-specific vocabulary that forms the building blocks of the reward function—such ground-truth interpretations are elusive in the real world due in part to partial observability and noisy sensing. In this work, we explore the use of Reward Machines for Deep RL in noisy and uncertain environments. We characterize this problem as a POMDP and propose a suite of RL algorithms that exploit task structure under uncertain interpretation of the domain-specific vocabulary. Through theory and experiments, we expose pitfalls in naive approaches to this problem while simultaneously demonstrating how task structure can be successfully leveraged under noisy interpretations of the vocabulary. | Reward Machines for Deep RL in Noisy and Uncertain Environments | [
"Andrew C Li",
"Zizhao Chen",
"Toryn Q. Klassen",
"Pashootan Vaezipoor",
"Rodrigo Toro Icarte",
"Sheila A. McIlraith"
] | NeurIPS.cc/2024/Conference | 2406.00120 | [
"https://github.com/andrewli77/reward-machines-noisy-environments"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CbtkDWZzDq | @inproceedings{
nam2024ex,
title={Ex Uno Pluria: Insights on Ensembling in Low Precision Number Systems},
author={Giung Nam and Juho Lee},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CbtkDWZzDq}
} | While ensembling deep neural networks has shown promise in improving generalization performance, scaling current ensemble methods for large models remains challenging. Given that recent progress in deep learning is largely driven by the scale, exemplified by the widespread adoption of large-scale neural network architectures, scalability emerges as an increasingly critical issue for machine learning algorithms in the era of large-scale models. In this work, we first showcase the potential of low precision ensembling, where ensemble members are derived from a single model within low precision number systems in a training-free manner. Our empirical analysis demonstrates the effectiveness of our proposed low precision ensembling method compared to existing ensemble approaches. | Ex Uno Pluria: Insights on Ensembling in Low Precision Number Systems | [
"Giung Nam",
"Juho Lee"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=CbHz30KeA4 | @inproceedings{
duan2024taming,
title={Taming ''data-hungry'' reinforcement learning? Stability in continuous state-action spaces},
author={Yaqi Duan and Martin J Wainwright},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CbHz30KeA4}
} | We introduce a novel framework for analyzing reinforcement learning (RL) in continuous state-action spaces, and use it to prove fast rates of convergence in both off-line and on-line settings. Our analysis highlights two key stability properties, relating to how changes in value functions and/or policies affect the Bellman operator and occupation measures. We argue that these properties are satisfied in many continuous state-action Markov decision processes. Our analysis also offers fresh perspectives on the roles of pessimism and optimism in off-line and on-line RL. | Taming "data-hungry" reinforcement learning? Stability in continuous state-action spaces | [
"Yaqi Duan",
"Martin J Wainwright"
] | NeurIPS.cc/2024/Conference | 2401.05233 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Cb3kcwYBgw | @inproceedings{
geisler2024spatiospectral,
title={Spatio-Spectral Graph Neural Networks},
author={Simon Geisler and Arthur Kosmala and Daniel Herbst and Stephan G{\"u}nnemann},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Cb3kcwYBgw}
} | Spatial Message Passing Graph Neural Networks (MPGNNs) are widely used for learning on graph-structured data. However, key limitations of *ℓ*-step MPGNNs are that their "receptive field" is typically limited to the *ℓ*-hop neighborhood of a node and that information exchange between distant nodes is limited by over-squashing. Motivated by these limitations, we propose *Spatio-Spectral Graph Neural Networks (S²GNNs)* – a new modeling paradigm for Graph Neural Networks (GNNs) that synergistically combines spatially and spectrally parametrized graph filters. Parameterizing filters partially in the frequency domain enables global yet efficient information propagation. We show that S²GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs. Further, rethinking graph convolutions at a fundamental level unlocks new design spaces. For example, S²GNNs allow for free positional encodings that make them strictly more expressive than the 1-Weisfeiler-Leman (WL) test. Moreover, to obtain general-purpose S²GNNs, we propose spectrally parametrized filters for directed graphs. S²GNNs outperform spatial MPGNNs, graph transformers, and graph rewirings, e.g., on the peptide long-range benchmark tasks, and are competitive with state-of-the-art sequence modeling. On a 40 GB GPU, S²GNNs scale to millions of nodes. | Spatio-Spectral Graph Neural Networks | [
"Simon Geisler",
"Arthur Kosmala",
"Daniel Herbst",
"Stephan Günnemann"
] | NeurIPS.cc/2024/Conference | 2405.19121 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Cb1Md0RvqF | @inproceedings{
foti2024uvfree,
title={{UV}-free Texture Generation with Denoising and Geodesic Heat Diffusion},
author={Simone Foti and Stefanos Zafeiriou and Tolga Birdal},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Cb1Md0RvqF}
} | Seams, distortions, wasted UV space, vertex-duplication, and varying resolution over the surface are the most prominent issues of the standard UV-based texturing of meshes. These issues are particularly acute when automatic UV-unwrapping techniques are used. For this reason, instead of generating textures in automatically generated UV-planes like most state-of-the-art methods, we propose to represent textures as coloured point-clouds whose colours are generated by a denoising diffusion probabilistic model constrained to operate on the surface of 3D objects. Our sampling and resolution agnostic generative model heavily relies on heat diffusion over the surface of the meshes for spatial communication between points. To enable processing of arbitrarily sampled point-cloud textures and ensure long-distance texture consistency we introduce a fast re-sampling of the mesh spectral properties used during the heat diffusion and introduce a novel heat-diffusion-based self-attention mechanism. Our code and pre-trained models are available at github.com/simofoti/UV3-TeD. | UV-free Texture Generation with Denoising and Geodesic Heat Diffusion | [
"Simone Foti",
"Stefanos Zafeiriou",
"Tolga Birdal"
] | NeurIPS.cc/2024/Conference | [
"https://github.com/simofoti/uv3-ted"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=CZwphz5vgz | @inproceedings{
sun2024occfusion,
title={OccFusion: Rendering Occluded Humans with Generative Diffusion Priors},
author={Adam Sun and Tiange Xiang and Scott Delp and Li Fei-Fei and Ehsan Adeli},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CZwphz5vgz}
} | Existing human rendering methods require every part of the human to be fully visible throughout the input video. However, this assumption does not hold in real-life settings where obstructions are common, resulting in only partial visibility of the human. Considering this, we present OccFusion, an approach that utilizes efficient 3D Gaussian splatting supervised by pretrained 2D diffusion models for efficient and high-fidelity human rendering. We propose a pipeline consisting of three stages. In the Initialization stage, complete human masks are generated from partial visibility masks. In the Optimization stage, 3D human Gaussians are optimized with additional supervisions by Score-Distillation Sampling (SDS) to create a complete geometry of the human. Finally, in the Refinement stage, in-context inpainting is designed to further improve rendering quality on the less observed human body parts. We evaluate OccFusion on ZJU-MoCap and challenging OcMotion sequences and found that it achieves state-of-the-art performance in the rendering of occluded humans. | OccFusion: Rendering Occluded Humans with Generative Diffusion Priors | [
"Adam Sun",
"Tiange Xiang",
"Scott Delp",
"Li Fei-Fei",
"Ehsan Adeli"
] | NeurIPS.cc/2024/Conference | 2407.00316 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CWhwKb0Q4k | @inproceedings{
schleich2024quantum,
title={Quantum Deep Equilibrium Models},
author={Philipp Schleich and Marta Skreta and Lasse Bj{\o}rn Kristensen and Rodrigo Vargas-Hernandez and Alan Aspuru-Guzik},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CWhwKb0Q4k}
} | The feasibility of variational quantum algorithms, the most popular correspondent of neural networks on noisy, near-term quantum hardware, is highly impacted by the circuit depth of the involved parametrized quantum circuits (PQCs). Higher depth increases expressivity, but also results in a detrimental accumulation of errors. Furthermore, the number of parameters involved in the PQC significantly influences the performance through the necessary number of measurements to evaluate gradients, which scales linearly with the number of parameters.
Motivated by this, we look at deep equilibrium models (DEQs), which mimic an infinite-depth, weight-tied network using a fraction of the memory by employing a root solver to find the fixed points of the network. In this work, we present Quantum Deep Equilibrium Models (QDEQs): a training paradigm that learns parameters of a quantum machine learning model given by a PQC using DEQs. To our knowledge, no work has yet explored the application of DEQs to QML models. We apply QDEQs to find the parameters of a quantum circuit in two settings: the first involves classifying MNIST-4 digits with 4 qubits; the second extends it to 10 classes of MNIST, FashionMNIST and CIFAR. We find that QDEQ is not only competitive with comparable existing baseline models, but also achieves higher performance than a network with 5 times more layers. This demonstrates that the QDEQ paradigm can be used to develop significantly more shallow quantum circuits for a given task, something which is essential for the utility of near-term quantum computers.
Our code is available at \url{https://github.com/martaskrt/qdeq}. | Quantum Deep Equilibrium Models | [
"Philipp Schleich",
"Marta Skreta",
"Lasse Bjørn Kristensen",
"Rodrigo Vargas-Hernandez",
"Alan Aspuru-Guzik"
] | NeurIPS.cc/2024/Conference | 2410.23940 | [
"https://github.com/martaskrt/qdeq"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CW0OVWEKKu | @inproceedings{
li2024exploring,
title={Exploring and Exploiting the Asymmetric Valley of Deep Neural Networks},
author={Xin-Chun Li and Jin-Lin Tang and Bo Zhang and Lan Li and De-Chuan Zhan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CW0OVWEKKu}
} | Exploring the loss landscape offers insights into the inherent principles of deep neural networks (DNNs). Recent work suggests an additional asymmetry of the valley beyond the flat and sharp ones, yet without thoroughly examining its causes or implications. Our study methodically explores the factors affecting the symmetry of DNN valleys, encompassing (1) the dataset, network architecture, initialization, and hyperparameters that influence the convergence point; and (2) the magnitude and direction of the noise for 1D visualization. Our major observation shows that the {\it degree of sign consistency} between the noise and the convergence point is a critical indicator of valley symmetry. Theoretical insights from the aspects of ReLU activation and softmax function could explain the interesting phenomenon. Our discovery propels novel understanding and applications in the scenario of Model Fusion: (1) the efficacy of interpolating separate models significantly correlates with their sign consistency ratio, and (2) imposing sign alignment during federated learning emerges as an innovative approach for model parameter alignment. | Exploring and Exploiting the Asymmetric Valley of Deep Neural Networks | [
"Xin-Chun Li",
"Jin-Lin Tang",
"Bo Zhang",
"Lan Li",
"De-Chuan Zhan"
] | NeurIPS.cc/2024/Conference | 2405.12489 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CVpuVe1N22 | @inproceedings{
hu2024uncertainty,
title={Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in {LLM}s},
author={Zhiyuan Hu and Chumin Liu and Xidong Feng and Yilun Zhao and See-Kiong Ng and Anh Tuan Luu and Junxian He and Pang Wei Koh and Bryan Hooi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CVpuVe1N22}
} | In the face of uncertainty, the ability to *seek information* is of fundamental importance. In many practical applications, such as medical diagnosis and troubleshooting, the information needed to solve the task is not initially given, and has to be actively sought by asking follow-up questions (for example, a doctor asking a patient for more details about their symptoms). In this work, we introduce **Uncertainty of Thoughts (UoT)**, an algorithm to augment large language models with the ability to actively seek information by asking effective questions. UoT combines:
1. An *uncertainty-aware simulation approach* which enables the model to simulate possible future scenarios and how likely they are to occur,
2. *Uncertainty-based rewards* motivated by information gain which incentivizes the model to seek information, and
3. A *reward propagation scheme* to select the optimal question to ask in a way that maximizes the expected reward.
In experiments on medical diagnosis, troubleshooting and the `20 Questions' game, UoT achieves an average performance improvement of 38.1% in the rate of successful task completion across multiple LLMs compared with direct prompting, and also improves efficiency (i.e., the number of questions needed to complete the task). | Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in LLMs | [
"Zhiyuan Hu",
"Chumin Liu",
"Xidong Feng",
"Yilun Zhao",
"See-Kiong Ng",
"Anh Tuan Luu",
"Junxian He",
"Pang Wei Koh",
"Bryan Hooi"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=CTvxvAcSJN | @inproceedings{
yang2024scenecraft,
title={SceneCraft: Layout-Guided 3D Scene Generation},
author={Xiuyu Yang and Yunze Man and Jun-Kun Chen and Yu-Xiong Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CTvxvAcSJN}
} | The creation of complex 3D scenes tailored to user specifications has been a tedious and challenging task with traditional 3D modeling tools. Although some pioneering methods have achieved automatic text-to-3D generation, they are generally limited to small-scale scenes with restricted control over the shape and texture. We introduce SceneCraft, a novel method for generating detailed indoor scenes that adhere to textual descriptions and spatial layout preferences provided by users. Central to our method is a rendering-based technique, which converts 3D semantic layouts into multi-view 2D proxy maps. Furthermore, we design a semantic and depth conditioned diffusion model to generate multi-view images, which are used to learn a neural radiance field (NeRF) as the final scene representation. Without the constraints of panorama image generation, we surpass previous methods in supporting complicated indoor space generation beyond a single room, even as complicated as a whole multi-bedroom apartment with irregular shapes and layouts. Through experimental analysis, we demonstrate that our method significantly outperforms existing approaches in complex indoor scene generation with diverse textures, consistent geometry, and realistic visual quality. | SceneCraft: Layout-Guided 3D Scene Generation | [
"Xiuyu Yang",
"Yunze Man",
"Jun-Kun Chen",
"Yu-Xiong Wang"
] | NeurIPS.cc/2024/Conference | 2410.09049 | [
"https://github.com/orangesodahub/scenecraft"
] | https://huggingface.co/papers/2410.09049 | 2 | 2 | 1 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=CTIFk7b9jU | @inproceedings{
yang2024bidirectional,
title={Bidirectional Recurrence for Cardiac Motion Tracking with Gaussian Process Latent Coding},
author={Jiewen Yang and Yiqun Lin and Bin Pu and Xiaomeng Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CTIFk7b9jU}
} | Quantitative analysis of cardiac motion is crucial for assessing cardiac function. This analysis typically uses imaging modalities such as MRI and Echocardiograms that capture detailed image sequences throughout the heartbeat cycle. Previous methods predominantly focused on the analysis of image pairs lacking consideration of the motion dynamics and spatial variability. Consequently, these methods often overlook the long-term relationships and regional motion characteristic of cardiac. To overcome these limitations, we introduce the GPTrack, a novel unsupervised framework crafted to fully explore the temporal and spatial dynamics of cardiac motion. The GPTrack enhances motion tracking by employing the sequential Gaussian Process in the latent space and encoding statistics by spatial information at each time stamp, which robustly promotes temporal consistency and spatial variability of cardiac dynamics. Also, we innovatively aggregate sequential information in a bidirectional recursive manner, mimicking the behavior of diffeomorphic registration to better capture consistent long-term relationships of motions across cardiac regions such as the ventricles and atria. Our GPTrack significantly improves the precision of motion tracking in both 3D and 4D medical images while maintaining computational efficiency. The code is available at: https://github.com/xmed-lab/GPTrack. | Bidirectional Recurrence for Cardiac Motion Tracking with Gaussian Process Latent Coding | [
"Jiewen Yang",
"Yiqun Lin",
"Bin Pu",
"Xiaomeng Li"
] | NeurIPS.cc/2024/Conference | 2410.20752 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CSjVSnvTbG | @inproceedings{
oliveira2024bisimulation,
title={Bisimulation Metrics are Optimal Transport Distances, and Can be Computed Efficiently},
author={Sergio Calo Oliveira and Anders Jonsson and Gergely Neu and Ludovic Schwartz and Javier Segovia-Aguas},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CSjVSnvTbG}
} | We propose a new framework for formulating optimal transport distances between Markov chains. Previously known formulations studied couplings between the entire joint distribution induced by the chains, and derived solutions via a reduction to dynamic programming (DP) in an appropriately defined Markov decision process. This formulation has, however, not led to particularly efficient algorithms so far, since computing the associated DP operators requires fully solving a static optimal transport problem, and these operators need to be applied numerous times during the overall optimization process. In this work, we develop an alternative perspective by considering couplings between a ``flattened'' version of the joint distributions that we call discounted occupancy couplings, and show that calculating optimal transport distances in the full space of joint distributions can be equivalently formulated as solving a linear program (LP) in this reduced space. This LP formulation allows us to port several algorithmic ideas from other areas of optimal transport theory. In particular, our formulation makes it possible to introduce an appropriate notion of entropy regularization into the optimization problem, which in turn enables us to directly calculate optimal transport distances via a Sinkhorn-like method we call Sinkhorn Value Iteration (SVI). We show both theoretically and empirically that this method converges quickly to an optimal coupling, essentially at the same computational cost of running vanilla Sinkhorn in each pair of states. Along the way, we point out that our optimal transport distance exactly matches the common notion of bisimulation metrics between Markov chains, and thus our results also apply to computing such metrics, and in fact our algorithm turns out to be significantly more efficient than the best known methods developed so far for this purpose. | Bisimulation Metrics are Optimal Transport Distances, and Can be Computed Efficiently | [
"Sergio Calo Oliveira",
"Anders Jonsson",
"Gergely Neu",
"Ludovic Schwartz",
"Javier Segovia-Aguas"
] | NeurIPS.cc/2024/Conference | 2406.04056 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CMgxAaRqZh | @inproceedings{
zhao2024accelerating,
title={Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling},
author={Yiran Zhao and Wenyue Zheng and Tianle Cai and Do Xuan Long and Kenji Kawaguchi and Anirudh Goyal and Michael Shieh},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CMgxAaRqZh}
} | Safety of Large Language Models (LLMs) has become a central issue given their rapid progress and wide applications. Greedy Coordinate Gradient (GCG) is shown to be effective in constructing prompts containing adversarial suffixes to break the presumably safe LLMs, but the optimization of GCG is time-consuming and limits its practicality. To reduce the time cost of GCG and enable more comprehensive studies of LLM safety, in this work, we study a new algorithm called $\texttt{Probe sampling}$ to accelerate the GCG algorithm. At the core of the algorithm is a mechanism that dynamically determines how similar a smaller draft model's predictions are to the target model's predictions for prompt candidates. When the target model is similar to the draft model, we rely heavily on the draft model to filter out a large number of potential prompt candidates to reduce the computation time. Probe sampling achieves up to $5.6$ times speedup using Llama2-7b-chat and leads to equal or improved attack success rate (ASR) on the AdvBench. Furthermore, probe sampling is also able to accelerate other prompt optimization techniques and adversarial attack methods, leading to acceleration of $1.8\times$ for AutoPrompt, $2.4\times$ for APE and $2.4\times$ for AutoDAN. | Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling | [
"Yiran Zhao",
"Wenyue Zheng",
"Tianle Cai",
"Do Xuan Long",
"Kenji Kawaguchi",
"Anirudh Goyal",
"Michael Shieh"
] | NeurIPS.cc/2024/Conference | 2403.01251 | [
"https://github.com/zhaoyiran924/probe-sampling"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CMc0jMY0Wr | @inproceedings{
vuursteen2024optimal,
title={Optimal Private and Communication Constraint Distributed Goodness-of-Fit Testing for Discrete Distributions in the Large Sample Regime},
author={Lasse Vuursteen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CMc0jMY0Wr}
} | We study distributed goodness-of-fit testing for discrete distribution under bandwidth and differential privacy constraints. Information constraint distributed goodness-of-fit testing is a problem that has received considerable attention recently. The important case of discrete distributions is theoretically well understood in the classical case where all data is available in one "central" location. In a federated setting, however, data is distributed across multiple "locations" (e.g. servers) and cannot readily be shared due to e.g. bandwidth or privacy constraints that each server needs to satisfy. We show how recently derived results for goodness-of-fit testing for the mean of a multivariate Gaussian model extend to the discrete distributions, by leveraging Le Cam's theory of statistical equivalence. In doing so, we derive matching minimax upper- and lower-bounds for the goodness-of-fit testing for discrete distributions under bandwidth or privacy constraints in the regime where number of samples held locally are large. | Optimal Private and Communication Constraint Distributed Goodness-of-Fit Testing for Discrete Distributions in the Large Sample Regime | [
"Lasse Vuursteen"
] | NeurIPS.cc/2024/Conference | 2411.01275 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CLxcLPfARc | @inproceedings{
schwinn2024soft,
title={Soft Prompt Threats: Attacking Safety Alignment and Unlearning in Open-Source {LLM}s through the Embedding Space},
author={Leo Schwinn and David Dobre and Sophie Xhonneux and Gauthier Gidel and Stephan G{\"u}nnemann},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CLxcLPfARc}
} | Current research in adversarial robustness of LLMs focuses on \textit{discrete} input manipulations in the natural language space, which can be directly transferred to \textit{closed-source} models. However, this approach neglects the steady progression of \textit{open-source} models. As open-source models advance in capability, ensuring their safety becomes increasingly imperative. Yet, attacks tailored to open-source LLMs that exploit full model access remain largely unexplored. We address this research gap and propose the \textit{embedding space attack}, which directly attacks the \textit{continuous} embedding representation of input tokens.
We find that embedding space attacks circumvent model alignments and trigger harmful behaviors more efficiently than discrete attacks or model fine-tuning. Additionally, we demonstrate that models compromised by embedding attacks can be used to create discrete jailbreaks in natural language. Lastly, we present a novel threat model in the context of unlearning and show that embedding space attacks can extract supposedly deleted information from unlearned LLMs across multiple datasets and models. Our findings highlight embedding space attacks as an important threat model in open-source LLMs. | Soft Prompt Threats: Attacking Safety Alignment and Unlearning in Open-Source LLMs through the Embedding Space | [
"Leo Schwinn",
"David Dobre",
"Sophie Xhonneux",
"Gauthier Gidel",
"Stephan Günnemann"
] | NeurIPS.cc/2024/Conference | 2402.09063 | [
"https://github.com/schwinnl/llm_embedding_attack"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CL9k2PaUQb | @inproceedings{
hosseini2024the,
title={The Surprising Effectiveness of {SP} Voting with Partial Preferences},
author={Hadi Hosseini and Debmalya Mandal and Amrit Puhan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CL9k2PaUQb}
} | We consider the problem of recovering the ground truth ordering (ranking, top-$k$, or others) over a large number of alternatives.
The wisdom of crowd is a heuristic approach based on Condorcet's Jury theorem to address this problem through collective opinions.
This approach fails to recover the ground truth when the majority of the crowd is misinformed. The \emph{surprisingly popular} (SP) algorithm~\citep{prelec2017solution} is an alternative approach that is able to recover the ground truth even when experts are in minority. The SP algorithm requires the voters to predict other voters' report in the form of a full probability distribution over all rankings of alternatives. However, when the number of alternatives, $m$, is large, eliciting the prediction report or even the vote over $m$ alternatives might be too costly.
In this paper, we design a scalable alternative of the SP algorithm which only requires eliciting partial preferences from the voters, and propose new variants of the SP algorithm. In particular, we propose two versions---\emph{Aggregated-SP} and \emph{Partial-SP}---that ask voters to report vote and prediction on a subset of size $k$ ($\ll m$) in terms of top alternative, partial rank, or an approval set. Through a large-scale crowdsourcing experiment on MTurk, we show that both of our approaches outperform conventional preference aggregation algorithms for the recovery of ground truth rankings, when measured in terms of Kendall-Tau distance and Spearman's $\rho$. We further analyze the collected data and demonstrate that voters' behavior in the experiment, including the minority of the experts, and the SP phenomenon, can be correctly simulated by a concentric mixtures of Mallows model. Finally, we provide theoretical bounds on the sample complexity of SP algorithms with partial rankings to demonstrate the theoretical guarantees of the proposed methods. | The Surprising Effectiveness of SP Voting with Partial Preferences | [
"Hadi Hosseini",
"Debmalya Mandal",
"Amrit Puhan"
] | NeurIPS.cc/2024/Conference | 2406.00870 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CKgNgKmHYp | @inproceedings{
zhuang2024hydra,
title={{HYDRA}: Model Factorization Framework for Black-Box {LLM} Personalization},
author={Yuchen Zhuang and Haotian Sun and Yue Yu and Rushi Qiang and Qifan Wang and Chao Zhang and Bo Dai},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CKgNgKmHYp}
} | Personalization has emerged as a critical research area in modern intelligent systems, focusing on mining users' behavioral history and adapting to their preferences for delivering tailored experiences. Despite the remarkable few-shot capabilities exhibited by black-box large language models (LLMs), the inherent opacity of their model parameters presents significant challenges in aligning the generated output with individual expectations. Existing solutions have primarily focused on prompt design to incorporate user-specific profiles and behaviors; however, such approaches often struggle to generalize effectively due to their inability to capture shared knowledge among all users. To address these challenges, we propose HYDRA, a model factorization framework that captures both user-specific behavior patterns from historical data and shared general knowledge among all users to deliver personalized generation. In order to capture user-specific behavior patterns, we first train a reranker to prioritize the most useful information from top-retrieved relevant historical records.
By combining the prioritized history with the corresponding query, we train an adapter to align the output with individual user-specific preferences, eliminating the reliance on access to inherent model parameters of black-box LLMs. Both the reranker and the adapter can be decomposed into a base model with multiple user-specific heads, resembling a hydra. The base model maintains shared knowledge across users, while the multiple personal heads capture user-specific preferences. Experimental results demonstrate that HYDRA outperforms existing state-of-the-art prompt-based methods by an average relative improvement of 9.01% across five diverse personalization tasks in the LaMP benchmark. | HYDRA: Model Factorization Framework for Black-Box LLM Personalization | [
"Yuchen Zhuang",
"Haotian Sun",
"Yue Yu",
"Rushi Qiang",
"Qifan Wang",
"Chao Zhang",
"Bo Dai"
] | NeurIPS.cc/2024/Conference | 2406.02888 | [
"https://github.com/night-chen/hydra"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CIcMZGLyZW | @inproceedings{
li2024neurosymbolic,
title={Neuro-Symbolic Data Generation for Math Reasoning},
author={Zenan Li and Zhi Zhou and Yuan Yao and Xian Zhang and Yu-Feng Li and Chun Cao and Fan Yang and Xiaoxing Ma},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CIcMZGLyZW}
} | A critical question about Large Language Models (LLMs) is whether their apparent deficiency in mathematical reasoning is inherent, or merely a result of insufficient exposure to high-quality mathematical data. To explore this, we developed an automated method for generating high-quality, supervised mathematical datasets. The method carefully mutates existing math problems, ensuring both diversity and validity of the newly generated problems. This is achieved by a neuro-symbolic data generation framework combining the intuitive informalization strengths of LLMs, and the precise symbolic reasoning of math solvers along with projected Markov chain Monte Carlo sampling in the highly-irregular symbolic space.
Empirical experiments demonstrate the high quality of data generated by the proposed method, and that the LLMs, specifically LLaMA-2 and Mistral, when realigned with the generated data, surpass their state-of-the-art counterparts. | Neuro-Symbolic Data Generation for Math Reasoning | [
"Zenan Li",
"Zhi Zhou",
"Yuan Yao",
"Xian Zhang",
"Yu-Feng Li",
"Chun Cao",
"Fan Yang",
"Xiaoxing Ma"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=CIRPE1bSmV | @inproceedings{
xing2024mitigating,
title={Mitigating Object Hallucination via Concentric Causal Attention},
author={Yun Xing and Yiheng Li and Ivan Laptev and Shijian Lu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CIRPE1bSmV}
} | Recent Large Vision Language Models (LVLMs) present remarkable zero-shot conversational and reasoning capabilities given multimodal queries. Nevertheless, they suffer from object hallucination, a phenomenon where LVLMs are prone to generate textual responses not factually aligned with image inputs. Our pilot study reveals that object hallucination is closely tied with Rotary Position Encoding (RoPE), a widely adopted positional dependency modeling design in existing LVLMs. Due to the long-term decay in RoPE, LVLMs tend to hallucinate more when relevant visual cues are distant from instruction tokens in the multimodal input sequence. Additionally, we observe a similar effect when reversing the sequential order of visual tokens during multimodal alignment. Our tests indicate that long-term decay in RoPE poses challenges to LVLMs while capturing visual-instruction interactions across long distances. We propose Concentric Causal Attention (CCA), a simple yet effective positional alignment strategy that mitigates the impact of RoPE long-term decay in LVLMs by naturally reducing relative distance between visual and instruction tokens. With CCA, visual tokens can better interact with instruction tokens, thereby enhancing the model's perception capability and alleviating object hallucination. Without bells and whistles, our positional alignment method surpasses existing hallucination mitigation strategies by large margins on multiple object hallucination benchmarks. | Mitigating Object Hallucination via Concentric Causal Attention | [
"Yun Xing",
"Yiheng Li",
"Ivan Laptev",
"Shijian Lu"
] | NeurIPS.cc/2024/Conference | 2410.15926 | [
"https://github.com/xing0047/cca-llava"
] | https://huggingface.co/papers/2410.15926 | 1 | 14 | 2 | 4 | [
"xing0047/cca-llava-1.5-7b"
] | [
"jiahaonie/MMRel"
] | [] | [
"xing0047/cca-llava-1.5-7b"
] | [
"jiahaonie/MMRel"
] | [] | 1 | poster |
null | https://openreview.net/forum?id=CIHdlhfrOo | @inproceedings{
zhang2024selfsupervised,
title={Self-Supervised Adversarial Training via Diverse Augmented Queries and Self-Supervised Double Perturbation},
author={Ruize Zhang and Sheng Tang and Juan Cao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CIHdlhfrOo}
} | Recently, there have been some works studying self-supervised adversarial training, a learning paradigm that learns robust features without labels. While those works have narrowed the performance gap between self-supervised adversarial training (SAT) and supervised adversarial training (supervised AT), a well-established formulation of SAT and its connections with supervised AT are under-explored. Based on a simple SAT benchmark, we find that SAT still faces the problem of large robust generalization gap and degradation on natural samples. We hypothesize this is due to the lack of data complexity and model regularization and propose a method named as DAQ-SDP (Diverse Augmented Queries Self-supervised Double Perturbation). We first challenge the previous conclusion that complex data augmentations degrade robustness in SAT by using diversely augmented samples as queries to guide adversarial training. Inspired by previous works in supervised AT, we then incorporate a self-supervised double perturbation scheme to self-supervised learning (SSL), which promotes robustness transferable to downstream classification. Our work can be seamlessly combined with models pretrained by different SSL frameworks without revising the learning objectives and helps to bridge the gap between SAT and AT. Our method also improves both robust and natural accuracies across different SSL frameworks. Our code is available at https://github.com/rzzhang222/DAQ-SDP. | Self-Supervised Adversarial Training via Diverse Augmented Queries and Self-Supervised Double Perturbation | [
"Ruize Zhang",
"Sheng Tang",
"Juan Cao"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=CFez7MFUFd | @inproceedings{
zhang2024crossscale,
title={Cross-Scale Self-Supervised Blind Image Deblurring via Implicit Neural Representation},
author={Tianjing Zhang and Yuhui Quan and Hui Ji},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CFez7MFUFd}
} | Blind image deblurring (BID) is an important yet challenging image recovery problem. Most existing deep learning methods require supervised training with ground truth (GT) images. This paper introduces a self-supervised method for BID that does not require GT images. The key challenge is to regularize the training to prevent over-fitting due to the absence of GT images. By leveraging an exact relationship among the blurred image, latent image, and blur kernel across consecutive scales, we propose an effective cross-scale consistency loss. This is implemented by representing the image and kernel with implicit neural representations (INRs), whose resolution-free property enables consistent yet efficient computation for network training across multiple scales. Combined with a progressively coarse-to-fine training scheme, the proposed method significantly outperforms existing self-supervised methods in extensive experiments. | Cross-Scale Self-Supervised Blind Image Deblurring via Implicit Neural Representation | [
"Tianjing Zhang",
"Yuhui Quan",
"Hui Ji"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=CEnoUjEqNx | @inproceedings{
leme2024convergence,
title={Convergence of No-Swap-Regret Dynamics in Self-Play},
author={Renato Paes Leme and Georgios Piliouras and Jon Schneider},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CEnoUjEqNx}
} | In this paper, we investigate the question of whether no-swap-regret dynamics have stronger convergence properties in repeated games than regular no-external-regret dynamics. We prove that in almost all symmetric zero-sum games under symmetric initializations of the agents, no-swap-regret dynamics in self-play are guaranteed to converge in a strong ``frequent-iterate'' sense to the Nash equilibrium: in all but a vanishing fraction of the rounds, the players must play a strategy profile close to a symmetric Nash equilibrium. Remarkably, relaxing any of these three constraints, i.e., by allowing either i) asymmetric initial conditions, or ii) an asymmetric game, or iii) no-external-regret dynamics, suffices to destroy this result and lead to complex non-equilibrating or even chaotic behavior.
In a dual type of result, we show that the power of no-swap-regret dynamics comes at a cost of imposing a time-asymmetry on its inputs. While no-external-regret dynamics can be completely determined by the cumulative reward vector received by each player, we show there does not exist any general no-swap-regret dynamics defined on the same state space. In fact, we prove that any no-swap-regret learning algorithm must play a time-asymmetric function over the set of previously observed rewards, ruling out any dynamics based on a symmetric function of the current set of rewards. | Convergence of No-Swap-Regret Dynamics in Self-Play | [
"Renato Paes Leme",
"Georgios Piliouras",
"Jon Schneider"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=CEJ1mYPgWw | @inproceedings{
wu2024minds,
title={Mind's Eye of {LLM}s: Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models},
author={Wenshan Wu and Shaoguang Mao and Yadong Zhang and Yan Xia and Li Dong and Lei Cui and Furu Wei},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CEJ1mYPgWw}
} | Large language models (LLMs) have exhibited impressive performance in language comprehension and various reasoning tasks. However, their abilities in spatial reasoning, a crucial aspect of human cognition, remain relatively unexplored. Humans possess a remarkable ability to create mental images of unseen objects and actions through a process known as the Mind's Eye, enabling the imagination of the unseen world. Inspired by this cognitive capacity, we propose Visualization-of-Thought (VoT) prompting. VoT aims to elicit spatial reasoning of LLMs by visualizing their reasoning traces, thereby guiding subsequent reasoning steps. We employed VoT for multi-hop spatial reasoning tasks, including natural language navigation, visual navigation, and visual tiling in 2D grid worlds. Experimental results demonstrated that VoT significantly enhances the spatial reasoning abilities of LLMs. Notably, VoT outperformed existing multimodal large language models (MLLMs) in these tasks. While VoT works surprisingly well on LLMs, the ability to generate mental images to facilitate spatial reasoning resembles the mind's eye process, suggesting its potential viability in MLLMs. Please find the dataset and code on our [project page](https://microsoft.github.io/visualization-of-thought). | Mind's Eye of LLMs: Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models | [
"Wenshan Wu",
"Shaoguang Mao",
"Yadong Zhang",
"Yan Xia",
"Li Dong",
"Lei Cui",
"Furu Wei"
] | NeurIPS.cc/2024/Conference | 2404.03622 | [
""
] | https://huggingface.co/papers/2404.03622 | 1 | 4 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=CDe2zBPioj | @inproceedings{
zhang2024dropedge,
title={DropEdge not Foolproof: Effective Augmentation Method for Signed Graph Neural Networks},
author={Zeyu Zhang and Lu Li and Shuyan Wan and Sijie Wang and Zhiyi Wang and Zhiyuan Lu and Dong Hao and Wanli.Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CDe2zBPioj}
} | Signed graphs can model friendly or antagonistic relations where edges are annotated with a positive or negative sign. The main downstream task in signed graph analysis is $\textit{link sign prediction}$. Signed Graph Neural Networks (SGNNs) have been widely used for signed graph representation learning. While significant progress has been made in SGNN research, two issues (i.e., graph sparsity and unbalanced triangles) persist in the current SGNN models. We aim to alleviate these issues through data augmentation ($\textit{DA}$) techniques which have demonstrated effectiveness in improving the performance of graph neural networks. However, most graph augmentation methods are primarily aimed at graph-level and node-level tasks (e.g., graph classification and node classification) and cannot be directly applied to signed graphs due to the lack of side information (e.g., node features and label information) in available real-world signed graph datasets. Random $\textit{DropEdge}$ is one of the few $\textit{DA}$ methods that can be directly used for signed graph data augmentation, but its effectiveness is still unknown. In this paper, we first provide the generalization bound for the SGNN model and demonstrate from both experimental and theoretical perspectives that the random $\textit{DropEdge}$ cannot improve the performance of link sign prediction. Therefore, we propose a novel signed graph augmentation method, $\underline{S}$igned $\underline{G}$raph $\underline{A}$ugmentation framework (SGA). Specifically, SGA first integrates a structure augmentation module to detect candidate edges solely based on network information. Furthermore, SGA incorporates a novel strategy to select beneficial candidates. Finally, SGA introduces a novel data augmentation perspective to enhance the training process of SGNNs. Experimental results on six real-world datasets demonstrate that SGA effectively boosts the performance of diverse SGNN models, achieving improvements of up to 32.3\% in F1-micro for SGCN on the Slashdot dataset in the link sign prediction task. | DropEdge not Foolproof: Effective Augmentation Method for Signed Graph Neural Networks | [
"Zeyu Zhang",
"Lu Li",
"Shuyan Wan",
"Sijie Wang",
"Zhiyi Wang",
"Zhiyuan Lu",
"Dong Hao",
"Wanli.Li"
] | NeurIPS.cc/2024/Conference | 2409.19620 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CAdBTYBlOv | @inproceedings{
lin2024improving,
title={Improving Linear System Solvers for Hyperparameter Optimisation in Iterative Gaussian Processes},
author={Jihao Andreas Lin and Shreyas Padhy and Bruno Kacper Mlodozeniec and Javier Antoran and Jos{\'e} Miguel Hern{\'a}ndez-Lobato},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CAdBTYBlOv}
} | Scaling hyperparameter optimisation to very large datasets remains an open problem in the Gaussian process community. This paper focuses on iterative methods, which use linear system solvers, like conjugate gradients, alternating projections or stochastic gradient descent, to construct an estimate of the marginal likelihood gradient. We discuss three key improvements which are applicable across solvers: (i) a pathwise gradient estimator, which reduces the required number of solver iterations and amortises the computational cost of making predictions, (ii) warm starting linear system solvers with the solution from the previous step, which leads to faster solver convergence at the cost of negligible bias, (iii) early stopping linear system solvers after a limited computational budget, which synergises with warm starting, allowing solver progress to accumulate over multiple marginal likelihood steps. These techniques provide speed-ups of up to $72\times$ when solving to tolerance, and decrease the average residual norm by up to $7\times$ when stopping early. | Improving Linear System Solvers for Hyperparameter Optimisation in Iterative Gaussian Processes | [
"Jihao Andreas Lin",
"Shreyas Padhy",
"Bruno Kacper Mlodozeniec",
"Javier Antoran",
"José Miguel Hernández-Lobato"
] | NeurIPS.cc/2024/Conference | 2405.18457 | [
"https://github.com/jandylin/iterative-gaussian-processes"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CAC74VuMWX | @inproceedings{
hu2024an,
title={An In-depth Investigation of Sparse Rate Reduction in Transformer-like Models},
author={Yunzhe Hu and Difan Zou and Dong Xu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=CAC74VuMWX}
} | Deep neural networks have long been criticized for being black-box. To unveil the inner workings of modern neural architectures, a recent work proposed an information-theoretic objective function called Sparse Rate Reduction (SRR) and interpreted its unrolled optimization as a Transformer-like model called Coding Rate Reduction Transformer (CRATE). However, the focus of the study was primarily on the basic implementation, and whether this objective is optimized in practice and its causal relationship to generalization remain elusive. Going beyond this study, we derive different implementations by analyzing layer-wise behaviors of CRATE, both theoretically and empirically. To reveal the predictive power of SRR on generalization, we collect a set of model variants induced by varied implementations and hyperparameters and evaluate SRR as a complexity measure based on its correlation with generalization. Surprisingly, we find out that SRR has a positive correlation coefficient and outperforms other baseline measures, such as path-norm and sharpness-based ones. Furthermore, we show that generalization can be improved using SRR as regularization on benchmark image classification datasets. We hope this paper can shed light on leveraging SRR to design principled models and study their generalization ability. | An In-depth Investigation of Sparse Rate Reduction in Transformer-like Models | [
"Yunzhe Hu",
"Difan Zou",
"Dong Xu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=C62d2nS3KO | @inproceedings{
salimans2024multistep,
title={Multistep Distillation of Diffusion Models via Moment Matching},
author={Tim Salimans and Thomas Mensink and Jonathan Heek and Emiel Hoogeboom},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=C62d2nS3KO}
} | We present a new method for making diffusion models faster to sample. The method distills many-step diffusion models into few-step models by matching conditional expectations of the clean data given noisy data along the sampling trajectory. Our approach extends recently proposed one-step methods to the multi-step case, and provides a new perspective by interpreting these approaches in terms of moment matching. By using up to 8 sampling steps, we obtain distilled models that outperform not only their one-step versions but also their original many-step teacher models, obtaining new state-of-the-art results on the Imagenet dataset. We also show promising results on a large text-to-image model where we achieve fast generation of high resolution images directly in image space, without needing autoencoders or upsamplers. | Multistep Distillation of Diffusion Models via Moment Matching | [
"Tim Salimans",
"Thomas Mensink",
"Jonathan Heek",
"Emiel Hoogeboom"
] | NeurIPS.cc/2024/Conference | 2406.04103 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=C4zmR2kyP8 | @inproceedings{
gao2024stabilizing,
title={Stabilizing Zero-Shot Prediction: A Novel Antidote to Forgetting in Continual Vision-Language Tasks},
author={Zijian Gao and Xingxing Zhang and Kele Xu and Xinjun Mao and Huaimin Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=C4zmR2kyP8}
} | Continual learning (CL) empowers pre-trained vision-language (VL) models to efficiently adapt to a sequence of downstream tasks. However, these models often encounter challenges in retaining previously acquired skills due to parameter shifts and limited access to historical data. In response, recent efforts focus on devising specific frameworks and various replay strategies, striving for a typical learning-forgetting trade-off. Surprisingly, both our empirical research and theoretical analysis demonstrate that the stability of the model in consecutive zero-shot predictions serves as a reliable indicator of its anti-forgetting capabilities for previously learned tasks.
Motivated by these insights, we develop a novel replay-free CL method named ZAF (Zero-shot Antidote to Forgetting), which preserves acquired knowledge through a zero-shot stability regularization applied to wild data in a plug-and-play manner. To enhance efficiency in adapting to new tasks and seamlessly access historical models, we introduce a parameter-efficient EMA-LoRA neural architecture based on the Exponential Moving Average (EMA). ZAF utilizes new data for low-rank adaptation (LoRA), complemented by a zero-shot antidote on wild data, effectively decoupling learning from forgetting. Our extensive experiments demonstrate ZAF's superior performance and robustness in pre-trained models across various continual VL concept learning tasks, achieving leads of up to 3.70\%, 4.82\%, and 4.38\%, along with at least a 10x acceleration in training speed on three benchmarks, respectively. Additionally, our zero-shot antidote significantly reduces forgetting in existing models by at least 6.37\%. Our code is available at https://github.com/Zi-Jian-Gao/Stabilizing-Zero-Shot-Prediction-ZAF. | Stabilizing Zero-Shot Prediction: A Novel Antidote to Forgetting in Continual Vision-Language Tasks | [
"Zijian Gao",
"Xingxing Zhang",
"Kele Xu",
"Xinjun Mao",
"Huaimin Wang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=C4SInFLvuB | @inproceedings{
nagler2024reshuffling,
title={Reshuffling Resampling Splits Can Improve Generalization of Hyperparameter Optimization},
author={Thomas Nagler and Lennart Schneider and Bernd Bischl and Matthias Feurer},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=C4SInFLvuB}
} | Hyperparameter optimization is crucial for obtaining peak performance of machine learning models. The standard protocol evaluates various hyperparameter configurations using a resampling estimate of the generalization error to guide optimization and select a final hyperparameter configuration. Without much evidence, paired resampling splits, i.e., either a fixed train-validation split or a fixed cross-validation scheme, are often recommended. We show that, surprisingly, reshuffling the splits for every configuration often improves the final model's generalization performance on unseen data. Our theoretical analysis explains how reshuffling affects the asymptotic behavior of the validation loss surface and provides a bound on the expected regret in the limiting regime. This bound connects the potential benefits of reshuffling to the signal and noise characteristics of the underlying optimization problem. We confirm our theoretical results in a controlled simulation study and demonstrate the practical usefulness of reshuffling in a large-scale, realistic hyperparameter optimization experiment. While reshuffling leads to test performances that are competitive with using fixed splits, it drastically improves results for a single train-validation holdout protocol and can often make holdout become competitive with standard CV while being computationally cheaper. | Reshuffling Resampling Splits Can Improve Generalization of Hyperparameter Optimization | [
"Thomas Nagler",
"Lennart Schneider",
"Bernd Bischl",
"Matthias Feurer"
] | NeurIPS.cc/2024/Conference | 2405.15393 | [
"https://github.com/sumny/reshuffling"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=C4NbtYnyQg | @inproceedings{
lin2024flipped,
title={Flipped Classroom: Aligning Teacher Attention with Student in Generalized Category Discovery},
author={Haonan Lin and Wenbin An and Jiahao Wang and Yan Chen and Feng Tian and Mengmeng Wang and QianYing Wang and Guang Dai and Jingdong Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=C4NbtYnyQg}
} | Recent advancements have shown promise in applying traditional Semi-Supervised Learning strategies to the task of Generalized Category Discovery (GCD). Typically, this involves a teacher-student framework in which the teacher imparts knowledge to the student to classify categories, even in the absence of explicit labels. Nevertheless, GCD presents unique challenges, particularly the absence of priors for new classes, which can lead to the teacher's misguidance and unsynchronized learning with the student, culminating in suboptimal outcomes. In our work, we delve into why traditional teacher-student designs falter in generalized category discovery as compared to their success in closed-world semi-supervised learning. We identify inconsistent pattern learning as the crux of this issue and introduce FlipClass—a method that dynamically updates the teacher to align with the student's attention, instead of maintaining a static teacher reference. Our teacher-attention-update strategy refines the teacher's focus based on student feedback, promoting consistent pattern recognition and synchronized learning across old and new classes. Extensive experiments on a spectrum of benchmarks affirm that FlipClass significantly surpasses contemporary GCD methods, establishing new standards for the field. | Flipped Classroom: Aligning Teacher Attention with Student in Generalized Category Discovery | [
"Haonan Lin",
"Wenbin An",
"Jiahao Wang",
"Yan Chen",
"Feng Tian",
"Mengmeng Wang",
"QianYing Wang",
"Guang Dai",
"Jingdong Wang"
] | NeurIPS.cc/2024/Conference | 2409.19659 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=C3tEX45hJX | @inproceedings{
shribak2024diffusion,
title={Diffusion Spectral Representation for Reinforcement Learning},
author={Dmitry Shribak and Chen-Xiao Gao and Yitong Li and Chenjun Xiao and Bo Dai},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=C3tEX45hJX}
} | Diffusion-based models have achieved notable empirical successes in reinforcement learning (RL) due to their expressiveness in modeling complex distributions. Despite existing methods being promising, the key challenge of extending existing methods for broader real-world applications lies in the computational cost at inference time, i.e., sampling from a diffusion model is considerably slow as it often requires tens to hundreds of iterations to generate even one sample. To circumvent this issue, we propose to leverage the flexibility of diffusion models for RL from a representation learning perspective. In particular, by exploiting the connection between diffusion models and energy-based models, we develop Diffusion Spectral Representation (Diff-SR), a coherent algorithm framework that enables extracting sufficient representations for value functions in Markov decision processes (MDP) and partially observable Markov decision processes (POMDP). We further demonstrate how Diff-SR facilitates efficient policy optimization and practical algorithms while explicitly bypassing the difficulty and inference cost of sampling from the diffusion model. Finally, we provide comprehensive empirical studies to verify the benefits of Diff-SR in delivering robust and advantageous performance across various benchmarks with both fully and partially observable settings. | Diffusion Spectral Representation for Reinforcement Learning | [
"Dmitry Shribak",
"Chen-Xiao Gao",
"Yitong Li",
"Chenjun Xiao",
"Bo Dai"
] | NeurIPS.cc/2024/Conference | 2406.16121 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=C3ZHiij9QE | @inproceedings{
chen2024vlmimic,
title={{VLM}imic: Vision Language Models are Visual Imitation Learner for Fine-grained Actions},
author={Guangyan Chen and Meiling Wang and Te Cui and Yao Mu and Haoyang Lu and Tianxing Zhou and Zicai Peng and Mengxiao Hu and Haizhou Li and Li Yuan and Yi Yang and Yufeng Yue},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=C3ZHiij9QE}
} | Visual imitation learning (VIL) provides an efficient and intuitive strategy for robotic systems to acquire novel skills. Recent advancements in Vision Language Models (VLMs) have demonstrated remarkable performance in vision and language reasoning capabilities for VIL tasks. Despite the progress, current VIL methods naively employ VLMs to learn high-level plans from human videos, relying on pre-defined motion primitives for executing physical interactions, which remains a major bottleneck. In this work, we present VLMimic, a novel paradigm that harnesses VLMs to directly learn even fine-grained action levels, only given a limited number of human videos. Specifically, VLMimic first grounds object-centric movements from human videos, and learns skills using hierarchical constraint representations, facilitating the derivation of skills with fine-grained action levels from limited human videos. These skills are refined and updated through an iterative comparison strategy, enabling efficient adaptation to unseen environments. Our extensive experiments exhibit that our VLMimic, using only 5 human videos, yields significant improvements of over 27% and 21% in RLBench and real-world manipulation tasks, and surpasses baselines by more than 37% in long-horizon tasks. Code and videos are available on our anonymous homepage. | VLMimic: Vision Language Models are Visual Imitation Learner for Fine-grained Actions | [
"Guangyan Chen",
"Meiling Wang",
"Te Cui",
"Yao Mu",
"Haoyang Lu",
"Tianxing Zhou",
"Zicai Peng",
"Mengxiao Hu",
"Haizhou Li",
"Li Yuan",
"Yi Yang",
"Yufeng Yue"
] | NeurIPS.cc/2024/Conference | 2410.20927 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=C3JCwbMXbU | @inproceedings{
peng2024advancing,
title={Advancing Open-Set Domain Generalization Using Evidential Bi-Level Hardest Domain Scheduler},
author={Kunyu Peng and Di Wen and Kailun Yang and Ao Luo and Yufan Chen and Jia Fu and M. Saquib Sarfraz and Alina Roitberg and Rainer Stiefelhagen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=C3JCwbMXbU}
} | In Open-Set Domain Generalization (OSDG), the model is exposed to both new variations of data appearance (domains) and open-set conditions, where both known and novel categories are present at test time. The challenges of this task arise from the dual need to generalize across diverse domains and accurately quantify category novelty, which is critical for applications in dynamic environments. Recently, meta-learning techniques have demonstrated superior results in OSDG, effectively orchestrating the meta-train and -test tasks by employing varied random categories and predefined domain partition strategies. These approaches prioritize a well-designed training schedule over traditional methods that focus primarily on data augmentation and the enhancement of discriminative feature learning.
The prevailing meta-learning models in OSDG typically utilize a predefined sequential domain scheduler to structure data partitions. However, a crucial aspect that remains inadequately explored is the influence of domain scheduling strategies during training.
In this paper, we observe that an adaptive domain scheduler benefits more in OSDG compared with prefixed sequential and random domain schedulers. We propose the Evidential Bi-Level Hardest Domain Scheduler (EBiL-HaDS) to achieve an adaptive domain scheduler. This method strategically sequences domains by assessing their reliabilities in utilizing a follower network, trained with confidence scores learned in an evidential manner, regularized by max rebiasing discrepancy, and optimized in a bilevel manner. We verify our approach on three OSDG benchmarks, i.e., PACS, DigitsDG, and OfficeHome. The results show that our method substantially improves OSDG performance and achieves more discriminative embeddings for both the seen and unseen categories, underscoring the advantage of a judicious domain scheduler for the generalizability to unseen domains and unseen categories. The source code is publicly available at https://github.com/KPeng9510/EBiL-HaDS. | Advancing Open-Set Domain Generalization Using Evidential Bi-Level Hardest Domain Scheduler | [
"Kunyu Peng",
"Di Wen",
"Kailun Yang",
"Ao Luo",
"Yufan Chen",
"Jia Fu",
"M. Saquib Sarfraz",
"Alina Roitberg",
"Rainer Stiefelhagen"
] | NeurIPS.cc/2024/Conference | 2409.17555 | [
"https://github.com/kpeng9510/ebil-hads"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=C2xCLze1kS | @inproceedings{
huang2024reverse,
title={Reverse Transition Kernel: A Flexible Framework to Accelerate Diffusion Inference},
author={Xunpeng Huang and Difan Zou and Hanze Dong and Yi Zhang and Yian Ma and Tong Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=C2xCLze1kS}
} | To generate data from trained diffusion models, most inference algorithms, such as DDPM, DDIM, and other variants, rely on discretizing the reverse SDEs or their equivalent ODEs. In this paper, we view such approaches as decomposing the entire denoising diffusion process into several segments, each corresponding to a reverse transition kernel (RTK) sampling subproblem. Specifically, DDPM uses a Gaussian approximation for the RTK, resulting in low per-subproblem complexity but requiring a large number of segments (i.e., subproblems), which is conjectured to be inefficient. To address this, we develop a general RTK framework that enables a more balanced subproblem decomposition, resulting in $\tilde O(1)$ subproblems, each with strongly log-concave targets. We then propose leveraging two fast sampling algorithms, the Metropolis-Adjusted Langevin Algorithm (MALA) and Underdamped Langevin Dynamics (ULD), for solving these strongly log-concave subproblems. This gives rise to the RTK-MALA and RTK-ULD algorithms for diffusion inference. In theory, we further develop the convergence guarantees for RTK-MALA and RTK-ULD in total variation (TV) distance: RTK-ULD can achieve $\epsilon$ target error within $\tilde{\mathcal O}(d^{1/2}\epsilon^{-1})$ under mild conditions, and RTK-MALA enjoys a $\mathcal{O}(d^{2}\log(d/\epsilon))$ convergence rate under slightly stricter conditions. These theoretical results surpass the state-of-the-art convergence rates for diffusion inference and are well supported by numerical experiments. | Reverse Transition Kernel: A Flexible Framework to Accelerate Diffusion Inference | [
"Xunpeng Huang",
"Difan Zou",
"Hanze Dong",
"Yi Zhang",
"Yian Ma",
"Tong Zhang"
] | NeurIPS.cc/2024/Conference | 2405.16387 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=C1hiRbzEH9 | @inproceedings{
yao2024outofdistribution,
title={Out-Of-Distribution Detection with Diversification (Provably)},
author={Haiyun Yao and Zongbo Han and Huazhu Fu and Xi Peng and Qinghua Hu and Changqing Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=C1hiRbzEH9}
} | Out-of-distribution (OOD) detection is crucial for ensuring reliable deployment of machine learning models. Recent advancements focus on utilizing easily accessible auxiliary outliers (e.g., data from the web or other datasets) in training. However, we experimentally reveal that these methods still struggle to generalize their detection capabilities to unknown OOD data, due to the limited diversity of the auxiliary outliers collected. Therefore, we thoroughly examine this problem from the generalization perspective and demonstrate that a more diverse set of auxiliary outliers is essential for enhancing the detection capabilities. However, in practice, it is difficult and costly to collect sufficiently diverse auxiliary outlier data. Therefore, we propose a simple yet practical approach with a theoretical guarantee, termed Diversity-induced Mixup for OOD detection (diverseMix), which enhances the diversity of auxiliary outlier set for training in an efficient way. Extensive experiments show that diverseMix achieves superior performance on commonly used and recent challenging large-scale benchmarks, which further confirm the importance of the diversity of auxiliary outliers. | Out-Of-Distribution Detection with Diversification (Provably) | [
"Haiyun Yao",
"Zongbo Han",
"Huazhu Fu",
"Xi Peng",
"Qinghua Hu",
"Changqing Zhang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=C1d3VVfdVG | @inproceedings{
shi2024unchosen,
title={Unchosen Experts Can Contribute Too: Unleashing MoE Models{\textquoteright} Power by Self-Contrast},
author={Chufan Shi and Cheng Yang and Xinyu Zhu and Jiahao Wang and Taiqiang Wu and Siheng Li and Deng Cai and Yujiu Yang and Yu Meng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=C1d3VVfdVG}
} | Mixture-of-Experts (MoE) has emerged as a prominent architecture for scaling model size while maintaining computational efficiency. In MoE, each token in the input sequence activates a different subset of experts determined by a routing mechanism. However, the unchosen experts in MoE models do not contribute to the output, potentially leading to underutilization of the model's capacity.
In this work, we first conduct exploratory studies to demonstrate that increasing the number of activated experts does not necessarily improve and can even degrade the output quality. Then, we show that output distributions from an MoE model using different routing strategies substantially differ, indicating that different experts do not always act synergistically.
Motivated by these findings, we propose **S**elf-**C**ontrast **M**ixture-**o**f-**E**xperts (SCMoE), a training-free strategy that utilizes unchosen experts in a self-contrast manner during inference.
In SCMoE, the next-token probabilities are determined by contrasting the outputs from strong and weak activation using the same MoE model.
Our method is conceptually simple and computationally lightweight, as it incurs minimal latency compared to greedy decoding.
Experiments on several benchmarks (GSM8K, StrategyQA, MBPP and HumanEval) demonstrate that SCMoE can consistently enhance Mixtral 8x7B’s reasoning capability across various domains. For example, it improves the accuracy on GSM8K from 61.79 to 66.94.
Moreover, combining SCMoE with self-consistency yields additional gains, increasing major@20 accuracy from 75.59 to 78.31. | Unchosen Experts Can Contribute Too: Unleashing MoE Models’ Power by Self-Contrast | [
"Chufan Shi",
"Cheng Yang",
"Xinyu Zhu",
"Jiahao Wang",
"Taiqiang Wu",
"Siheng Li",
"Deng Cai",
"Yujiu Yang",
"Yu Meng"
] | NeurIPS.cc/2024/Conference | [
"https://github.com/davidfanzz/scmoe"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=C0EhyoPpTN | @inproceedings{
pals2024inferring,
title={Inferring stochastic low-rank recurrent neural networks from neural data},
author={Matthijs Pals and A Erdem Sa{\u{g}}tekin and Felix C Pei and Manuel Gloeckler and Jakob H. Macke},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=C0EhyoPpTN}
} | A central aim in computational neuroscience is to relate the activity of large populations of neurons to an underlying dynamical system. Models of these neural dynamics should ideally be both interpretable and fit the observed data well. Low-rank recurrent neural networks (RNNs) exhibit such interpretability by having tractable dynamics. However, it is unclear how to best fit low-rank RNNs to data consisting of noisy observations of an underlying stochastic system. Here, we propose to fit stochastic low-rank RNNs with variational sequential Monte Carlo methods. We validate our method on several datasets consisting of both continuous and spiking neural data, where we obtain lower dimensional latent dynamics than current state of the art methods. Additionally, for low-rank models with piecewise linear nonlinearities, we show how to efficiently identify all fixed points in polynomial rather than exponential cost in the number of units, making analysis of the inferred dynamics tractable for large RNNs. Our method both elucidates the dynamical systems underlying experimental recordings and provides a generative model whose trajectories match observed variability. | Inferring stochastic low-rank recurrent neural networks from neural data | [
"Matthijs Pals",
"A Erdem Sağtekin",
"Felix C Pei",
"Manuel Gloeckler",
"Jakob H. Macke"
] | NeurIPS.cc/2024/Conference | 2406.16749 | [
"https://github.com/mackelab/smc_rnns"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BxPa7Sn5Zq | @inproceedings{
huang2024learning,
title={Learning Interaction-aware 3D Gaussian Splatting for One-shot Hand Avatars},
author={Xuan Huang and Hanhui Li and Wanquan Liu and Xiaodan Liang and Yiqiang Yan and Yuhao Cheng and CHENQIANG GAO},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BxPa7Sn5Zq}
} | In this paper, we propose to create animatable avatars for interacting hands with 3D Gaussian Splatting (GS) and single-image inputs. Existing GS-based methods designed for single subjects often yield unsatisfactory results due to limited input views, various hand poses, and occlusions. To address these challenges, we introduce a novel two-stage interaction-aware GS framework that exploits cross-subject hand priors and refines 3D Gaussians in interacting areas. Particularly, to handle hand variations, we disentangle the 3D presentation of hands into optimization-based identity maps and learning-based latent geometric features and neural texture maps. Learning-based features are captured by trained networks to provide reliable priors for poses, shapes, and textures, while optimization-based identity maps enable efficient one-shot fitting of out-of-distribution hands. Furthermore, we devise an interaction-aware attention module and a self-adaptive Gaussian refinement module. These modules enhance image rendering quality in areas with intra- and inter-hand interactions, overcoming the limitations of existing GS-based methods. Our proposed method is validated via extensive experiments on the large-scale InterHand2.6M dataset, and it significantly improves the state-of-the-art performance in image quality. Code and models will be released upon acceptance. | Learning Interaction-aware 3D Gaussian Splatting for One-shot Hand Avatars | [
"Xuan Huang",
"Hanhui Li",
"Wanquan Liu",
"Xiaodan Liang",
"Yiqiang Yan",
"Yuhao Cheng",
"CHENQIANG GAO"
] | NeurIPS.cc/2024/Conference | 2410.08840 | [
"https://github.com/xuanhuang0/guassianhand"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BtCrHwiBHP | @inproceedings{
cutkosky2024fully,
title={Fully Unconstrained Online Learning},
author={Ashok Cutkosky and Zakaria Mhammedi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BtCrHwiBHP}
} | We provide a technique for OLO that obtains regret $G\|w_\star\|\sqrt{T\log(\|w_\star\|G\sqrt{T})} + \|w_\star\|^2 + G^2$ on $G$-Lipschitz losses for any comparison point $w_\star$ without knowing either $G$ or $\|w_\star\|$. Importantly, this matches the optimal bound $G\|w_\star\|\sqrt{T}$ available with such knowledge (up to logarithmic factors), unless either $\|w_\star\|$ or $G$ is so large that even $G\|w_\star\|\sqrt{T}$ is roughly linear in $T$. Thus, at a high level it matches the optimal bound in all cases in which one can achieve sublinear regret. | Fully Unconstrained Online Learning | [
"Ashok Cutkosky",
"Zakaria Mhammedi"
] | NeurIPS.cc/2024/Conference | 2405.20540 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BrvLTxEx08 | @inproceedings{
kalogiannis2024learning,
title={Learning Equilibria in Adversarial Team Markov Games: A Nonconvex-Hidden-Concave Min-Max Optimization Problem},
author={Fivos Kalogiannis and Jingming Yan and Ioannis Panageas},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BrvLTxEx08}
} | We study the problem of learning a Nash equilibrium (NE) in Markov games which is a cornerstone in multi-agent reinforcement learning (MARL). In particular, we focus on infinite-horizon adversarial team Markov games (ATMGs) in which agents that share a common reward function compete against a single opponent, *the adversary*. These games unify two-player zero-sum Markov games and Markov potential games, resulting in a setting that encompasses both collaboration and competition. Kalogiannis et al. (2023) provided an efficient equilibrium computation algorithm for ATMGs which presumes knowledge of the reward and transition functions and has no sample complexity guarantees. We contribute a learning algorithm that utilizes MARL policy gradient methods with iteration and sample complexity that is polynomial in the approximation error $\epsilon$ and the natural parameters of the ATMG, resolving the main caveats of the solution by (Kalogiannis et al., 2023). It is worth noting that previously, the existence of learning algorithms for NE was known for Markov two-player zero-sum and potential games but not for ATMGs.
Seen through the lens of min-max optimization, computing a NE in these games constitutes a nonconvex--nonconcave saddle-point problem. Min-max optimization has received extensive study. Nevertheless, the case of nonconvex--nonconcave landscapes remains elusive: in full generality, finding saddle-points is computationally intractable (Daskalakis et al., 2021). We circumvent the aforementioned intractability by developing techniques that exploit the hidden structure of the objective function via a nonconvex--concave reformulation. However, this introduces the challenge of a feasibility set with coupled constraints. We tackle these challenges by establishing novel techniques for optimizing weakly-smooth nonconvex functions, extending the framework of Devolder et al. (2014). | Learning Equilibria in Adversarial Team Markov Games: A Nonconvex-Hidden-Concave Min-Max Optimization Problem | [
"Fivos Kalogiannis",
"Jingming Yan",
"Ioannis Panageas"
] | NeurIPS.cc/2024/Conference | 2410.05673 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BrPZMOQiSN | @inproceedings{
yasuda2024sequentialattention,
title={SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization},
author={Taisuke Yasuda and Kyriakos Axiotis and Gang Fu and Mohammadhossein Bateni and Vahab Mirrokni},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BrPZMOQiSN}
} | Neural network pruning is a key technique towards engineering large yet scalable, interpretable, and generalizable models. Prior work on the subject has developed largely along two orthogonal directions: (1) differentiable pruning for efficiently and accurately scoring the importance of parameters, and (2) combinatorial optimization for efficiently searching over the space of sparse models. We unite the two approaches, both theoretically and empirically, to produce a coherent framework for structured neural network pruning in which differentiable pruning guides combinatorial optimization algorithms to select the most important sparse set of parameters. Theoretically, we show how many existing differentiable pruning techniques can be understood as nonconvex regularization for group sparse optimization, and prove that for a wide class of nonconvex regularizers, the global optimum is unique, group-sparse, and provably yields an approximate solution to a sparse convex optimization problem. The resulting algorithm that we propose, SequentialAttention++, advances the state of the art in large-scale neural network block-wise pruning tasks on the ImageNet and Criteo datasets. | SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization | [
"Taisuke Yasuda",
"Kyriakos Axiotis",
"Gang Fu",
"Mohammadhossein Bateni",
"Vahab Mirrokni"
] | NeurIPS.cc/2024/Conference | 2402.17902 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BptJGaPn9C | @inproceedings{
shahverdikondori2024qwo,
title={{QWO}: Speeding Up Permutation-Based Causal Discovery in Li{GAM}s},
author={Mohammad Shahverdikondori and Ehsan Mokhtarian and Negar Kiyavash},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BptJGaPn9C}
} | Causal discovery is essential for understanding relationships among variables of interest in many scientific domains. In this paper, we focus on permutation-based methods for learning causal graphs in Linear Gaussian Acyclic Models (LiGAMs), where the permutation encodes a causal ordering of the variables. Existing methods in this setting are not scalable due to their high computational complexity. These methods are comprised of two main components: (i) constructing a specific DAG, $\mathcal{G}^\pi$, for a given permutation $\pi$, which represents the best structure that can be learned from the available data while adhering to $\pi$, and (ii) searching over the space of permutations (i.e., causal orders) to minimize the number of edges in $\mathcal{G}^\pi$. We introduce QWO, a novel approach that significantly enhances the efficiency of computing $\mathcal{G}^\pi$ for a given permutation $\pi$. QWO has a speed-up of $O(n^2)$ ($n$ is the number of variables) compared to the state-of-the-art BIC-based method, making it highly scalable. We show that our method is theoretically sound and can be integrated into existing search strategies such as GRASP and hill-climbing-based methods to improve their performance. | QWO: Speeding Up Permutation-Based Causal Discovery in LiGAMs | [
"Mohammad Shahverdikondori",
"Ehsan Mokhtarian",
"Negar Kiyavash"
] | NeurIPS.cc/2024/Conference | 2410.23155 | [
"https://github.com/ban-epfl/QWO"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BpJ6OTfWw3 | @inproceedings{
liang2024clustering,
title={Clustering then Propagation: Select Better Anchors for Knowledge Graph Embedding},
author={KE LIANG and Yue Liu and Hao Li and Lingyuan Meng and Suyuan Liu and Siwei Wang and sihang zhou and Xinwang Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BpJ6OTfWw3}
} | Traditional knowledge graph embedding (KGE) models map entities and relations to unique embedding vectors in a shallow lookup manner. As the scale of data becomes larger, this approach incurs unaffordable computational costs. Anchor-based strategies have been treated as effective ways to alleviate such efficiency problems by propagating on representative entities instead of the whole graph. However, most existing anchor-based KGE models select the anchors in a primitive manner, which limits their performance. To this end, we propose a novel anchor-based strategy for KGE, i.e., a relational clustering-based anchor selection strategy (RecPiece), where two characteristics are leveraged, i.e., (1) the representative ability of the cluster centroids and (2) the descriptive ability of relation types in KGs. Specifically, we first perform clustering over features of factual triplets instead of entities, where the number of clusters is naturally set to the number of relation types since each fact can be characterized by its relation in KGs. Then, representative triplets are selected around the clustering centroids and further mapped into corresponding anchor entities. Extensive experiments on six datasets show that RecPiece achieves higher performance with comparable or even fewer parameters compared to previous anchor-based KGE models, indicating that our model can select better anchors in a more scalable way. | Clustering then Propagation: Select Better Anchors for Knowledge Graph Embedding | [
"KE LIANG",
"Yue Liu",
"Hao Li",
"Lingyuan Meng",
"Suyuan Liu",
"Siwei Wang",
"sihang zhou",
"Xinwang Liu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=BmwcbNYkuH | @inproceedings{
tomar2024are,
title={Are nuclear masks all you need for improved out-of-domain generalisation? A closer look at cancer classification in histopathology},
author={Dhananjay Tomar and Alexander Binder and Andreas Kleppe},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BmwcbNYkuH}
} | Domain generalisation in computational histopathology is challenging because the images are substantially affected by differences among hospitals due to factors like fixation and staining of tissue and imaging equipment. We hypothesise that focusing on nuclei can improve the out-of-domain (OOD) generalisation in cancer detection. We propose a simple approach to improve OOD generalisation for cancer detection by focusing on nuclear morphology and organisation, as these are domain-invariant features critical in cancer detection. Our approach integrates original images with nuclear segmentation masks during training, encouraging the model to prioritise nuclei and their spatial arrangement. Going beyond mere data augmentation, we introduce a regularisation technique that aligns the representations of masks and original images. We show, using multiple datasets, that our method improves OOD generalisation and also leads to increased robustness to image corruptions and adversarial attacks. The source code is available at https://github.com/undercutspiky/SFL/ | Are nuclear masks all you need for improved out-of-domain generalisation? A closer look at cancer classification in histopathology | [
"Dhananjay Tomar",
"Alexander Binder",
"Andreas Kleppe"
] | NeurIPS.cc/2024/Conference | 2411.09373 | [
"https://github.com/undercutspiky/sfl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BmG3NgH5xu | @inproceedings{
chen2024ferero,
title={{FERERO}: A Flexible Framework for Preference-Guided Multi-Objective Learning},
author={Lisha Chen and A F M Saif and Yanning Shen and Tianyi Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BmG3NgH5xu}
} | Finding specific preference-guided Pareto solutions that represent different trade-offs among multiple objectives is critical yet challenging in multi-objective problems.
Existing methods are restrictive in preference definitions and/or their theoretical guarantees.
In this work, we introduce a Flexible framEwork for pREfeRence-guided multi-Objective learning (**FERERO**) by casting it as a constrained vector optimization problem.
Specifically, two types of preferences are incorporated into this formulation -- the *relative preference* defined by the partial ordering induced by a polyhedral cone, and the *absolute preference* defined by constraints that are linear functions of the objectives.
To solve this problem, convergent algorithms are developed with both single-loop and stochastic variants.
Notably, this is the *first single-loop primal algorithm* for constrained vector optimization to our knowledge.
The proposed algorithms adaptively adjust to both constraint and objective values, eliminating the need to solve different subproblems at different stages of constraint satisfaction.
Experiments on multiple benchmarks demonstrate the proposed method is very competitive in finding preference-guided optimal solutions.
Code is available at https://github.com/lisha-chen/FERERO/. | FERERO: A Flexible Framework for Preference-Guided Multi-Objective Learning | [
"Lisha Chen",
"A F M Saif",
"Yanning Shen",
"Tianyi Chen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Bjh4mcYs20 | @inproceedings{
zeng2024effective,
title={Effective Exploration Based on the Structural Information Principles},
author={Xianghua Zeng and Hao Peng and Angsheng Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Bjh4mcYs20}
} | Traditional information theory provides a valuable foundation for Reinforcement Learning (RL), particularly through representation learning and entropy maximization for agent exploration. However, existing methods primarily concentrate on modeling the uncertainty associated with RL’s random variables, neglecting the inherent structure within the state and action spaces. In this paper, we propose a novel Structural Information principles-based Effective Exploration framework, namely SI2E. Structural mutual information between two variables is defined to address the single-variable limitation in structural information, and an innovative embedding principle is presented to capture dynamics-relevant state-action representations. The SI2E analyzes value differences in the agent’s policy between state-action pairs and minimizes structural entropy to derive the hierarchical state-action structure, referred to as the encoding tree. Under this tree structure, value-conditional structural entropy is defined and maximized to design an intrinsic reward mechanism that avoids redundant transitions and promotes enhanced coverage in the state-action space. Theoretical connections are established between SI2E and classical information-theoretic methodologies, highlighting our framework’s rationality and advantage. Comprehensive evaluations in the MiniGrid, MetaWorld, and DeepMind Control Suite benchmarks demonstrate that SI2E significantly outperforms state-of-the-art exploration baselines regarding final performance and sample efficiency, with maximum improvements of 37.63% and 60.25%, respectively. | Effective Exploration Based on the Structural Information Principles | [
"Xianghua Zeng",
"Hao Peng",
"Angsheng Li"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Bj2CpB9Dey | @inproceedings{
butler2024tangent,
title={Tangent Space Causal Inference: Leveraging Vector Fields for Causal Discovery in Dynamical Systems},
author={Kurt Butler and Daniel Waxman and Petar Djuric},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Bj2CpB9Dey}
} | Causal discovery with time series data remains a challenging yet increasingly important task across many scientific domains. Convergent cross mapping (CCM) and related methods have been proposed to study time series that are generated by dynamical systems, where traditional approaches like Granger causality are unreliable. However, CCM often yields inaccurate results depending upon the quality of the data. We propose the Tangent Space Causal Inference (TSCI) method for detecting causalities in dynamical systems. TSCI works by considering vector fields as explicit representations of the systems' dynamics and checks for the degree of synchronization between the learned vector fields. The TSCI approach is model-agnostic and can be used as a drop-in replacement for CCM and its generalizations. We first present a basic version of the TSCI algorithm, which is shown to be more effective than the basic CCM algorithm with very little additional computation. We additionally present augmented versions of TSCI that leverage the expressive power of latent variable models and deep learning. We validate our theory on standard systems, and we demonstrate improved causal inference performance across a number of benchmark tasks. | Tangent Space Causal Inference: Leveraging Vector Fields for Causal Discovery in Dynamical Systems | [
"Kurt Butler",
"Daniel Waxman",
"Petar Djuric"
] | NeurIPS.cc/2024/Conference | 2410.23499 | [
"https://github.com/KurtButler/tangentspaces"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BiikUm6pLu | @inproceedings{
jin2024truncated,
title={Truncated Variance Reduced Value Iteration},
author={Yujia Jin and Ishani Karmarkar and Aaron Sidford and Jiayi Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BiikUm6pLu}
} | We provide faster randomized algorithms for computing an $\epsilon$-optimal policy in a discounted Markov decision process with $A_{\text{tot}}$-state-action pairs, bounded rewards, and discount factor $\gamma$. We provide an $\tilde{O}(A_{\text{tot}}[(1 - \gamma)^{-3}\epsilon^{-2} + (1 - \gamma)^{-2}])$-time algorithm in the sampling setting, where the probability transition matrix is unknown but accessible through a generative model which can be queried in $\tilde{O}(1)$-time, and an $\tilde{O}(s + (1-\gamma)^{-2})$-time algorithm in the offline setting where the probability transition matrix is known and $s$-sparse. These results improve upon the prior state-of-the-art which either ran in $\tilde{O}(A_{\text{tot}}[(1 - \gamma)^{-3}\epsilon^{-2} + (1 - \gamma)^{-3}])$ time [Sidford, Wang, Wu, Ye 2018] in the sampling setting, $\tilde{O}(s + A_{\text{tot}} (1-\gamma)^{-3})$ time [Sidford, Wang, Wu, Yang, Ye 2018] in the offline setting, or time at least quadratic in the number of states using interior point methods for linear programming. We achieve our results by building upon prior stochastic variance-reduced value iteration methods [Sidford, Wang, Wu, Yang, Ye 2018]. We provide a variant that carefully truncates the progress of its iterates to improve the variance of new variance-reduced sampling procedures that we introduce to implement the steps. Our method is essentially model-free and can be implemented in $\tilde{O}(A_{\text{tot}})$-space when given generative model access. Consequently, our results take a step in closing the sample-complexity gap between model-free and model-based methods. | Truncated Variance Reduced Value Iteration | [
"Yujia Jin",
"Ishani Karmarkar",
"Aaron Sidford",
"Jiayi Wang"
] | NeurIPS.cc/2024/Conference | 2405.12952 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
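The record above works in a sampling setting where the transition kernel is only accessible through a generative model. As background, here is a minimal sketch of plain sample-based value iteration under that access model; the toy MDP, sample counts, and discount factor are assumptions for illustration, and the truncation and variance-reduction steps of the paper are not reproduced.

```python
# Minimal sketch: sample-based value iteration with generative-model access.
# Toy MDP, sample sizes, and discount factor are illustrative assumptions;
# the truncated, variance-reduced scheme of the paper is not implemented here.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # true kernel (hidden from the agent)
R = rng.uniform(0, 1, size=(n_states, n_actions))                 # bounded rewards

def sample_next_state(s, a):
    """Generative model: one O(1) query returning s' ~ P(. | s, a)."""
    return rng.choice(n_states, p=P[s, a])

V = np.zeros(n_states)
for _ in range(200):                       # value-iteration sweeps
    Q = np.zeros((n_states, n_actions))
    for s in range(n_states):
        for a in range(n_actions):
            samples = [sample_next_state(s, a) for _ in range(64)]
            Q[s, a] = R[s, a] + gamma * np.mean(V[samples])
    V = Q.max(axis=1)

print("estimated optimal values:", np.round(V, 3))
```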
null | https://openreview.net/forum?id=Bh0LLUp8OA | @inproceedings{
guruganesh2024contracting,
title={Contracting with a Learning Agent},
author={Guru Guruganesh and Yoav Kolumbus and Jon Schneider and Inbal Talgam-Cohen and Emmanouil-Vasileios Vlatakis-Gkaragkounis and Joshua Ruizhi Wang and S. Matthew Weinberg},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Bh0LLUp8OA}
} | Real-life contractual relations typically involve repeated interactions between the principal and agent, where, despite theoretical appeal, players rarely use complex dynamic strategies and instead manage uncertainty through learning algorithms.
In this paper, we initiate the study of repeated contracts with learning agents, focusing on those achieving no-regret outcomes. For the canonical setting where the agent’s actions result in success or failure, we present a simple, optimal solution for the principal: Initially provide a linear contract with scalar $\alpha > 0$, then switch to a zero-scalar contract. This shift causes the agent to “free-fall” through their action space, yielding non-zero rewards for the principal at zero cost. Interestingly, despite the apparent exploitation, there are instances where our dynamic contract can make \emph{both} players better off compared to the best static contract.
We then broaden the scope of our results to general linearly-scaled contracts, and, finally, to the best of our knowledge, we provide the first analysis of optimization against learning agents with uncertainty about the time horizon. | Contracting with a Learning Agent | [
"Guru Guruganesh",
"Yoav Kolumbus",
"Jon Schneider",
"Inbal Talgam-Cohen",
"Emmanouil-Vasileios Vlatakis-Gkaragkounis",
"Joshua Ruizhi Wang",
"S. Matthew Weinberg"
] | NeurIPS.cc/2024/Conference | 2401.16198 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BgZcuEsYU8 | @inproceedings{
levis2024causal,
title={Causal Inference in the Closed-Loop: Marginal Structural Models for Sequential Excursion Effects},
author={Alexander W. Levis and Gabriel Loewinger and Francisco Pereira},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BgZcuEsYU8}
} | Optogenetics is widely used to study the effects of neural circuit manipulation on behavior. However, the paucity of causal inference methodological work on this topic has resulted in analysis conventions that discard information, and constrain the scientific questions that can be posed. To fill this gap, we introduce a nonparametric causal inference framework for analyzing "closed-loop" designs, which use dynamic policies that assign treatment based on covariates. In this setting, standard methods can introduce bias and occlude causal effects. Building on the sequentially randomized experiments literature in causal inference, our approach extends history-restricted marginal structural models for dynamic regimes. In practice, our framework can identify a wide range of causal effects of optogenetics on trial-by-trial behavior, such as, fast/slow-acting, dose-response, additive/antagonistic, and floor/ceiling. Importantly, it does so without requiring negative controls, and can estimate how causal effect magnitudes evolve across time points. From another view, our work extends "excursion effect" methods---popular in the mobile health literature---to enable estimation of causal contrasts for treatment sequences greater than length one, in the presence of positivity violations. We derive rigorous statistical guarantees, enabling hypothesis testing of these causal effects. We demonstrate our approach on data from a recent study of dopaminergic activity on learning, and show how our method reveals relevant effects obscured in standard analyses. | Causal Inference in the Closed-Loop: Marginal Structural Models for Sequential Excursion Effects | [
"Alexander W. Levis",
"Gabriel Loewinger",
"Francisco Pereira"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=BdGFgKrlHl | @inproceedings{
benomar2024addressing,
title={Addressing Bias in Online Selection with Limited Budget of Comparisons},
author={Ziyad Benomar and Evgenii Chzhen and Nicolas Schreuder and Vianney Perchet},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BdGFgKrlHl}
} | Consider a hiring process with candidates coming from different universities. It is easy to order candidates with the same background, yet it can be challenging to compare them otherwise. The latter case requires additional costly assessments, leading to a potentially high total cost for the hiring organization. Given an assigned budget, what would be an optimal strategy to select the most qualified candidate?
We model the above problem as a multicolor secretary problem, allowing comparisons between candidates from distinct groups at a fixed cost. Our study explores how the allocated budget enhances the success probability of online selection algorithms. | Addressing Bias in Online Selection with Limited Budget of Comparisons | [
"Ziyad Benomar",
"Evgenii Chzhen",
"Nicolas Schreuder",
"Vianney Perchet"
] | NeurIPS.cc/2024/Conference | 2303.09205 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
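For context on the secretary setting described in the abstract above, the sketch below simulates the classic 1/e stopping rule for the single-group problem; the multicolor variant with a comparison budget studied in the paper is not implemented here, and the candidate model is a synthetic assumption.

```python
# Minimal sketch: the classic 1/e stopping rule for the single-group secretary
# problem, as a baseline for the multicolor, comparison-budgeted setting of the
# paper (not implemented here). Candidate ranks are synthetic.
import math
import random

def secretary_success(n: int, trials: int = 20000) -> float:
    """Empirical probability of hiring the best of n candidates with the 1/e rule."""
    threshold = int(n / math.e)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))
        random.shuffle(ranks)            # arrival order; higher rank = better candidate
        best_seen = max(ranks[:threshold], default=-1)
        hired = next((r for r in ranks[threshold:] if r > best_seen), ranks[-1])
        wins += hired == n - 1           # did we hire the overall best?
    return wins / trials

print(secretary_success(50))             # should be close to 1/e ~ 0.37
```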
null | https://openreview.net/forum?id=BZh05P2EoN | @inproceedings{
yu2024dpic,
title={{DPIC}: Decoupling Prompt and Intrinsic Characteristics for {LLM} Generated Text Detection},
author={Xiao Yu and Yuang Qi and Kejiang Chen and Guoqiang Chen and Xi Yang and PENGYUAN ZHU and Xiuwei Shang and Weiming Zhang and Nenghai Yu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BZh05P2EoN}
} | Large language models (LLMs) have the potential to generate texts that pose risks of misuse, such as plagiarism, planting fake reviews on e-commerce platforms, or creating inflammatory false tweets. Consequently, detecting whether a text is generated by LLMs has become increasingly important. Existing high-quality detection methods usually require access to the interior of the model to extract the intrinsic characteristics. However, since we do not have access to the interior of the black-box model, we must resort to surrogate models, which impacts detection quality. In order to achieve high-quality detection of black-box models, we would like to extract deep intrinsic characteristics of the black-box model generated texts. We view the generation process as a coupled process of prompt and intrinsic characteristics of the generative model. Based on this insight, we propose to decouple prompt and intrinsic characteristics (DPIC) for LLM-generated text detection method. Specifically, given a candidate text, DPIC employs an auxiliary LLM to reconstruct the prompt corresponding to the candidate text, then uses the prompt to regenerate text by the auxiliary LLM, which makes the candidate text and the regenerated text align with their prompts, respectively. Then, the similarity between the candidate text and the regenerated text is used as a detection feature, thus eliminating the prompt in the detection process, which allows the detector to focus on the intrinsic characteristics of the generative model. Compared to the baselines, DPIC has achieved an average improvement of 6.76\% and 2.91\% in detecting texts from different domains generated by GPT4 and Claude3, respectively. | DPIC: Decoupling Prompt and Intrinsic Characteristics for LLM Generated Text Detection | [
"Xiao Yu",
"Yuang Qi",
"Kejiang Chen",
"Guoqiang Chen",
"Xi Yang",
"PENGYUAN ZHU",
"Xiuwei Shang",
"Weiming Zhang",
"Nenghai Yu"
] | NeurIPS.cc/2024/Conference | 2305.12519 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
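The DPIC abstract above describes a three-step pipeline: reconstruct a prompt for the candidate text with an auxiliary LLM, regenerate text from that prompt, and use the similarity between candidate and regeneration as the detection feature. A hedged sketch follows; `ask_llm` is a hypothetical placeholder for whatever auxiliary LLM is available, and the string-ratio similarity is only a stand-in for the learned similarity feature used in the paper.

```python
# Hedged sketch of a DPIC-style detector. `ask_llm` is a hypothetical placeholder
# (wire it to any LLM API); the string ratio below stands in for the learned
# similarity model used in the paper.
from difflib import SequenceMatcher

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to the auxiliary LLM; replace with a real API call."""
    return prompt  # echo stub so the sketch runs end-to-end

def dpic_score(candidate_text: str) -> float:
    # 1) Reconstruct a prompt that could have produced the candidate text.
    prompt = ask_llm(f"Infer the instruction that produced this text:\n{candidate_text}")
    # 2) Regenerate text from the reconstructed prompt with the same auxiliary LLM.
    regenerated = ask_llm(prompt)
    # 3) Higher similarity between candidate and regeneration suggests LLM-generated text.
    return SequenceMatcher(None, candidate_text, regenerated).ratio()

print(dpic_score("An example passage whose origin we want to test."))
```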
null | https://openreview.net/forum?id=BZLdXBjB8O | @inproceedings{
zhang2024causaldiff,
title={CausalDiff: Causality-Inspired Disentanglement via Diffusion Model for Adversarial Defense},
author={Mingkun Zhang and Keping Bi and Wei Chen and Quanrun Chen and Jiafeng Guo and Xueqi Cheng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BZLdXBjB8O}
} | Despite ongoing efforts to defend neural classifiers from adversarial attacks, they remain vulnerable, especially to unseen attacks. In contrast, humans are difficult to fool with subtle manipulations, since we make judgments based only on essential factors. Inspired by this observation, we attempt to model label generation with essential label-causative factors and incorporate label-non-causative factors to assist data generation. For an adversarial example, we aim to discriminate the perturbations as non-causative factors and make predictions based only on the label-causative factors. Concretely, we propose a causal diffusion model (CausalDiff) that adapts diffusion models for conditional data generation and disentangles the two types of causal factors by learning towards a novel causal information bottleneck objective. Empirically, CausalDiff has significantly outperformed state-of-the-art defense methods on various unseen attacks, achieving an average robustness of 86.39\% (+4.01\%) on CIFAR-10, 56.25\% (+3.13\%) on CIFAR-100, and 82.62\% (+4.93\%) on GTSRB (German Traffic Sign Recognition Benchmark). | CausalDiff: Causality-Inspired Disentanglement via Diffusion Model for Adversarial Defense | [
"Mingkun Zhang",
"Keping Bi",
"Wei Chen",
"Quanrun Chen",
"Jiafeng Guo",
"Xueqi Cheng"
] | NeurIPS.cc/2024/Conference | 2410.23091 | [
"https://github.com/cas-aisafetybasicresearchgroup/causaldiff"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BUpxPo80QP | @inproceedings{
xu2024interdreamer,
title={InterDreamer: Zero-Shot Text to 3D Dynamic Human-Object Interaction},
author={Sirui Xu and Ziyin Wang and Yu-Xiong Wang and Liangyan Gui},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BUpxPo80QP}
} | Text-conditioned human motion generation has experienced significant advancements with diffusion models trained on extensive motion capture data and corresponding textual annotations. However, extending such success to 3D dynamic human-object interaction (HOI) generation faces notable challenges, primarily due to the lack of large-scale interaction data and comprehensive descriptions that align with these interactions. This paper takes the initiative and showcases the potential of generating human-object interactions without direct training on text-interaction pair data. Our key insight in achieving this is that interaction semantics and dynamics can be decoupled. Being unable to learn interaction semantics through supervised training, we instead leverage pre-trained large models, synergizing knowledge from a large language model and a text-to-motion model. While such knowledge offers high-level control over interaction semantics, it cannot grasp the intricacies of low-level interaction dynamics. To overcome this issue, we introduce a world model designed to comprehend simple physics, modeling how human actions influence object motion. By integrating these components, our novel framework, InterDreamer, is able to generate text-aligned 3D HOI sequences without relying on paired text-interaction data. We apply InterDreamer to the BEHAVE, OMOMO, and CHAIRS datasets, and our comprehensive experimental analysis demonstrates its capability to generate realistic and coherent interaction sequences that seamlessly align with the text directives. | InterDreamer: Zero-Shot Text to 3D Dynamic Human-Object Interaction | [
"Sirui Xu",
"Ziyin Wang",
"Yu-Xiong Wang",
"Liangyan Gui"
] | NeurIPS.cc/2024/Conference | 2403.19652 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BSYn7ah4KX | @inproceedings{
ren2024bias,
title={Bias Amplification in Language Model Evolution: An Iterated Learning Perspective},
author={Yi Ren and Shangmin Guo and Linlu Qiu and Bailin Wang and Danica J. Sutherland},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BSYn7ah4KX}
} | With the widespread adoption of Large Language Models (LLMs), the prevalence of iterative interactions among these models is anticipated to increase. Notably, recent advancements in multi-round on-policy self-improving methods allow LLMs to generate new examples for training subsequent models. At the same time, multi-agent LLM systems, involving automated interactions among agents, are also increasing in prominence. Thus, in both short and long terms, LLMs may actively engage in an evolutionary process. We draw parallels between the behavior of LLMs and the evolution of human culture, as the latter has been extensively studied by cognitive scientists for decades. Our approach involves leveraging Iterated Learning (IL), a Bayesian framework that elucidates how subtle biases are magnified during human cultural evolution, to explain some behaviors of LLMs. This paper outlines key characteristics of agents' behavior in the Bayesian-IL framework, including predictions that are supported by experimental verification with various LLMs. This theoretical framework could help to more effectively predict and guide the evolution of LLMs in desired directions. | Bias Amplification in Language Model Evolution: An Iterated Learning Perspective | [
"Yi Ren",
"Shangmin Guo",
"Linlu Qiu",
"Bailin Wang",
"Danica J. Sutherland"
] | NeurIPS.cc/2024/Conference | 2404.04286 | [
"https://github.com/joshua-ren/iicl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
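The Bayesian iterated-learning framework invoked in the abstract above can be illustrated with a toy chain of Bayesian learners: each generation estimates a Bernoulli parameter from a handful of samples produced by the previous generation under a mildly biased prior, and the estimate drifts toward the prior over generations. The prior, sample size, and chain length below are illustrative assumptions, not values from the paper.

```python
# Toy Bayesian iterated-learning chain: a subtle prior bias accumulates as the
# estimate is passed through generations of learners. All parameters are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
a, b = 2.2, 2.0          # mildly biased Beta prior (favors theta > 0.5)
n_samples = 5            # few samples per generation, so the prior dominates over time
theta = 0.5              # the "ground truth" the first generation observes

for gen in range(30):
    k = rng.binomial(n_samples, theta)            # data produced for the next learner
    theta = (a + k) / (a + b + n_samples)         # posterior-mean learner
    if gen % 10 == 9:
        print(f"generation {gen + 1:2d}: theta = {theta:.3f}")
```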
null | https://openreview.net/forum?id=BRvGfN3Xfm | @inproceedings{
johnson2024a,
title={A Unifying Normative Framework of Decision Confidence},
author={Amelia Johnson and Michael A Buice and Koosha Khalvati},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BRvGfN3Xfm}
} | Self-assessment of one’s choices, i.e., confidence, is the topic of many decision neuroscience studies. Computational models of confidence, however, are limited to specific scenarios, such as choices between options of the same value. Here we present a normative framework for modeling decision confidence that is generalizable to various tasks and experimental setups. We further derive the implications of our model from both theoretical and experimental points of view. Specifically, we show that our model maps to the planning-as-inference framework, where the objective function maximizes the gained reward and the information entropy of the policy. Moreover, we validate our model on two different psychophysics experiments and show its superiority over other approaches in explaining subjects' confidence reports. | A Unifying Normative Framework of Decision Confidence | [
"Amelia Johnson",
"Michael A Buice",
"Koosha Khalvati"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=BRZYhVHvSg | @inproceedings{
oesterling2024multigroup,
title={Multi-Group Proportional Representation in Retrieval},
author={Alex Oesterling and Claudio Mayrink Verdun and Alexander Glynn and Carol Xuan Long and Lucas Monteiro Paes and Sajani Vithana and Martina Cardone and Flavio Calmon},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BRZYhVHvSg}
} | Image search and retrieval tasks can perpetuate harmful stereotypes, erase cultural identities, and amplify social disparities. Current approaches to mitigate these representational harms balance the number of retrieved items across population groups defined by a small number of (often binary) attributes. However, most existing methods overlook intersectional groups determined by combinations of
group attributes, such as gender, race, and ethnicity. We introduce Multi-Group Proportional Representation (MPR), a novel metric that measures representation across intersectional groups. We develop practical methods for estimating MPR, provide theoretical guarantees, and propose optimization algorithms to ensure MPR in retrieval. We demonstrate that existing methods optimizing for equal and proportional representation metrics may fail to promote MPR. Crucially, our work shows that optimizing MPR yields more proportional representation across multiple intersectional groups specified by a rich function class, often with minimal compromise in retrieval accuracy. Code is provided at https://github.com/alex-oesterling/multigroup-proportional-representation. | Multi-Group Proportional Representation in Retrieval | [
"Alex Oesterling",
"Claudio Mayrink Verdun",
"Alexander Glynn",
"Carol Xuan Long",
"Lucas Monteiro Paes",
"Sajani Vithana",
"Martina Cardone",
"Flavio Calmon"
] | NeurIPS.cc/2024/Conference | 2407.08571 | [
"https://github.com/alex-oesterling/multigroup-proportional-representation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
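MPR, as described above, measures representation across intersectional groups specified by a rich function class. The sketch below computes only a simplified version restricted to explicitly enumerated gender-by-ethnicity groups: the worst-case gap between each group's share in the retrieved set and its share in a reference population. All labels and data are synthetic assumptions.

```python
# Simplified sketch of proportional-representation measurement over explicitly
# listed intersectional groups (the paper's MPR generalizes this to a rich
# function class of group indicators). All data below is synthetic.
from itertools import product

population = [  # (gender, ethnicity) labels for a reference corpus (synthetic)
    ("f", "a"), ("f", "b"), ("m", "a"), ("m", "b"), ("f", "a"), ("m", "a"),
]
retrieved = [("m", "a"), ("m", "a"), ("f", "a"), ("m", "b")]

def share(items, group):
    return sum(x == group for x in items) / len(items)

groups = sorted(set(product({g for g, _ in population}, {e for _, e in population})))
worst_gap = max(abs(share(retrieved, g) - share(population, g)) for g in groups)
print(f"max representation gap over intersectional groups: {worst_gap:.2f}")
```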
null | https://openreview.net/forum?id=BRW0MKJ7Rr | @inproceedings{
wiltzer2024action,
title={Action Gaps and Advantages in Continuous-Time Distributional Reinforcement Learning},
author={Harley Wiltzer and Marc G Bellemare and David Meger and Patrick Shafto and Yash Jhaveri},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BRW0MKJ7Rr}
} | When decisions are made at high frequency, traditional reinforcement learning (RL) methods struggle to accurately estimate action values. In turn, their performance is inconsistent and often poor. Whether the performance of distributional RL (DRL) agents suffers similarly, however, is unknown. In this work, we establish that DRL agents *are* sensitive to the decision frequency. We prove that action-conditioned return distributions collapse to their underlying policy's return distribution as the decision frequency increases. We quantify the rate of collapse of these return distributions and exhibit that their statistics collapse at different rates. Moreover, we define distributional perspectives on action gaps and advantages. In particular, we introduce the *superiority* as a probabilistic generalization of the advantage---the core object of approaches to mitigating performance issues in high-frequency value-based RL. In addition, we build a superiority-based DRL algorithm. Through simulations in an option-trading domain, we validate that proper modeling of the superiority distribution produces improved controllers at high decision frequencies. | Action Gaps and Advantages in Continuous-Time Distributional Reinforcement Learning | [
"Harley Wiltzer",
"Marc G Bellemare",
"David Meger",
"Patrick Shafto",
"Yash Jhaveri"
] | NeurIPS.cc/2024/Conference | 2410.11022 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BROvXhmzYK | @inproceedings{
zhou2024selfdiscover,
title={{SELF}-{DISCOVER}: Large Language Models Self-Compose Reasoning Structures},
author={Pei Zhou and Jay Pujara and Xiang Ren and Xinyun Chen and Heng-Tze Cheng and Quoc V Le and Ed H. Chi and Denny Zhou and Swaroop Mishra and Steven Zheng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BROvXhmzYK}
} | We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods. Core to the framework is a self-discovery process where LLMs select multiple atomic reasoning modules such as critical thinking and step-by-step thinking, and compose them into an explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER substantially improves GPT-4 and PaLM 2’s performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER outperforms inference-intensive methods such as CoT-Self-Consistency by more than 20%, while requiring 10-40x less inference compute. Finally, we show that the self-discovered reasoning structures are universally applicable across model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share commonalities with human reasoning patterns. | SELF-DISCOVER: Large Language Models Self-Compose Reasoning Structures | [
"Pei Zhou",
"Jay Pujara",
"Xiang Ren",
"Xinyun Chen",
"Heng-Tze Cheng",
"Quoc V Le",
"Ed H. Chi",
"Denny Zhou",
"Swaroop Mishra",
"Steven Zheng"
] | NeurIPS.cc/2024/Conference | 2402.03620 | [
""
] | https://huggingface.co/papers/2402.03620 | 2 | 109 | 10 | 10 | [] | [] | [
"kailashsp/SELF-DISCOVER"
] | [] | [] | [
"kailashsp/SELF-DISCOVER"
] | 1 | poster |
null | https://openreview.net/forum?id=BQh1SGvROG | @inproceedings{
xu2024adanca,
title={Adan{CA}: Neural Cellular Automata As Adaptors For More Robust Vision Transformer},
author={Yitao Xu and Tong Zhang and Sabine Susstrunk},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BQh1SGvROG}
} | Vision Transformers (ViTs) demonstrate remarkable performance in image classification through visual-token interaction learning, particularly when equipped with local information via region attention or convolutions. Although such architectures improve the feature aggregation from different granularities, they often fail to contribute to the robustness of the networks. Neural Cellular Automata (NCA) enables the modeling of global visual-token representations through local interactions, with its training strategies and architecture design conferring strong generalization ability and robustness against noisy input. In this paper, we propose Adaptor Neural Cellular Automata (AdaNCA) for Vision Transformers that uses NCA as plug-and-play adaptors between ViT layers, thus enhancing ViT's performance and robustness against adversarial samples as well as out-of-distribution inputs. To overcome the large computational overhead of standard NCAs, we propose Dynamic Interaction for more efficient interaction learning. Using our analysis of AdaNCA placement and robustness improvement, we also develop an algorithm for identifying the most effective insertion points for AdaNCA. With less than a 3% increase in parameters, AdaNCA contributes to more than 10% absolute improvement in accuracy under adversarial attacks on the ImageNet1K benchmark. Moreover, we demonstrate with extensive evaluations across eight robustness benchmarks and four ViT architectures that AdaNCA, as a plug-and-play module, consistently improves the robustness of ViTs. | AdanCA: Neural Cellular Automata As Adaptors For More Robust Vision Transformer | [
"Yitao Xu",
"Tong Zhang",
"Sabine Susstrunk"
] | NeurIPS.cc/2024/Conference | 2406.08298 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BOtjMacACI | @inproceedings{
dong2024efficient,
title={Efficient Adaptation of Pre-trained Vision Transformer via Householder Transformation},
author={Wei Dong and Yuan Sun and Yiting Yang and Xing Zhang and Zhijun Lin and Qingsen Yan and Haokui Zhang and Peng Wang and Yang Yang and Heng Tao Shen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BOtjMacACI}
} | A common strategy for Parameter-Efficient Fine-Tuning (PEFT) of pre-trained Vision Transformers (ViTs) involves adapting the model to downstream tasks by learning a low-rank adaptation matrix. This matrix is decomposed into a product of down-projection and up-projection matrices, with the bottleneck dimensionality being crucial for reducing the number of learnable parameters, as exemplified by prevalent methods like LoRA and Adapter. However, these low-rank strategies typically employ a fixed bottleneck dimensionality, which limits their flexibility in handling layer-wise variations. To address this limitation, we propose a novel PEFT approach inspired by Singular Value Decomposition (SVD) for representing the adaptation matrix. SVD decomposes a matrix into the product of a left unitary matrix, a diagonal matrix of scaling values, and a right unitary matrix. We utilize Householder transformations to construct orthogonal matrices that efficiently mimic the unitary matrices, requiring only a vector. The diagonal values are learned in a layer-wise manner, allowing them to flexibly capture the unique properties of each layer. This approach enables the generation of adaptation matrices with varying ranks across different layers, providing greater flexibility in adapting pre-trained models. Experiments on standard downstream vision tasks demonstrate that our method achieves promising fine-tuning performance. | Efficient Adaptation of Pre-trained Vision Transformer via Householder Transformation | [
"Wei Dong",
"Yuan Sun",
"Yiting Yang",
"Xing Zhang",
"Zhijun Lin",
"Qingsen Yan",
"Haokui Zhang",
"Peng Wang",
"Yang Yang",
"Heng Tao Shen"
] | NeurIPS.cc/2024/Conference | 2410.22952 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
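The Householder-based adapter described above parameterizes an SVD-like adaptation matrix with two vectors and a diagonal. A minimal numpy sketch of that construction follows; the hidden size and the way the adapter is added to a frozen weight are assumptions, not the paper's exact layer design.

```python
# Minimal sketch: an SVD-like adaptation matrix built from two Householder
# reflections (each parameterized by a single vector) and a learnable diagonal.
# Dimensions and usage are illustrative assumptions.
import numpy as np

def householder(v: np.ndarray) -> np.ndarray:
    """Orthogonal reflection H = I - 2 v v^T / ||v||^2, parameterized by one vector."""
    v = v / np.linalg.norm(v)
    return np.eye(v.size) - 2.0 * np.outer(v, v)

d = 8                                               # hidden size (assumption)
rng = np.random.default_rng(0)
u, v = rng.normal(size=d), rng.normal(size=d)       # trainable vectors
s = rng.normal(scale=0.01, size=d)                  # trainable per-layer "singular values"

delta_W = householder(u) @ np.diag(s) @ householder(v)   # adaptation matrix
W = rng.normal(size=(d, d))                               # frozen pre-trained weight
x = rng.normal(size=d)
y = (W + delta_W) @ x                                     # adapted forward pass
print(delta_W.shape, np.allclose(householder(u) @ householder(u).T, np.eye(d)))
```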
null | https://openreview.net/forum?id=BOrut7M2X7 | @inproceedings{
janati2024divideandconquer,
title={Divide-and-Conquer Posterior Sampling for Denoising Diffusion priors},
author={Yazid Janati and Badr MOUFAD and Alain Oliviero Durmus and Eric Moulines and Jimmy Olsson},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BOrut7M2X7}
} | Recent advancements in solving Bayesian inverse problems have spotlighted denoising diffusion models (DDMs) as effective priors.
Although these have great potential, DDM priors yield complex posterior distributions that are challenging to sample from.
Existing approaches to posterior sampling in this context address this problem either by retraining model-specific components, leading to stiff and cumbersome methods, or by introducing approximations with uncontrolled errors that affect the accuracy of the produced samples.
We present an innovative framework, divide-and-conquer posterior sampling, which leverages the inherent structure of DDMs to construct a sequence of intermediate posteriors that guide the produced samples to the target posterior.
Our method significantly reduces the approximation error associated with current techniques without the need for retraining.
We demonstrate the versatility and effectiveness of our approach for a wide range of Bayesian inverse problems.
The code is available at \url{https://github.com/Badr-MOUFAD/dcps} | Divide-and-Conquer Posterior Sampling for Denoising Diffusion priors | [
"Yazid Janati",
"Badr MOUFAD",
"Alain Oliviero Durmus",
"Eric Moulines",
"Jimmy Olsson"
] | NeurIPS.cc/2024/Conference | 2403.11407 | [
"https://github.com/badr-moufad/dcps"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BOhnXyIPWW | @inproceedings{
zhou2024locally,
title={Locally Private and Robust Multi-Armed Bandits},
author={Xingyu Zhou and WEI ZHANG},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BOhnXyIPWW}
} | We study the interplay between local differential privacy (LDP) and robustness to Huber corruption and possibly heavy-tailed rewards in the context of multi-armed bandits (MABs). We consider two different practical settings: LDP-then-Corruption (LTC) where each user's locally private response might be further corrupted during the data collection process, and Corruption-then-LDP (CTL) where each user's raw data may be corrupted such that the LDP mechanism will only be applied to the corrupted data. To start with, we present the first tight characterization of the mean estimation error in high probability under both LTC and CTL settings. Leveraging this new result, we then present an almost tight characterization (up to log factor) of the minimax regret in online MABs and sub-optimality in offline MABs under both LTC and CTL settings, respectively. Our theoretical results in both settings are also corroborated by a set of systematic simulations. One key message in this paper is that LTC is a more difficult setting that leads to a worse performance guarantee compared to the CTL setting (in the minimax sense). Our sharp understanding of LTC and CTL also naturally allows us to give the first tight performance bounds for the most practical setting where corruption could happen both before and after the LDP mechanism.
As an important by-product, we also give the first correct and tight regret bound for locally private and heavy-tailed online MABs, i.e., without Huber corruption, by identifying a fundamental flaw in the state-of-the-art. | Locally Private and Robust Multi-Armed Bandits | [
"Xingyu Zhou",
"WEI ZHANG"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=BNnZwbZGpm | @inproceedings{
li2024provably,
title={Provably Faster Algorithms for Bilevel Optimization via Without-Replacement Sampling},
author={Junyi Li and Heng Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BNnZwbZGpm}
} | Bilevel Optimization has experienced significant advancements recently with the introduction of new efficient algorithms. Mirroring the success in single-level optimization, stochastic gradient-based algorithms are widely used in bilevel optimization. However, a common limitation in these algorithms is the presumption of independent sampling, which can lead to increased computational costs due to the unique hyper-gradient structure in bilevel problems. To address this challenge, we study the example-selection strategy for bilevel optimization in this work. More specifically, we introduce a without-replacement sampling based algorithm which achieves a faster convergence rate compared to its counterparts that rely on independent sampling. Beyond the standard bilevel optimization formulation, we extend our discussion to conditional bilevel optimization and also two special cases: minimax and compositional optimization. Finally, we validate our algorithms over both synthetic and real-world applications. Numerical results clearly showcase the superiority of our algorithms. | Provably Faster Algorithms for Bilevel Optimization via Without-Replacement Sampling | [
"Junyi Li",
"Heng Huang"
] | NeurIPS.cc/2024/Conference | 2411.05868 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BJv1t4XNJW | @inproceedings{
jiang2024slot,
title={Slot State Space Models},
author={Jindong Jiang and Fei Deng and Gautam Singh and Minseung Lee and Sungjin Ahn},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BJv1t4XNJW}
} | Recent State Space Models (SSMs) such as S4, S5, and Mamba have shown remarkable computational benefits in long-range temporal dependency modeling. However, in many sequence modeling problems, the underlying process is inherently modular and it is of interest to have inductive biases that mimic this modular structure. In this paper, we introduce SlotSSMs, a novel framework for incorporating independent mechanisms into SSMs to preserve or encourage separation of information. Unlike conventional SSMs that maintain a monolithic state vector, SlotSSMs maintains the state as a collection of multiple vectors called slots. Crucially, the state transitions are performed independently per slot with sparse interactions across slots implemented via the bottleneck of self-attention. In experiments, we evaluate our model in object-centric learning, 3D visual reasoning, and long-context video understanding tasks, which involve modeling multiple objects and their long-range temporal dependencies. We find that our proposed design offers substantial performance gains over existing sequence modeling methods. Project page is available at \url{https://slotssms.github.io/} | Slot State Space Models | [
"Jindong Jiang",
"Fei Deng",
"Gautam Singh",
"Minseung Lee",
"Sungjin Ahn"
] | NeurIPS.cc/2024/Conference | 2406.12272 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BJrBaLoDRJ | @inproceedings{
jiang2024a,
title={A robust inlier identification algorithm for point cloud registration via $\mathbf{\ell_0}$-minimization},
author={Yinuo Jiang and Tang Xiuchuan and Cheng Cheng and Ye Yuan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BJrBaLoDRJ}
} | Correspondences in point cloud registration are prone to outliers, significantly reducing registration accuracy and highlighting the need for precise inlier identification. In this paper, we propose a robust inlier identification algorithm for point cloud registration by reformulating the conventional registration problem as an alignment error $\ell_0$-minimization problem. The $\ell_0$-minimization problem is formulated for each local set, where those local sets are built on a compatibility graph of input correspondences. To resolve the $\ell_0$-minimization, we develop a novel two-stage decoupling strategy, which first decouples the alignment error into a rotation fitting error and a translation fitting error. Second, null-space matrices are employed to decouple inlier identification from the estimation of rotation and translation respectively, thereby applying Bayesian theory to $\ell_0$-minimization problems and solving for fitting errors. Correspondences with the smallest errors are identified as inliers to generate a transformation hypothesis for each local set. The best hypothesis is selected to perform registration. We demonstrate that the proposed inlier identification algorithm is robust under high outlier ratios and noise through experiments. Extensive results on the KITTI, 3DMatch, and 3DLoMatch datasets demonstrate that our method achieves state-of-the-art performance compared to both traditional and learning-based methods in various indoor and outdoor scenes. | A robust inlier identification algorithm for point cloud registration via ℓ_0-minimization | [
"Yinuo Jiang",
"Tang Xiuchuan",
"Cheng Cheng",
"Ye Yuan"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=BJndYScO6o | @inproceedings{
pan2024modelbased,
title={Model-based Diffusion for Trajectory Optimization},
author={Chaoyi Pan and Zeji Yi and Guanya Shi and Guannan Qu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BJndYScO6o}
} | Recent advances in diffusion models have demonstrated their strong capabilities in generating high-fidelity samples from complex distributions through an iterative refinement process. Despite the empirical success of diffusion models in motion planning and control, the model-free nature of these methods does not leverage readily available model information and limits their generalization to new scenarios beyond the training data (e.g., new robots with different dynamics). In this work, we introduce Model-Based Diffusion (MBD), an optimization approach using the diffusion process to solve trajectory optimization (TO) problems without data. The key idea is to explicitly compute the score function by leveraging the model information in TO problems, which is why we refer to our approach as model-based diffusion. Moreover, although MBD does not require external data, it can be naturally integrated with data of diverse qualities to steer the diffusion process. We also reveal that MBD has interesting connections to sampling-based optimization. Empirical evaluations show that MBD outperforms state-of-the-art reinforcement learning and sampling-based TO methods in challenging contact-rich tasks. Additionally, MBD’s ability to integrate with data enhances its versatility and practical applicability, even with imperfect and infeasible data (e.g., partial-state demonstrations for high-dimensional humanoids), beyond the scope of standard diffusion models. Videos and codes are available in the supplementary materials. | Model-based Diffusion for Trajectory Optimization | [
"Chaoyi Pan",
"Zeji Yi",
"Guanya Shi",
"Guannan Qu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=BJ6HkT7qIk | @inproceedings{
laan2024selfcalibrating,
title={Self-Calibrating Conformal Prediction},
author={Lars van der Laan and Ahmed Alaa},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BJ6HkT7qIk}
} | In machine learning, model calibration and predictive inference are essential for producing reliable predictions and quantifying uncertainty to support decision-making. Recognizing the complementary roles of point and interval predictions, we introduce Self-Calibrating Conformal Prediction, a method that combines Venn-Abers calibration and conformal prediction to deliver calibrated point predictions alongside prediction intervals with finite-sample validity conditional on these predictions. To achieve this, we extend the original Venn-Abers procedure from binary classification to regression. Our theoretical framework supports analyzing conformal prediction methods that involve calibrating model predictions and subsequently constructing conditionally valid prediction intervals on the same data, where the conditioning set or conformity scores may depend on the calibrated predictions. Real-data experiments show that our method improves interval efficiency through model calibration and offers a practical alternative to feature-conditional validity. | Self-Calibrating Conformal Prediction | [
"Lars van der Laan",
"Ahmed Alaa"
] | NeurIPS.cc/2024/Conference | 2402.07307 | [
"https://github.com/larsvanderlaan/selfcalibratingconformal"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
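As background for the abstract above, the sketch below shows a generic split-conformal regression interval with finite-sample marginal coverage; it does not implement the Venn-Abers calibration or the prediction-conditional validity of the paper, and the model and data are synthetic assumptions.

```python
# Generic split-conformal regression sketch with finite-sample marginal coverage.
# Background for the abstract above, not the paper's self-calibrating procedure;
# the cubic "model" and the data are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=500)
y = np.sin(x) + 0.3 * rng.normal(size=x.size)

fit_idx, cal_idx = np.arange(0, 300), np.arange(300, 500)
coefs = np.polyfit(x[fit_idx], y[fit_idx], deg=3)   # point predictor fit on the proper training split

def predict(t):
    return np.polyval(coefs, t)

alpha = 0.1
scores = np.abs(y[cal_idx] - predict(x[cal_idx]))   # conformity scores on the calibration split
level = np.ceil((1 - alpha) * (len(cal_idx) + 1)) / len(cal_idx)
q = np.quantile(scores, level)

x_new = 1.2
print(f"90% interval at x={x_new}: [{predict(x_new) - q:.2f}, {predict(x_new) + q:.2f}]")
```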
null | https://openreview.net/forum?id=BGOGknwHbi | @inproceedings{
iklassov2024selfguiding,
title={Self-Guiding Exploration for Combinatorial Problems},
author={Zangir Iklassov and Yali Du and Farkhad Akimov and Martin Tak{\'a}{\v{c}}},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BGOGknwHbi}
} | Large Language Models (LLMs) have become pivotal in addressing reasoning tasks across diverse domains, including arithmetic, commonsense, and symbolic reasoning. They utilize prompting techniques such as Exploration-of-Thought, Decomposition, and Refinement to effectively navigate and solve intricate tasks. Despite these advancements, the application of LLMs to Combinatorial Problems (CPs), known for their NP-hardness and critical roles in logistics and resource management, remains underexplored. To address this gap, we introduce a novel prompting strategy: Self-Guiding Exploration (SGE), designed to enhance performance on CPs. SGE operates autonomously, generating multiple thought trajectories for each CP task. It then breaks these trajectories down into actionable subtasks, executes them sequentially, and refines the results to ensure optimal outcomes. We present our research as the first to apply LLMs to a broad range of CPs and demonstrate that SGE outperforms existing prompting strategies by over 27.84% in CP optimization performance. Additionally, SGE achieves 2.46% higher accuracy than the best existing results on other reasoning tasks (arithmetic, commonsense, and symbolic). | Self-Guiding Exploration for Combinatorial Problems | [
"Zangir Iklassov",
"Yali Du",
"Farkhad Akimov",
"Martin Takáč"
] | NeurIPS.cc/2024/Conference | 2405.17950 | [
"https://github.com/zangir/llm-for-cp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BFWdIPPLgZ | @inproceedings{
cui2024a,
title={A Phase Transition between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention},
author={Hugo Cui and Freya Behrens and Florent Krzakala and Lenka Zdeborova},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BFWdIPPLgZ}
} | Many empirical studies have provided evidence for the emergence of algorithmic mechanisms (abilities) in the learning of language models, that lead to qualitative improvements of the model capabilities. Yet, a theoretical characterization of how such mechanisms emerge remains elusive. In this paper, we take a step in this direction by providing a tight theoretical analysis of the emergence of semantic attention in a solvable model of dot-product attention. More precisely, we consider a non-linear self-attention layer with trainable tied and low-rank query and key matrices. In the asymptotic limit of high-dimensional data and a comparably large number of training samples we provide a tight closed-form characterization of the global minimum of the non-convex empirical loss landscape. We show that this minimum corresponds to either a positional attention mechanism (with tokens attending to each other based on their respective positions) or a semantic attention mechanism (with tokens attending to each other based on their meaning), and evidence an emergent phase transition from the former to the latter with increasing sample complexity. Finally, we compare the dot-product attention layer to a linear positional baseline, and show that it outperforms the latter using the semantic mechanism provided it has access to sufficient data. | A Phase Transition between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention | [
"Hugo Cui",
"Freya Behrens",
"Florent Krzakala",
"Lenka Zdeborova"
] | NeurIPS.cc/2024/Conference | 2402.03902 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=BEiqNQZIky | @inproceedings{
ren2024efficiently,
title={Efficiently Learning Significant Fourier Feature Pairs for Statistical Independence Testing},
author={Yixin Ren and Yewei Xia and Hao Zhang and Jihong Guan and Shuigeng Zhou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BEiqNQZIky}
} | We propose a novel method to efficiently learn significant Fourier feature pairs for maximizing the power of Hilbert-Schmidt Independence Criterion~(HSIC) based independence tests. We first reinterpret HSIC in the frequency domain, which reveals its limited discriminative power due to the inability to adapt to specific frequency-domain features under the current inflexible configuration. To remedy this shortcoming, we introduce a module of learnable Fourier features, thereby developing a new criterion. We then derive a finite sample estimate of the test power by modeling the behavior of the criterion, thus formulating an optimization objective for significant Fourier feature pairs learning. We show that this optimization objective can be computed in linear time (with respect to the sample size $n$), which ensures fast independence tests. We also prove the convergence property of the optimization objective and establish the consistency of the independence tests. Extensive empirical evaluation on both synthetic and real datasets validates our method's superiority in effectiveness and efficiency, particularly in handling high-dimensional data and dealing with large-scale scenarios. | Efficiently Learning Significant Fourier Feature Pairs for Statistical Independence Testing | [
"Yixin Ren",
"Yewei Xia",
"Hao Zhang",
"Jihong Guan",
"Shuigeng Zhou"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
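The method above builds on HSIC-based independence testing. For reference, here is the standard biased HSIC estimate with fixed Gaussian kernels, i.e., the baseline statistic whose frequency-domain reinterpretation and learnable Fourier feature pairs the paper introduces; bandwidths and data are assumptions.

```python
# Plain (biased) HSIC estimate with fixed Gaussian kernels, the baseline statistic
# that the paper reinterprets in the frequency domain and improves with learned
# Fourier feature pairs. Bandwidths and data are assumptions.
import numpy as np

def gaussian_gram(z: np.ndarray, sigma: float) -> np.ndarray:
    sq = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * sigma ** 2))

def hsic(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    n = x.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K, L = gaussian_gram(x, sigma), gaussian_gram(y, sigma)
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
print("dependent:  ", hsic(x, x ** 2 + 0.1 * rng.normal(size=x.shape)))
print("independent:", hsic(x, rng.normal(size=x.shape)))
```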
null | https://openreview.net/forum?id=BDrWQTrfyI | @inproceedings{
zhang2024bam,
title={{BAM}! Just Like That: Simple and Efficient Parameter Upcycling for Mixture of Experts},
author={Qizhen Zhang and Nikolas Gritsch and Dwaraknath Gnaneshwar and Simon Guo and David Cairuz and Bharat Venkitesh and Jakob Nicolaus Foerster and Phil Blunsom and Sebastian Ruder and Ahmet {\"U}st{\"u}n and Acyr Locatelli},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BDrWQTrfyI}
} | The Mixture of Experts (MoE) framework has become a popular architecture for large language models due to its superior performance compared to dense models. However, training MoEs from scratch in a large-scale regime is prohibitively expensive. Previous work addresses this challenge by independently training multiple dense expert models and using them to initialize an MoE. In particular, state-of-the-art approaches initialize MoE layers using experts' feed-forward parameters while merging all other parameters, limiting the advantages of the specialized dense models when upcycling them as MoEs. We propose BAM (Branch-Attend-Mix), a simple yet effective improvement to MoE training. BAM makes full use of specialized dense models by not only using their feed-forward network (FFN) parameters to initialize the MoE layers but also by using their attention weights to initialize mixture-of-attention (MoA) layers. We explore two methods for upcycling MoA layers: 1) initializing separate attention experts from the dense models, including key, value, and query matrices; and 2) initializing only the query projections while sharing key-value pairs across all experts to facilitate efficient inference. Our experiments using seed models ranging from 590 million to 2 billion parameters show that our approach outperforms state-of-the-art approaches under the same data and compute budget in both perplexity and downstream task evaluations, confirming the effectiveness of BAM. | BAM! Just Like That: Simple and Efficient Parameter Upcycling for Mixture of Experts | [
"Qizhen Zhang",
"Nikolas Gritsch",
"Dwaraknath Gnaneshwar",
"Simon Guo",
"David Cairuz",
"Bharat Venkitesh",
"Jakob Nicolaus Foerster",
"Phil Blunsom",
"Sebastian Ruder",
"Ahmet Üstün",
"Acyr Locatelli"
] | NeurIPS.cc/2024/Conference | 2408.08274 | [
""
] | https://huggingface.co/papers/2408.08274 | 0 | 12 | 3 | 11 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=BCA9NMZkLS | @inproceedings{
samuel2024berts,
title={{BERT}s are Generative In-Context Learners},
author={David Samuel},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BCA9NMZkLS}
} | While in-context learning is commonly associated with causal language models, such as GPT, we demonstrate that this capability also 'emerges' in masked language models. Through an embarrassingly simple inference technique, we enable an existing masked model, DeBERTa, to perform generative tasks without additional training or architectural changes. Our evaluation reveals that the masked and causal language models behave very differently, as they clearly outperform each other on different categories of tasks. These complementary strengths suggest that the field's focus on causal models for in-context learning may be limiting – both architectures can develop these capabilities, but with distinct advantages; pointing toward promising hybrid approaches that combine the strengths of both objectives. | BERTs are Generative In-Context Learners | [
"David Samuel"
] | NeurIPS.cc/2024/Conference | 2406.04823 | [
"https://github.com/ltgoslo/bert-in-context"
] | https://huggingface.co/papers/2406.04823 | 0 | 0 | 0 | 1 | [
"ltg/deberta-xxlarge-fixed"
] | [] | [] | [
"ltg/deberta-xxlarge-fixed"
] | [] | [] | 1 | poster |
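The abstract above reports that a masked language model can be used generatively through a simple inference technique. The sketch below shows one generic way to do this, greedy left-to-right generation by repeatedly appending a mask slot and predicting the token that fills it; the checkpoint name is a placeholder, and this loop is not claimed to be the paper's exact procedure.

```python
# Generic sketch: greedy left-to-right generation with a masked language model by
# repeatedly appending a [MASK] slot and predicting the token that fills it.
# The checkpoint is a small placeholder (assumption); this is one plausible loop,
# not necessarily the inference technique used in the paper.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "bert-base-uncased"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
mlm = AutoModelForMaskedLM.from_pretrained(name).eval()

def generate(prompt: str, max_new_tokens: int = 15) -> str:
    ids = tok(prompt, return_tensors="pt")["input_ids"][0][:-1]  # drop trailing [SEP]
    for _ in range(max_new_tokens):
        tail = torch.tensor([tok.mask_token_id, tok.sep_token_id])
        batch = torch.cat([ids, tail]).unsqueeze(0)
        with torch.no_grad():
            logits = mlm(input_ids=batch).logits
        next_id = logits[0, -2].argmax().item()      # prediction at the [MASK] position
        ids = torch.cat([ids, torch.tensor([next_id])])
    return tok.decode(ids, skip_special_tokens=True)

print(generate("The capital of France is"))
```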
null | https://openreview.net/forum?id=BAmAFraxvf | @inproceedings{
tafasca2024toward,
title={Toward Semantic Gaze Target Detection},
author={Samy Tafasca and Anshul Gupta and Victor Bros and Jean-marc Odobez},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BAmAFraxvf}
} | From the onset of infanthood, humans naturally develop the ability to closely observe and interpret the visual gaze of others. This skill, known as gaze following, holds significance in developmental theory as it enables us to grasp another person’s mental state, emotions, intentions, and more. In computer vision, gaze following is defined as the prediction of the pixel coordinates where a person in the image is focusing their attention. Existing methods in this research area have predominantly centered on pinpointing the gaze target by predicting a gaze heatmap or gaze point. However, a notable drawback of this approach is its limited practical value in gaze applications, as mere localization may not fully capture our primary interest — understanding the underlying semantics, such as the nature of the gaze target, rather than just its 2D pixel location. To address this gap, we extend the gaze following task, and introduce a novel architecture that simultaneously predicts the localization and semantic label of the gaze target. We devise a pseudo-annotation pipeline for the GazeFollow dataset, propose a new benchmark, develop an experimental protocol and design a suitable baseline for comparison. Our method sets a new state-of-the-art on the main GazeFollow benchmark for localization and achieves competitive results in the recognition task on both datasets compared to the baseline, with 40% fewer parameters | Toward Semantic Gaze Target Detection | [
"Samy Tafasca",
"Anshul Gupta",
"Victor Bros",
"Jean-marc Odobez"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=BAjjINf0Oh | @inproceedings{
block2024oracleefficient,
title={Oracle-Efficient Differentially Private Learning with Public Data},
author={Adam Block and Mark Bun and Rathin Desai and Abhishek Shetty and Steven Wu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BAjjINf0Oh}
} | Due to statistical lower bounds on the learnability of many function classes under privacy constraints, there has been recent interest in leveraging public data to improve the performance of private learning algorithms. In this model, algorithms must always guarantee differential privacy with respect to the private samples while also ensuring learning guarantees when the private data distribution is sufficiently close to that of the public data. Previous work has demonstrated that when sufficient public, unlabelled data is available, private learning can be made statistically tractable, but the resulting algorithms have all been computationally inefficient. In this work, we present the first computationally efficient algorithms that provably leverage public data to learn privately whenever a function class is learnable non-privately, where our notion of computational efficiency is with respect to the number of calls to an optimization oracle for the function class. In addition to this general result, we provide specialized algorithms with improved sample complexities in the special cases when the function class is convex or when the task is binary classification. | Oracle-Efficient Differentially Private Learning with Public Data | [
"Adam Block",
"Mark Bun",
"Rathin Desai",
"Abhishek Shetty",
"Steven Wu"
] | NeurIPS.cc/2024/Conference | 2402.09483 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BAfKBkr8IP | @inproceedings{
yang2024rethinking,
title={Rethinking Fourier Transform from A Basis Functions Perspective for Long-term Time Series Forecasting},
author={Runze Yang and Longbing Cao and JIE YANG and li jianxun},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=BAfKBkr8IP}
} | The interaction between the Fourier transform and deep learning opens new avenues for long-term time series forecasting (LTSF). We propose to reconsider the Fourier transform from a basis-function perspective. Specifically, the real and imaginary parts of the frequency components can be viewed as the coefficients of cosine and sine basis functions at tiered frequency levels, respectively. We argue that existing Fourier-based methods do not involve basis functions and thus fail to interpret frequency coefficients precisely or to consider the time-frequency relationship sufficiently, leading to inconsistent starting cycles and inconsistent series lengths. Accordingly, a novel Fourier basis mapping (FBM) method addresses these issues by mixing time- and frequency-domain features through Fourier basis expansion. Differing from existing approaches, FBM (i) embeds the discrete Fourier transform with basis functions and (ii) can be plugged into various types of neural networks for better performance. FBM extracts explicit frequency features while preserving temporal characteristics, enabling the mapping network to capture time-frequency relationships. By incorporating our unique time-frequency features, the FBM variants can enhance networks of any type, including linear, multilayer-perceptron-based, transformer-based, and Fourier-based networks, achieving state-of-the-art LTSF results on diverse real-world datasets with just one or three fully connected layers. The code is available at: https://github.com/runze1223/Fourier-Basis-Mapping. | Rethinking Fourier Transform from A Basis Functions Perspective for Long-term Time Series Forecasting | [
"Runze Yang",
"Longbing Cao",
"JIE YANG",
"li jianxun"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
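The basis-function reading of the Fourier transform described above can be checked in a few lines: the real and imaginary parts of the rFFT are, up to scaling, the coefficients of cosine and sine basis functions, so the series is exactly a basis expansion. The test signal and window length below are synthetic assumptions.

```python
# The key observation behind Fourier basis mapping, in numpy: real/imaginary
# rFFT coefficients are (up to scaling) cosine/sine basis coefficients, so the
# series is exactly a basis expansion. The test signal is synthetic.
import numpy as np

T = 96                                    # look-back window length (assumption)
t = np.arange(T)
x = np.sin(2 * np.pi * 3 * t / T) + 0.5 * np.cos(2 * np.pi * 7 * t / T)

X = np.fft.rfft(x)                        # frequency components
freqs = np.arange(X.size)                 # tiered frequency levels k = 0 .. T // 2

# Reconstruct x as a sum of cosine and sine basis functions whose coefficients
# are the (scaled) real and imaginary parts of the rFFT.
basis_cos = np.cos(2 * np.pi * freqs[:, None] * t[None, :] / T)
basis_sin = np.sin(2 * np.pi * freqs[:, None] * t[None, :] / T)
scale = np.where((freqs == 0) | (freqs == T // 2), 1.0, 2.0) / T
x_hat = (scale * X.real) @ basis_cos - (scale * X.imag) @ basis_sin

print("max reconstruction error:", np.abs(x - x_hat).max())
```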
null | https://openreview.net/forum?id=B9qg3wo75g | @inproceedings{
nobis2024generative,
title={Generative Fractional Diffusion Models},
author={Gabriel Nobis and Maximilian Springenberg and Marco Aversa and Michael Detzel and Rembert Daems and Roderick Murray-Smith and Shinichi Nakajima and Sebastian Lapuschkin and Stefano Ermon and Tolga Birdal and Manfred Opper and Christoph Knochenhauer and Luis Oala and Wojciech Samek},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=B9qg3wo75g}
} | We introduce the first continuous-time score-based generative model that leverages fractional diffusion processes for its underlying dynamics. Although diffusion models have excelled at capturing data distributions, they still suffer from various limitations such as slow convergence, mode-collapse on imbalanced data, and lack of diversity. These issues are partially linked to the use of light-tailed Brownian motion (BM) with independent increments. In this paper, we replace BM with an approximation of its non-Markovian counterpart, fractional Brownian motion (fBM), characterized by correlated increments and Hurst index $H \in (0,1)$, where $H=0.5$ recovers the classical BM. To ensure tractable inference and learning, we employ a recently popularized Markov approximation of fBM (MA-fBM) and derive its reverse-time model, resulting in *generative fractional diffusion models* (GFDM). We characterize the forward dynamics using a continuous reparameterization trick and propose *augmented score matching* to efficiently learn the score function, which is partly known in closed form, at minimal added cost. The ability to drive our diffusion model via MA-fBM offers flexibility and control. $H \leq 0.5$ enters the regime of *rough paths* whereas $H>0.5$ regularizes diffusion paths and invokes long-term memory. The Markov approximation allows added control by varying the number of Markov processes linearly combined to approximate fBM. Our evaluations on real image datasets demonstrate that GFDM achieves greater pixel-wise diversity and enhanced image quality, as indicated by a lower FID, offering a promising alternative to traditional diffusion models | Generative Fractional Diffusion Models | [
"Gabriel Nobis",
"Maximilian Springenberg",
"Marco Aversa",
"Michael Detzel",
"Rembert Daems",
"Roderick Murray-Smith",
"Shinichi Nakajima",
"Sebastian Lapuschkin",
"Stefano Ermon",
"Tolga Birdal",
"Manfred Opper",
"Christoph Knochenhauer",
"Luis Oala",
"Wojciech Samek"
] | NeurIPS.cc/2024/Conference | 2310.17638 | [
"https://github.com/gabrielnobis/gfdm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=B9FPPdNmyk | @inproceedings{
zhang2024the,
title={The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection},
author={Qingyang Zhang and Qiuxuan Feng and Joey Tianyi Zhou and Yatao Bian and Qinghua Hu and Changqing Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=B9FPPdNmyk}
} | Out-of-distribution (OOD) detection is essential for model trustworthiness which aims to sensitively identify semantic OOD samples and robustly generalize for covariate-shifted OOD samples. However, we discover that the superior OOD detection performance of state-of-the-art methods is achieved by secretly sacrificing the OOD generalization ability. The classification accuracy frequently collapses catastrophically when even slight noise is encountered. Such a phenomenon violates the motivation of trustworthiness and significantly limits the model's deployment in the real world. What is the hidden reason behind such a limitation? In this work, we theoretically demystify the "\textit{sensitive-robust}" dilemma that lies in previous OOD detection methods. Consequently, a theory-inspired algorithm is induced to overcome such a dilemma. By decoupling the uncertainty learning objective from a Bayesian perspective, the conflict between OOD detection and OOD generalization is naturally harmonized and a dual-optimized performance could be expected. Empirical studies show that our method achieves superior performance on commonly used benchmarks. To the best of our knowledge, this work is the first principled OOD detection method that achieves state-of-the-art OOD detection performance without sacrificing OOD generalization ability. Our code is available at https://github.com/QingyangZhang/DUL. | The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection | [
"Qingyang Zhang",
"Qiuxuan Feng",
"Joey Tianyi Zhou",
"Yatao Bian",
"Qinghua Hu",
"Changqing Zhang"
] | NeurIPS.cc/2024/Conference | 2410.11576 | [
"https://github.com/qingyangzhang/dul"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=B7S4jJGlvl | @inproceedings{
grayeli2024symbolic,
title={Symbolic Regression with a Learned Concept Library},
author={Arya Grayeli and Atharva Sehgal and Omar Costilla Reyes and Miles Cranmer and Swarat Chaudhuri},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=B7S4jJGlvl}
} | We present a novel method for symbolic regression (SR), the task of searching for compact programmatic hypotheses that best explain a dataset. The problem is commonly solved using genetic algorithms; we show that we can enhance such methods by inducing a library of abstract textual concepts. Our algorithm, called LaSR,
uses zero-shot queries to a large language model (LLM) to discover and evolve concepts occurring in known high-performing hypotheses. We discover new hypotheses using a mix of standard evolutionary steps and LLM-guided steps (obtained through zero-shot LLM queries) conditioned on discovered concepts. Once discovered, hypotheses are used in a new round of concept abstraction and evolution. We validate LaSR on the Feynman equations, a popular SR benchmark,
as well as a set of synthetic tasks. On these benchmarks, LaSR substantially outperforms a variety of state-of-the-art SR approaches based on deep learning and evolutionary algorithms. Moreover, we show that LaSR can be used to discover a new and powerful scaling law for LLMs. | Symbolic Regression with a Learned Concept Library | [
"Arya Grayeli",
"Atharva Sehgal",
"Omar Costilla Reyes",
"Miles Cranmer",
"Swarat Chaudhuri"
] | NeurIPS.cc/2024/Conference | 2409.09359 | [
"https://github.com/trishullab/LibraryAugmentedSymbolicRegression.jl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=B74mb0tEY6 | @inproceedings{
baudry2024optimizing,
title={Optimizing the coalition gain in Online Auctions with Greedy Structured Bandits},
author={Dorian Baudry and Hugo Richard and Maria Cherifa and Vianney Perchet and Cl{\'e}ment Calauz{\`e}nes},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=B74mb0tEY6}
} | Motivated by online display advertising, this work considers repeated second-price auctions, where agents sample their value from an unknown distribution with cumulative distribution function $F$. In each auction $t$, a decision-maker bound by limited observations selects $n_t$ agents from a coalition of $N$ to compete for a prize with $p$ other agents, aiming to maximize the cumulative reward of the coalition across all auctions.
The problem is framed as an $N$-armed structured bandit, each number of players sent being an arm $n$, with expected reward $r(n)$ fully characterized by $F$ and $p+n$.
We present two algorithms, Local-Greedy (LG) and Greedy-Grid (GG), both achieving *constant* problem-dependent regret. This relies on three key ingredients: **1.** an estimator of $r(n)$ from feedback collected from any arm $k$, **2.** concentration bounds of these estimates for $k$ within an estimation neighborhood of $n$, and **3.** the unimodality property of $r$ under standard assumptions on $F$. Additionally, GG exhibits problem-independent guarantees on top of the best problem-dependent guarantees. However, by avoiding reliance on confidence intervals, LG practically outperforms GG, as well as standard unimodal bandit algorithms such as OSUB or multi-armed bandit algorithms. | Optimizing the coalition gain in Online Auctions with Greedy Structured Bandits | [
"Dorian Baudry",
"Hugo Richard",
"Maria Cherifa",
"Vianney Perchet",
"Clément Calauzènes"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=B5vQ7IQW7d | @inproceedings{
wang2024model,
title={Model Sensitivity Aware Continual Learning},
author={Zhenyi Wang and Heng Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=B5vQ7IQW7d}
} | Continual learning (CL) aims to adapt to non-stationary data distributions while retaining previously acquired knowledge. However, CL models typically face a trade-off between preserving old task knowledge and excelling in new task performance. Existing approaches often sacrifice one for the other. To overcome this limitation, orthogonal to existing approaches, we propose a novel perspective that views a CL model's ability to preserve old knowledge and perform well on new tasks as a matter of model sensitivity to parameter updates. \textit{Excessive} parameter sensitivity can lead to two drawbacks: (1) significant forgetting of previous knowledge; and (2) overfitting to new tasks. To reduce parameter sensitivity, we optimize the model's performance based on the parameter distribution, which achieves the worst-case CL performance within a distribution neighborhood. This innovative learning paradigm offers dual benefits: (1) reduced forgetting of old knowledge by mitigating drastic changes in model predictions under small parameter updates; and (2) enhanced new task performance by preventing overfitting to new tasks. Consequently, our method achieves superior ability in retaining old knowledge and achieving excellent new task performance simultaneously.
Importantly, our approach is compatible with existing CL methodologies, allowing seamless integration while delivering significant improvements in effectiveness, efficiency, and versatility with both theoretical and empirical support. | Model Sensitivity Aware Continual Learning | [
"Zhenyi Wang",
"Heng Huang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=B4q98aAZwt | @inproceedings{
kim2024geneticguided,
title={Genetic-guided {GF}lowNets for Sample Efficient Molecular Optimization},
author={Hyeonah Kim and Minsu Kim and Sanghyeok Choi and Jinkyoo Park},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=B4q98aAZwt}
} | The challenge of discovering new molecules with desired properties is crucial in domains like drug discovery and material design. Recent advances in deep learning-based generative methods have shown promise but face the issue of sample efficiency due to the computational expense of evaluating the reward function. This paper proposes a novel algorithm for sample-efficient molecular optimization by distilling a powerful genetic algorithm into a deep generative policy using GFlowNets training, an off-policy method for amortized inference. This approach enables the deep generative policy to learn from domain knowledge, which has been explicitly integrated into the genetic algorithm. Our method achieves state-of-the-art performance in the official molecular optimization benchmark, significantly outperforming previous methods. It also demonstrates effectiveness in designing inhibitors against SARS-CoV-2 with substantially fewer reward calls. | Genetic-guided GFlowNets for Sample Efficient Molecular Optimization | [
"Hyeonah Kim",
"Minsu Kim",
"Sanghyeok Choi",
"Jinkyoo Park"
] | NeurIPS.cc/2024/Conference | 2402.05961 | [
"https://github.com/hyeonahkimm/genetic_gfn"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=B4k2TecKT2 | @inproceedings{
zhang2024towards,
title={Towards Accurate and Fair Cognitive Diagnosis via Monotonic Data Augmentation},
author={Zheng Zhang and Wei Song and Qi Liu and Qingyang Mao and Yiyan Wang and Weibo Gao and Zhenya Huang and Shijin Wang and Enhong Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=B4k2TecKT2}
} | Intelligent education stands as a prominent application of machine learning. Within this domain, cognitive diagnosis (CD) is a key research focus that aims to diagnose students' proficiency levels in specific knowledge concepts. As a crucial task within the field of education, cognitive diagnosis encompasses two fundamental requirements: accuracy and fairness. Existing studies have achieved significant success by primarily utilizing observed historical logs of student-exercise interactions. However, real-world scenarios often present a challenge, where a substantial number of students engage with a limited number of exercises. This data sparsity issue can lead to both inaccurate and unfair diagnoses. To this end, we introduce a monotonic data augmentation framework, CMCD, to tackle the data sparsity issue and thereby achieve accurate and fair CD results. Specifically, CMCD integrates the monotonicity assumption, a fundamental educational principle in CD, to establish two constraints for data augmentation. These constraints are general and can be applied to the majority of CD backbones. Furthermore, we provide theoretical analysis to guarantee the accuracy and convergence speed of CMCD. Finally, extensive experiments on real-world datasets showcase the efficacy of our framework in addressing the data sparsity issue with accurate and fair CD results. | Towards Accurate and Fair Cognitive Diagnosis via Monotonic Data Augmentation | [
"Zheng Zhang",
"Wei Song",
"Qi Liu",
"Qingyang Mao",
"Yiyan Wang",
"Weibo Gao",
"Zhenya Huang",
"Shijin Wang",
"Enhong Chen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=B3rZZRALhk | @inproceedings{
berrada2024on,
title={On improved Conditioning Mechanisms and Pre-training Strategies for Diffusion Models},
author={Tariq Berrada and Pietro Astolfi and Melissa Hall and Reyhane Askari Hemmat and Yohann Benchetrit and Marton Havasi and Matthew J. Muckley and Karteek Alahari and Adriana Romero-Soriano and Jakob Verbeek and Michal Drozdzal},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=B3rZZRALhk}
} | Large-scale training of latent diffusion models (LDMs) has enabled unprecedented quality in image generation.
However, large-scale end-to-end training of these models is computationally costly, and hence most research focuses either on finetuning pretrained models or experiments at smaller scales.
In this work we aim to improve the training efficiency and performance of LDMs with the goal of scaling to larger datasets and higher resolutions.
We focus our study on two points that are critical for good performance and efficient training:
(i) the mechanisms used for semantic level (e.g., a text prompt, or class name) and low-level (crop size, random flip, etc.) conditioning of the model, and
(ii) pre-training strategies to transfer representations learned on smaller and lower-resolution datasets to larger ones.
The main contributions of our work are the following:
we present a systematic experimental study of these points,
we propose a novel conditioning mechanism that disentangles semantic and low-level conditioning,
we obtain state-of-the-art performance on CC12M for text-to-image at 512 resolution. | On improved Conditioning Mechanisms and Pre-training Strategies for Diffusion Models | [
"Tariq Berrada",
"Pietro Astolfi",
"Melissa Hall",
"Reyhane Askari Hemmat",
"Yohann Benchetrit",
"Marton Havasi",
"Matthew J. Muckley",
"Karteek Alahari",
"Adriana Romero-Soriano",
"Jakob Verbeek",
"Michal Drozdzal"
] | NeurIPS.cc/2024/Conference | 2411.03177 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=B2cTLakrhV | @inproceedings{
ban2024differentiable,
title={Differentiable Structure Learning with Partial Orders},
author={Taiyu Ban and Lyuzhou Chen and Xiangyu Wang and Xin Wang and Derui Lyu and Huanhuan Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=B2cTLakrhV}
} | Differentiable structure learning is a novel line of causal discovery research that transforms the combinatorial optimization of structural models into a continuous optimization problem. However, the field has lacked feasible methods to integrate partial order constraints, a critical form of prior information typically used in real-world scenarios, into the differentiable structure learning framework. The main difficulty lies in adapting these constraints, typically suited for the space of total orderings, to the continuous optimization context of structure learning in the graph space. To bridge this gap, this paper formalizes a set of equivalent constraints that map partial orders onto graph spaces and introduces a plug-and-play module for their efficient application. This module preserves the equivalent effect of partial order constraints in the graph space, backed by theoretical validations of correctness and completeness. It significantly enhances the quality of recovered structures while maintaining good efficiency, learning better structures using 90\% fewer samples than the data-based method on a real-world dataset. This result, together with a comprehensive evaluation on synthetic cases, demonstrates our method's ability to effectively improve differentiable structure learning with partial orders. | Differentiable Structure Learning with Partial Orders | [
"Taiyu Ban",
"Lyuzhou Chen",
"Xiangyu Wang",
"Xin Wang",
"Derui Lyu",
"Huanhuan Chen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=B29BlRe26Z | @inproceedings{
dahan2024slowcalsgd,
title={{SL}owcal{SGD} : Slow Query Points Improve Local-{SGD} for Stochastic Convex Optimization},
author={Tehila Dahan and Kfir Yehuda Levy},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=B29BlRe26Z}
} | We consider distributed learning scenarios where $M$ machines interact with a parameter server along several communication rounds in order to minimize a joint objective function.
Focusing on the heterogeneous case, where different machines may draw samples from different data-distributions, we design the first local update method that provably benefits over the two most prominent distributed baselines: namely Minibatch-SGD and Local-SGD.
Key to our approach is a slow querying technique that we customize to the distributed setting, which in turn enables a better mitigation of the bias caused by local updates. | SLowcalSGD : Slow Query Points Improve Local-SGD for Stochastic Convex Optimization | [
"Tehila Dahan",
"Kfir Yehuda Levy"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=B1vGiSgELw | @inproceedings{
hu2024matryoshka,
title={Matryoshka Query Transformer for Large Vision-Language Models},
author={Wenbo Hu and Zi-Yi Dou and Liunian Harold Li and Amita Kamath and Nanyun Peng and Kai-Wei Chang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=B1vGiSgELw}
} | Large Vision-Language Models (LVLMs) typically encode an image into a fixed number of visual tokens (e.g., 576) and process these tokens with a language model. Despite their strong performance, LVLMs face challenges in adapting to varying computational constraints. This raises the question: can we achieve flexibility in the number of visual tokens to suit different tasks and computational resources? We answer this with an emphatic yes. Inspired by Matryoshka Representation Learning, we introduce the Matryoshka Query Transformer (MQT), capable of encoding an image into $m$ visual tokens during inference, where $m$ can be any number up to a predefined maximum. This is achieved by employing a query transformer with $M$ latent query tokens to compress the visual embeddings. During each training step, we randomly select $m \leq M$ latent query tokens and train the model using only these first $m$ tokens, discarding the rest.
Combining MQT with LLaVA, we train a single model once, and flexibly and drastically reduce the number of inference-time visual tokens while maintaining similar or better performance compared to training independent models for each number of tokens.
Our model, MQT-LLaVA, matches LLaVA-1.5 performance across 11 benchmarks using a maximum of 256 tokens instead of LLaVA’s fixed 576. Reducing to 16 tokens (8x less TFLOPs) only sacrifices the performance by 2.4 points on MMBench. On certain tasks such as ScienceQA and MMMU, we can even go down to only 2 visual tokens with performance drops of just 3\% and 6\% each.
Our exploration of the trade-off between the accuracy and computational cost brought about by the number of visual tokens facilitates future research to achieve the best of both worlds. | Matryoshka Query Transformer for Large Vision-Language Models | [
"Wenbo Hu",
"Zi-Yi Dou",
"Liunian Harold Li",
"Amita Kamath",
"Nanyun Peng",
"Kai-Wei Chang"
] | NeurIPS.cc/2024/Conference | 2405.19315 | [
"https://github.com/gordonhu608/mqt-llava"
] | https://huggingface.co/papers/2405.19315 | 1 | 0 | 0 | 6 | [
"gordonhu/MQT-LLaVA-7b"
] | [] | [
"gordonhu/MQT-LLaVA"
] | [
"gordonhu/MQT-LLaVA-7b"
] | [] | [
"gordonhu/MQT-LLaVA"
] | 1 | poster |
null | https://openreview.net/forum?id=B1Iq1EOiVU | @inproceedings{
luo2024deformabletst,
title={Deformable{TST}: Transformer for Time Series Forecasting without Over-reliance on Patching},
author={Donghao Luo and Xue Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=B1Iq1EOiVU}
} | With the proposal of the patching technique in time series forecasting, Transformer-based models have achieved compelling performance and gained great interest from the time series community. But at the same time, we observe a new problem: the recent Transformer-based models are overly reliant on patching to achieve ideal performance, which limits their applicability to some forecasting tasks unsuitable for patching. In this paper, we intend to handle this emerging issue. Through diving into the relationship between patching and full attention (the core mechanism in Transformer-based models), we further find that the reason behind this issue is that full attention relies overly on the guidance of patching to focus on the important time points and learn non-trivial temporal representations. Based on this finding, we propose DeformableTST as an effective solution to this emerging issue. Specifically, we propose deformable attention, a sparse attention mechanism that can better focus on the important time points by itself, to get rid of the need for patching. We also adopt a hierarchical structure to alleviate the efficiency issue caused by the removal of patching. Experimentally, our DeformableTST achieves consistent state-of-the-art performance in a broader range of time series tasks, especially achieving promising performance in forecasting tasks unsuitable for patching, thereby successfully reducing the reliance on patching and broadening the applicability of Transformer-based models. Code is available at this repository: https://github.com/luodhhh/DeformableTST. | DeformableTST: Transformer for Time Series Forecasting without Over-reliance on Patching | [
"Donghao Luo",
"Xue Wang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=B1FOes6cyq | @inproceedings{
jin2024learning,
title={Learning from Teaching Regularization: Generalizable Correlations Should be Easy to Imitate},
author={Can Jin and Tong Che and Hongwu Peng and Yiyuan Li and Dimitris N. Metaxas and Marco Pavone},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=B1FOes6cyq}
} | Generalization remains a central challenge in machine learning. In this work, we propose *Learning from Teaching* (**LoT**), a novel regularization technique for deep neural networks to enhance generalization. Inspired by the human ability to capture concise and abstract patterns, we hypothesize that generalizable correlations are expected to be easier to imitate. LoT operationalizes this concept to improve the generalization of the main model with auxiliary student learners. The student learners are trained by the main model and, in turn, provide feedback to help the main model capture more generalizable and imitable correlations. Our experimental results across several domains, including Computer Vision, Natural Language Processing, and methodologies like Reinforcement Learning, demonstrate that the introduction of LoT brings significant benefits compared to training models on the original dataset. The results suggest the effectiveness and efficiency of LoT in identifying generalizable information at the right scales while discarding spurious data correlations, thus making LoT a valuable addition to current machine learning. Code is available at https://github.com/jincan333/LoT. | Learning from Teaching Regularization: Generalizable Correlations Should be Easy to Imitate | [
"Can Jin",
"Tong Che",
"Hongwu Peng",
"Yiyuan Li",
"Dimitris N. Metaxas",
"Marco Pavone"
] | NeurIPS.cc/2024/Conference | 2402.02769 | [
"https://github.com/jincan333/lot"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=B0OWOkMwhz | @inproceedings{
chen2024mvsplat,
title={{MVS}plat360: Feed-Forward 360 Scene Synthesis from Sparse Views},
author={Yuedong Chen and Chuanxia Zheng and Haofei Xu and Bohan Zhuang and Andrea Vedaldi and Tat-Jen Cham and Jianfei Cai},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=B0OWOkMwhz}
} | We introduce MVSplat360, a feed-forward approach for 360° novel view synthesis (NVS) of diverse real-world scenes, using only sparse observations. This setting is inherently ill-posed due to minimal overlap among input views and insufficient visual information provided, making it challenging for conventional methods to achieve high-quality results. Our MVSplat360 addresses this by effectively combining geometry-aware 3D reconstruction with temporally consistent video generation. Specifically, it refactors a feed-forward 3D Gaussian Splatting (3DGS) model to render features directly into the latent space of a pre-trained Stable Video Diffusion (SVD) model, where these features then act as pose and visual cues to guide the denoising process and produce photorealistic 3D-consistent views. Our model is end-to-end trainable and supports rendering arbitrary views with as few as 5 sparse input views. To evaluate MVSplat360's performance, we introduce a new benchmark using the challenging DL3DV-10K dataset, where MVSplat360 achieves superior visual quality compared to state-of-the-art methods on wide-sweeping or even 360° NVS tasks. Experiments on the existing benchmark RealEstate10K also confirm the effectiveness of our model. Readers are highly recommended to view the video results at [donydchen.github.io/mvsplat360](https://donydchen.github.io/mvsplat360). | MVSplat360: Feed-Forward 360 Scene Synthesis from Sparse Views | [
"Yuedong Chen",
"Chuanxia Zheng",
"Haofei Xu",
"Bohan Zhuang",
"Andrea Vedaldi",
"Tat-Jen Cham",
"Jianfei Cai"
] | NeurIPS.cc/2024/Conference | 2411.04924 | [
"https://github.com/donydchen/mvsplat360"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=AvWB40qXZh | @inproceedings{
cao2024neuma,
title={Neu{MA}: Neural Material Adaptor for Visual Grounding of Intrinsic Dynamics},
author={Junyi Cao and Shanyan Guan and Yanhao Ge and Wei Li and Xiaokang Yang and Chao Ma},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=AvWB40qXZh}
} | While humans effortlessly discern intrinsic dynamics and adapt to new scenarios, modern AI systems often struggle. Current methods for visual grounding of dynamics either use pure neural-network-based simulators (black box), which may violate physical laws, or traditional physical simulators (white box), which rely on expert-defined equations that may not fully capture actual dynamics. We propose the Neural Material Adaptor (NeuMA), which integrates existing physical laws with learned corrections, facilitating accurate learning of actual dynamics while maintaining the generalizability and interpretability of physical priors. Additionally, we propose Particle-GS, a particle-driven 3D Gaussian Splatting variant that bridges simulation and observed images, allowing back-propagate image gradients to optimize the simulator. Comprehensive experiments on various dynamics in terms of grounded particle accuracy, dynamic rendering quality, and generalization ability demonstrate that NeuMA can accurately capture intrinsic dynamics. Project Page: https://xjay18.github.io/projects/neuma.html. | NeuMA: Neural Material Adaptor for Visual Grounding of Intrinsic Dynamics | [
"Junyi Cao",
"Shanyan Guan",
"Yanhao Ge",
"Wei Li",
"Xiaokang Yang",
"Chao Ma"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=AvBuK8Ezrg | @inproceedings{
wei2024textitneuropath,
title={$\textit{NeuroPath}$: A Neural Pathway Transformer for Joining the Dots of Human Connectomes},
author={Ziquan Wei and Tingting Dan and Jiaqi Ding and Guorong Wu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=AvBuK8Ezrg}
} | Although modern imaging technologies allow us to study connectivity between two distinct brain regions $\textit{in-vivo}$, an in-depth understanding of how anatomical structure supports brain function and how spontaneous functional fluctuations give rise to remarkable cognition is still elusive. Meanwhile, tremendous efforts have been made in the realm of machine learning to establish the nonlinear mapping between neuroimaging data and phenotypic traits. However, the absence of neuroscience insight in the current approaches poses significant challenges in understanding cognitive behavior from transient neural activities.
To address this challenge, we put the spotlight on the coupling mechanism of structural connectivity (SC) and functional connectivity (FC) by formulating such a network neuroscience question into an expressive graph representation learning problem for high-order topology. Specifically, we introduce the concept of $\textit{topological detour}$ to characterize how a ubiquitous instance of FC (direct link) is supported by neural pathways (detour) physically wired by SC, which forms a cyclic loop of interaction between brain structure and function. In the cliché of machine learning, the multi-hop detour pathway underlying SC-FC coupling allows us to devise a novel multi-head self-attention mechanism within Transformer to capture multi-modal feature representation from paired graphs of SC and FC. Taken together, we propose a biologically-inspired deep model, coined as $\textit{NeuroPath}$, to find putative connectomic feature representations from the unprecedented amount of neuroimages, which can be plugged into various downstream applications such as task recognition and disease diagnosis.
We have evaluated $\textit{NeuroPath}$ on large-scale public datasets including Human Connectome Project (HCP) and UK Biobank (UKB) under different experiment settings of supervised and zero-shot learning, where the state-of-the-art performance by our $\textit{NeuroPath}$ indicates great potential in network neuroscience. | NeuroPath: A Neural Pathway Transformer for Joining the Dots of Human Connectomes | [
"Ziquan Wei",
"Tingting Dan",
"Jiaqi Ding",
"Guorong Wu"
] | NeurIPS.cc/2024/Conference | 2409.17510 | [
"https://github.com/Chrisa142857/neuro_detour"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |