| bibtex_url (null) | proceedings (string, 42–42) | bibtext (string, 197–848) | abstract (string, 303–3.45k) | title (string, 10–159) | authors (sequence, 1–34, ⌀) | id (44 classes) | arxiv_id (string, 0–10) | GitHub (sequence, 1–1) | paper_page (899 classes) | n_linked_authors (int64, -1–13) | upvotes (int64, -1–109) | num_comments (int64, -1–13) | n_authors (int64, -1–92) | Models (sequence, 0–100) | Datasets (sequence, 0–19) | Spaces (sequence, 0–100) | old_Models (sequence, 0–100) | old_Datasets (sequence, 0–19) | old_Spaces (sequence, 0–100) | paper_page_exists_pre_conf (int64, 0–1) | type (2 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=atDcnWqG5n | @inproceedings{
ahvonen2024logical,
title={Logical characterizations of recurrent graph neural networks with reals and floats},
author={Veeti Ahvonen and Damian Heiman and Antti Kuusisto and Carsten Lutz},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=atDcnWqG5n}
} | In pioneering work from 2019, Barceló and coauthors identified logics that precisely match the expressive power of constant iteration-depth graph neural networks (GNNs) relative to properties definable in first-order logic. In this article, we give exact logical characterizations of recurrent GNNs in two scenarios: (1) in the setting with floating-point numbers and (2) with reals. For floats, the formalism matching recurrent GNNs is a rule-based modal logic with counting, while for reals we use a suitable infinitary modal logic, also with counting. These results give exact matches between logics and GNNs in the recurrent setting without relativising to a background logic in either case, but using some natural assumptions about floating-point arithmetic. Applying our characterizations, we also prove that, relative to graph properties definable in monadic second-order logic (MSO), our infinitary and rule-based logics are equally expressive. This implies that recurrent GNNs with reals and floats have the same expressive power over MSO-definable properties and shows that, for such properties, recurrent GNNs with reals are also characterized by a (finitary!) rule-based modal logic. In the general case, in contrast, the expressive power with floats is weaker than with reals. In addition to logic-oriented results, we also characterize recurrent GNNs, with both reals and floats, via distributed automata, drawing links to distributed computing models. | Logical characterizations of recurrent graph neural networks with reals and floats | [
"Veeti Ahvonen",
"Damian Heiman",
"Antti Kuusisto",
"Carsten Lutz"
] | NeurIPS.cc/2024/Conference | 2405.14606 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=asYYSzL4N5 | @inproceedings{
xu2024ban,
title={{BAN}: Detecting Backdoors Activated by Neuron Noise},
author={xiaoyun xu and Zhuoran Liu and Stefanos Koffas and Shujian Yu and Stjepan Picek},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=asYYSzL4N5}
} | Backdoor attacks on deep learning represent a recent threat that has gained significant attention in the research community.
Backdoor defenses are mainly based on backdoor inversion, which has been shown to be generic, model-agnostic, and applicable to practical threat scenarios. State-of-the-art backdoor inversion recovers a mask in the feature space to locate prominent backdoor features, where benign and backdoor features can be disentangled. However, it suffers from high computational overhead, and we also find that it overly relies on prominent backdoor features that are highly distinguishable from benign features. To tackle these shortcomings, this paper improves backdoor feature inversion for backdoor detection by incorporating extra neuron activation information. In particular, we adversarially increase the loss of backdoored models with respect to weights to activate the backdoor effect, based on which we can easily differentiate backdoored and clean models. Experimental results demonstrate that our defense, BAN, is 1.37$\times$ (on CIFAR-10) and 5.11$\times$ (on ImageNet200) more efficient, with an average 9.99\% higher detection success rate than the state-of-the-art defense BTI DBF. Our code and trained models are publicly available at https://github.com/xiaoyunxxy/ban. | BAN: Detecting Backdoors Activated by Neuron Noise | [
"xiaoyun xu",
"Zhuoran Liu",
"Stefanos Koffas",
"Shujian Yu",
"Stjepan Picek"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=arHJlYiY2J | @inproceedings{
kuang2024collaborative,
title={Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control},
author={Zhengfei Kuang and Shengqu Cai and Hao He and Yinghao Xu and Hongsheng Li and Leonidas Guibas and Gordon Wetzstein},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=arHJlYiY2J}
} | Research on video generation has recently made tremendous progress, enabling high-quality videos to be generated from text prompts or images. Adding control to the video generation process is an important goal moving forward and recent approaches that condition video generation models on camera trajectories take an important step towards this goal. Yet, it remains challenging to generate a video of the same scene from multiple different camera trajectories. Solutions to this multi-video generation problem could enable large-scale 3D scene generation with editable camera trajectories, among other applications. We introduce collaborative video diffusion (CVD) as an important step towards this vision. The CVD framework includes a novel cross-video synchronization module that promotes consistency between corresponding frames of the same video rendered from different camera poses using an epipolar attention mechanism. Trained on top of a state-of-the-art camera-control module for video generation, CVD generates multiple videos rendered from different camera trajectories with significantly better consistency than baselines, as shown in extensive experiments. | Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control | [
"Zhengfei Kuang",
"Shengqu Cai",
"Hao He",
"Yinghao Xu",
"Hongsheng Li",
"Leonidas Guibas",
"Gordon Wetzstein"
] | NeurIPS.cc/2024/Conference | 2405.17414 | [
""
] | https://huggingface.co/papers/2405.17414 | 2 | 10 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=aq3I5B6GLG | @inproceedings{
wiltzer2024foundations,
title={Foundations of Multivariate Distributional Reinforcement Learning},
author={Harley Wiltzer and Jesse Farebrother and Arthur Gretton and Mark Rowland},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aq3I5B6GLG}
} | In reinforcement learning (RL), the consideration of multivariate reward signals has led to fundamental advancements in multi-objective decision-making, transfer learning, and representation learning. This work introduces the first oracle-free and computationally-tractable algorithms for provably convergent multivariate *distributional* dynamic programming and temporal difference learning. Our convergence rates match the familiar rates in the scalar reward setting, and additionally provide new insights into the fidelity of approximate return distribution representations as a function of the reward dimension. Surprisingly, when the reward dimension is larger than $1$, we show that standard analysis of categorical TD learning fails, which we resolve with a novel projection onto the space of mass-$1$ signed measures. Finally, with the aid of our technical results and simulations, we identify tradeoffs between distribution representations that influence the performance of multivariate distributional RL in practice. | Foundations of Multivariate Distributional Reinforcement Learning | [
"Harley Wiltzer",
"Jesse Farebrother",
"Arthur Gretton",
"Mark Rowland"
] | NeurIPS.cc/2024/Conference | 2409.00328 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=apPHMfE63y | @inproceedings{
buening2024strategic,
title={Strategic Linear Contextual Bandits},
author={Thomas Kleine Buening and Aadirupa Saha and Christos Dimitrakakis and Haifeng Xu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=apPHMfE63y}
} | Motivated by the phenomenon of strategic agents gaming a recommender system to maximize the number of times they are recommended to users, we study a strategic variant of the linear contextual bandit problem, where the arms can strategically misreport privately observed contexts to the learner. We treat the algorithm design problem as one of *mechanism design* under uncertainty and propose the Optimistic Grim Trigger Mechanism (OptGTM) that incentivizes the agents (i.e., arms) to report their contexts truthfully while simultaneously minimizing regret. We also show that failing to account for the strategic nature of the agents results in linear regret. However, a trade-off between mechanism design and regret minimization appears to be unavoidable. More broadly, this work aims to provide insight into the intersection of online learning and mechanism design. | Strategic Linear Contextual Bandits | [
"Thomas Kleine Buening",
"Aadirupa Saha",
"Christos Dimitrakakis",
"Haifeng Xu"
] | NeurIPS.cc/2024/Conference | 2406.00551 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aou5yrBqKy | @inproceedings{
zhao2024tabpedia,
title={TabPedia: Towards Comprehensive Visual Table Understanding with Concept Synergy},
author={Weichao Zhao and Hao Feng and Qi Liu and Jingqun Tang and Binghong Wu and Lei Liao and Shu Wei and Yongjie Ye and Hao Liu and Wengang Zhou and Houqiang Li and Can Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aou5yrBqKy}
} | Tables contain factual and quantitative data accompanied by various structures and contents that pose challenges for machine comprehension. Previous methods generally design task-specific architectures and objectives for individual tasks, resulting in modal isolation and intricate workflows. In this paper, we present a novel large vision-language model, TabPedia, equipped with a concept synergy mechanism. In this mechanism, all the involved diverse visual table understanding (VTU) tasks and multi-source visual embeddings are abstracted as concepts. This unified framework allows TabPedia to seamlessly integrate VTU tasks, such as table detection, table structure recognition, table querying, and table question answering, by leveraging the capabilities of large language models (LLMs). Moreover, the concept synergy mechanism enables table perception-related and comprehension-related tasks to work in harmony, as they can effectively leverage the needed clues from the corresponding source perception embeddings. Furthermore, to better evaluate the VTU task in real-world scenarios, we establish a new and comprehensive table VQA benchmark, ComTQA, featuring approximately 9,000 QA pairs. Extensive quantitative and qualitative experiments on both table perception and comprehension tasks, conducted across various public benchmarks, validate the effectiveness of our TabPedia. The superior performance further confirms the feasibility of using LLMs for understanding visual tables when all concepts work in synergy. The benchmark ComTQA has been open-sourced at https://huggingface.co/datasets/ByteDance/ComTQA. The source code and model also have been released at https://github.com/zhaowc-ustc/TabPedia. | TabPedia: Towards Comprehensive Visual Table Understanding with Concept Synergy | [
"Weichao Zhao",
"Hao Feng",
"Qi Liu",
"Jingqun Tang",
"Binghong Wu",
"Lei Liao",
"Shu Wei",
"Yongjie Ye",
"Hao Liu",
"Wengang Zhou",
"Houqiang Li",
"Can Huang"
] | NeurIPS.cc/2024/Conference | 2406.01326 | [
"https://github.com/zhaowc-ustc/tabpedia"
] | https://huggingface.co/papers/2406.01326 | 0 | 0 | 0 | 11 | [
"Zhaowc/TabPedia_v1.0"
] | [] | [] | [
"Zhaowc/TabPedia_v1.0"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=aon7bwYBiq | @inproceedings{
wei2024differentially,
title={Differentially Private Graph Diffusion with Applications in Personalized PageRanks},
author={Rongzhe Wei and Eli Chien and Pan Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aon7bwYBiq}
} | Graph diffusion, which iteratively propagates real-valued substances across the graph, is used in numerous graph/network-involved applications. However, releasing diffusion vectors may reveal sensitive linking information in the data, such as transaction information in financial network data. Moreover, protecting the privacy of graph data is challenging due to its interconnected nature.
This work proposes a novel graph diffusion framework with edge-level differential privacy guarantees by using noisy diffusion iterates.
The algorithm injects Laplace noise per diffusion iteration and adopts a degree-based thresholding function to mitigate the high sensitivity induced by low-degree nodes. Our privacy loss analysis is based on Privacy Amplification by Iteration (PABI), which, to the best of our knowledge, is the first effort that analyzes PABI with Laplace noise and provides relevant applications.
We also introduce a novel $\infty$-Wasserstein distance tracking method, which tightens the analysis of privacy leakage and makes PABI more applicable in practice.
We evaluate this framework by applying it to Personalized Pagerank computation for ranking tasks. Experiments on real-world network data demonstrate the superiority of our method under stringent privacy conditions. | Differentially Private Graph Diffusion with Applications in Personalized PageRanks | [
"Rongzhe Wei",
"Eli Chien",
"Pan Li"
] | NeurIPS.cc/2024/Conference | 2407.00077 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=anyZgGLQ6n | @inproceedings{
mao2024offline,
title={Offline Reinforcement Learning with {OOD} State Correction and {OOD} Action Suppression},
author={Yixiu Mao and Cheems Wang and Chen Chen and Yun Qu and Xiangyang Ji},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=anyZgGLQ6n}
} | In offline reinforcement learning (RL), addressing the out-of-distribution (OOD) action issue has been a focus, but we argue that there exists an OOD state issue that also impairs performance yet has been underexplored. Such an issue describes the scenario when the agent encounters states out of the offline dataset during the test phase, leading to uncontrolled behavior and performance degradation. To this end, we propose SCAS, a simple yet effective approach that unifies OOD state correction and OOD action suppression in offline RL. Technically, SCAS achieves value-aware OOD state correction, capable of correcting the agent from OOD states to high-value in-distribution states. Theoretical and empirical results show that SCAS also exhibits the effect of suppressing OOD actions. On standard offline RL benchmarks, SCAS achieves excellent performance without additional hyperparameter tuning. Moreover, benefiting from its OOD state correction feature, SCAS demonstrates enhanced robustness against environmental perturbations. | Offline Reinforcement Learning with OOD State Correction and OOD Action Suppression | [
"Yixiu Mao",
"Cheems Wang",
"Chen Chen",
"Yun Qu",
"Xiangyang Ji"
] | NeurIPS.cc/2024/Conference | 2410.19400 | [
"https://github.com/MAOYIXIU/SCAS"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=amJyuVqSaf | @inproceedings{
cabezas2024markovian,
title={Markovian Flow Matching: Accelerating {MCMC} with Continuous Normalizing Flows},
author={Alberto Cabezas and Louis Sharrock and Christopher Nemeth},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=amJyuVqSaf}
} | Continuous normalizing flows (CNFs) learn the probability path between a reference distribution and a target distribution by modeling the vector field generating said path using neural networks. Recently, Lipman et al. (2022) introduced a simple and inexpensive method for training CNFs in generative modeling, termed flow matching (FM). In this paper, we repurpose this method for probabilistic inference by incorporating Markovian sampling methods in evaluating the FM objective, and using the learned CNF to improve Monte Carlo sampling. Specifically, we propose an adaptive Markov chain Monte Carlo (MCMC) algorithm, which combines a local Markov transition kernel with a non-local, flow-informed transition kernel, defined using a CNF. This CNF is adapted on-the-fly using samples from the Markov chain, which are used to specify the probability path for the FM objective. Our method also includes an adaptive tempering mechanism that allows the discovery of multiple modes in the target distribution. Under mild assumptions, we establish convergence of our method to a local optimum of the FM objective. We then benchmark our approach on several synthetic and real-world examples, achieving similar performance to other state-of-the-art methods but often at a significantly lower computational cost. | Markovian Flow Matching: Accelerating MCMC with Continuous Normalizing Flows | [
"Alberto Cabezas",
"Louis Sharrock",
"Christopher Nemeth"
] | NeurIPS.cc/2024/Conference | 2405.14392 | [
"https://github.com/albcab/mfm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ahvOhPkkMx | @inproceedings{
chen2024zipper,
title={Zipper: Addressing Degeneracy in Algorithm-Agnostic Inference},
author={Geng Chen and Yinxu Jia and Guanghui Wang and Changliang Zou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ahvOhPkkMx}
} | The widespread use of black box prediction methods has sparked an increasing interest in algorithm/model-agnostic approaches for quantifying goodness-of-fit, with direct ties to specification testing, model selection, and variable importance assessment. A commonly used framework involves defining a predictiveness criterion, applying a cross-fitting procedure to estimate the predictiveness, and utilizing the difference in estimated predictiveness between two models as the test statistic. However, even after standardization, the test statistic typically fails to converge to a non-degenerate distribution under the null hypothesis of equal goodness, leading to what is known as the degeneracy issue. To address this degeneracy issue, we present a simple yet effective device, Zipper. It draws inspiration from the strategy of additional splitting of testing data, but encourages an overlap between two testing data splits in predictiveness evaluation. Zipper binds together the two overlapping splits using a slider parameter that controls the proportion of overlap. Our proposed test statistic follows an asymptotically normal distribution under the null hypothesis for any fixed slider value, guaranteeing valid size control while enhancing power by effective data reuse. Finite-sample experiments demonstrate that our procedure, with a simple choice of the slider, works well across a wide range of settings. | Zipper: Addressing Degeneracy in Algorithm-Agnostic Inference | [
"Geng Chen",
"Yinxu Jia",
"Guanghui Wang",
"Changliang Zou"
] | NeurIPS.cc/2024/Conference | 2306.16852 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=ag7piyoyut | @inproceedings{
dao2024incorporating,
title={Incorporating Surrogate Gradient Norm to Improve Offline Optimization Techniques},
author={Manh Cuong Dao and Phi Le Nguyen and Thao Nguyen Truong and Trong Nghia Hoang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ag7piyoyut}
} | Offline optimization has recently emerged as an increasingly popular approach to mitigate the prohibitively expensive cost of online experimentation. The key idea is to learn a surrogate of the black-box function that underlies the target experiment using a static (offline) dataset of its previous input-output queries. Such an approach is, however, fraught with an out-of-distribution issue where the learned surrogate becomes inaccurate outside the offline data regimes. To mitigate this, existing offline optimizers have proposed numerous conditioning techniques to prevent the learned surrogate from being too erratic. Nonetheless, such conditioning strategies are often specific to particular surrogate or search models, which might not generalize to a different model choice. This motivates us to develop a model-agnostic approach instead, which incorporates a notion of model sharpness into the training loss of the surrogate as a regularizer. Our approach is supported by a new theoretical analysis demonstrating that reducing surrogate sharpness on the offline dataset provably reduces its generalized sharpness on unseen data. Our analysis extends existing theories from bounding generalized prediction loss (on unseen data) with loss sharpness to bounding the worst-case generalized surrogate sharpness with its empirical estimate on training data, providing a new perspective on sharpness regularization. Our extensive experimentation on a diverse range of optimization tasks also shows that reducing surrogate sharpness often leads to significant improvement, yielding a noticeable performance boost of up to 9.6%. Our code is publicly available at https://github.com/cuong-dm/IGNITE. | Incorporating Surrogate Gradient Norm to Improve Offline Optimization Techniques | [
"Manh Cuong Dao",
"Phi Le Nguyen",
"Thao Nguyen Truong",
"Trong Nghia Hoang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=aetbfmCcwg | @inproceedings{
decruyenaere2024debiasing,
title={Debiasing Synthetic Data Generated by Deep Generative Models},
author={Alexander Decruyenaere and Heidelinde Dehaene and Paloma Rabaey and Johan Decruyenaere and Christiaan Polet and Thomas Demeester and Stijn Vansteelandt},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aetbfmCcwg}
} | While synthetic data hold great promise for privacy protection, their statistical analysis poses significant challenges that necessitate innovative solutions. The use of deep generative models (DGMs) for synthetic data generation is known to induce considerable bias and imprecision into synthetic data analyses, compromising their inferential utility as opposed to original data analyses. This bias and uncertainty can be substantial enough to impede statistical convergence rates, even in seemingly straightforward analyses like mean calculation. The standard errors of such estimators then exhibit slower shrinkage with sample size than the typical $1/\sqrt{n}$ rate. This complicates fundamental calculations like p-values and confidence intervals, with no straightforward remedy currently available. In response to these challenges, we propose a new strategy that targets synthetic data created by DGMs for specific data analyses. Drawing insights from debiased and targeted machine learning, our approach accounts for biases, enhances convergence rates, and facilitates the calculation of estimators with easily approximated large sample variances. We exemplify our proposal through a simulation study on toy data and two case studies on real-world data, highlighting the importance of tailoring DGMs for targeted data analysis. This debiasing strategy contributes to advancing the reliability and applicability of synthetic data in statistical inference. | Debiasing Synthetic Data Generated by Deep Generative Models | [
"Alexander Decruyenaere",
"Heidelinde Dehaene",
"Paloma Rabaey",
"Johan Decruyenaere",
"Christiaan Polet",
"Thomas Demeester",
"Stijn Vansteelandt"
] | NeurIPS.cc/2024/Conference | 2411.04216 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aeYNVtTo7o | @inproceedings{
yuan2024cell,
title={Cell ontology guided transcriptome foundation model},
author={Xinyu Yuan and Zhihao Zhan and Zuobai Zhang and Manqi Zhou and Jianan Zhao and Boyu Han and Yue Li and Jian Tang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aeYNVtTo7o}
} | Transcriptome foundation models (TFMs) hold great promise of deciphering the transcriptomic language that dictates diverse cell functions by self-supervised learning on large-scale single-cell gene expression data, and ultimately unraveling the complex mechanisms of human diseases. However, current TFMs treat cells as independent samples and ignore the taxonomic relationships between cell types, which are available in cell ontology graphs. We argue that effectively leveraging this ontology information during TFM pre-training can improve the learning of biologically meaningful gene co-expression patterns while preserving the TFM as a general-purpose foundation model for downstream zero-shot and fine-tuning tasks. To this end, we present **s**ingle **c**ell, **Cell**-**o**ntology guided TFM (scCello). We introduce a cell-type coherence loss and an ontology alignment loss, which are minimized along with the masked gene expression prediction loss during pre-training. These novel loss components guide scCello to learn the cell-type-specific representations and the structural relations between cell types from the cell ontology graph, respectively. We pre-trained scCello on 22 million cells from the CellxGene database, leveraging their cell-type labels mapped to the cell ontology graph from the Open Biological and Biomedical Ontology Foundry. Our TFM demonstrates competitive generalization and transferability performance over existing TFMs on biologically important tasks, including identifying novel cell types of unseen cells, prediction of cell-type-specific marker genes, and cancer drug responses. Source code and model
weights are available at https://github.com/DeepGraphLearning/scCello. | Cell ontology guided transcriptome foundation model | [
"Xinyu Yuan",
"Zhihao Zhan",
"Zuobai Zhang",
"Manqi Zhou",
"Jianan Zhao",
"Boyu Han",
"Yue Li",
"Jian Tang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=aeGSA8UoXF | @inproceedings{
yang2024symmetryinformed,
title={Symmetry-Informed Governing Equation Discovery},
author={Jianke Yang and Wang Rao and Nima Dehmamy and Robin Walters and Rose Yu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aeGSA8UoXF}
} | Despite the advancements in learning governing differential equations from observations of dynamical systems, data-driven methods are often unaware of fundamental physical laws, such as frame invariance. As a result, these algorithms may search an unnecessarily large space and discover less accurate or overly complex equations. In this paper, we propose to leverage symmetry in automated equation discovery to compress the equation search space and improve the accuracy and simplicity of the learned equations. Specifically, we derive equivariance constraints from the time-independent symmetries of ODEs. Depending on the types of symmetries, we develop a pipeline for incorporating symmetry constraints into various equation discovery algorithms, including sparse regression and genetic programming. In experiments across diverse dynamical systems, our approach demonstrates better robustness against noise and recovers governing equations with significantly higher probability than baselines without symmetry. | Symmetry-Informed Governing Equation Discovery | [
"Jianke Yang",
"Wang Rao",
"Nima Dehmamy",
"Robin Walters",
"Rose Yu"
] | NeurIPS.cc/2024/Conference | 2405.16756 | [
"https://github.com/rose-stl-lab/symmetry-ode-discovery"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=abuQMKDVkW | @inproceedings{
li2024sardetk,
title={{SARD}et-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale {SAR} Object Detection},
author={Yuxuan Li and Xiang Li and Weijie Li and Qibin Hou and Li Liu and Ming-Ming Cheng and Jian Yang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=abuQMKDVkW}
} | Synthetic Aperture Radar (SAR) object detection has gained significant attention recently due to its irreplaceable all-weather imaging capabilities. However, this research field suffers from both limited public datasets (mostly comprising <2K images with only mono-category objects) and inaccessible source code. To tackle these challenges, we establish a new benchmark dataset and an open-source method for large-scale SAR object detection. Our dataset, SARDet-100K, is a result of intense surveying, collecting, and standardizing 10 existing SAR detection datasets, providing a large-scale and diverse dataset for research purposes. To the best of our knowledge, SARDet-100K is the first COCO-level large-scale multi-class SAR object detection dataset ever created. With this high-quality dataset, we conducted comprehensive experiments and uncovered a crucial challenge in SAR object detection: the substantial disparities between pretraining on RGB datasets and finetuning on SAR datasets in terms of both data domain and model structure. To bridge these gaps, we propose a novel Multi-Stage with Filter Augmentation (MSFA) pretraining framework that tackles the problems from the perspective of data input, domain transition, and model migration. The proposed MSFA method significantly enhances the performance of SAR object detection models while demonstrating exceptional generalizability and flexibility across diverse models. This work aims to pave the way for further advancements in SAR object detection. The dataset and code are available at \url{https://github.com/zcablii/SARDet_100K}. | SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection | [
"Yuxuan Li",
"Xiang Li",
"Weijie Li",
"Qibin Hou",
"Li Liu",
"Ming-Ming Cheng",
"Jian Yang"
] | NeurIPS.cc/2024/Conference | 2403.06534 | [
"https://github.com/zcablii/sardet_100k"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=aaUVnpQvbZ | @inproceedings{
klein2024learning,
title={Learning Elastic Costs to Shape Monge Displacements},
author={Michal Klein and Aram-Alexandre Pooladian and Pierre Ablin and Eugene Ndiaye and Jonathan Niles-Weed and marco cuturi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aaUVnpQvbZ}
} | Given a source and a target probability measure, the Monge problem studies efficient ways to map the former onto the latter.
This efficiency is quantified by defining a *cost* function between source and target data.
Such a cost is often set by default in the machine learning literature to the squared-Euclidean distance, $\ell^2_2(\mathbf{x},\mathbf{y}):=\tfrac12\|\mathbf{x}-\mathbf{y}\|_2^2$.
The benefits of using *elastic* costs, defined using a regularizer $\tau$ as $c(\mathbf{x},\mathbf{y}):=\ell^2_2(\mathbf{x},\mathbf{y})+\tau(\mathbf{x}-\mathbf{y})$, were recently highlighted in (Cuturi et al. 2023). Such costs shape the *displacements* of Monge maps $T$, namely the difference between a source point and its image $T(\mathbf{x})-\mathbf{x}$, by giving them a structure that matches that of the proximal operator of $\tau$.
In this work, we make two important contributions to the study of elastic costs:*(i)* For any elastic cost, we propose a numerical method to compute Monge maps that are provably optimal. This provides a much-needed routine to create synthetic problems where the ground-truth OT map is known, by analogy to the Brenier theorem, which states that the gradient of any convex potential is always a valid Monge map for the $\ell_2^2$ cost; *(ii)* We propose a loss to *learn* the parameter $\theta$ of a parameterized regularizer $\tau_\theta$, and apply it in the case where $\tau_{A}({\bf z}):=\|A^\perp {\bf z}\|^2_2$. This regularizer promotes displacements that lie on a low-dimensional subspace of $\mathbb{R}^d$, spanned by the $p$ rows of $A\in\mathbb{R}^{p\times d}$. We illustrate the soundness of our procedure on synthetic data, generated using our first contribution, in which we show near-perfect recovery of $A$'s subspace using only samples. We demonstrate the applicability of this method by showing predictive improvements on single-cell data tasks. | Learning Elastic Costs to Shape Monge Displacements | [
"Michal Klein",
"Aram-Alexandre Pooladian",
"Pierre Ablin",
"Eugene Ndiaye",
"Jonathan Niles-Weed",
"marco cuturi"
] | NeurIPS.cc/2024/Conference | 2306.11895 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aYqTwcDlCG | @inproceedings{
duan2024learning,
title={Learning World Models for Unconstrained Goal Navigation},
author={Yuanlin Duan and Wensen Mao and He Zhu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aYqTwcDlCG}
} | Learning world models offers a promising avenue for goal-conditioned reinforcement learning with sparse rewards. By allowing agents to plan actions or exploratory goals without direct interaction with the environment, world models enhance exploration efficiency. The quality of a world model hinges on the richness of data stored in the agent's replay buffer, with expectations of reasonable generalization across the state space surrounding recorded trajectories. However, challenges arise in generalizing learned world models to state transitions backward along recorded trajectories or between states across different trajectories, hindering their ability to accurately model real-world dynamics. To address these challenges, we introduce a novel goal-directed exploration algorithm, MUN (short for "World Models for Unconstrained Goal Navigation"). This algorithm is capable of modeling state transitions between arbitrary subgoal states in the replay buffer, thereby facilitating the learning of policies to navigate between any "key" states. Experimental results demonstrate that MUN strengthens the reliability of world models and significantly improves the policy's capacity to generalize across new goal settings. | Learning World Models for Unconstrained Goal Navigation | [
"Yuanlin Duan",
"Wensen Mao",
"He Zhu"
] | NeurIPS.cc/2024/Conference | 2411.02446 | [
"https://github.com/RU-Automated-Reasoning-Group/MUN"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aYWtfsf3uP | @inproceedings{
lu2024distributionally,
title={Distributionally Robust Reinforcement Learning with Interactive Data Collection: Fundamental Hardness and Near-Optimal Algorithms},
author={Miao Lu and Han Zhong and Tong Zhang and Jose Blanchet},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aYWtfsf3uP}
} | The sim-to-real gap, which represents the disparity between training and testing environments, poses a significant challenge in reinforcement learning (RL). A promising approach to addressing this challenge is distributionally robust RL, often framed as a robust Markov decision process (RMDP). In this framework, the objective is to find a robust policy that achieves good performance under the worst-case scenario among all environments within a pre-specified uncertainty set centered around the training environment. Unlike previous work, which relies on a generative model or a pre-collected offline dataset enjoying good coverage of the deployment environment, we tackle robust RL via interactive data collection, where the learner interacts with the training environment only and refines the policy through trial and error. In this robust RL paradigm, two main challenges emerge: managing distributional robustness while striking a balance between exploration and exploitation during data collection. Initially, we establish that sample-efficient learning without additional assumptions is unattainable owing to the curse of support shift; i.e., the potential disjointedness of the distributional supports between the training and testing environments. To circumvent such a hardness result, we introduce the vanishing minimal value assumption to RMDPs with a total-variation (TV) distance robust set, postulating that the minimal value of the optimal robust value function is zero. We prove that such an assumption effectively eliminates the support shift issue for RMDPs with a TV distance robust set, and present an algorithm with a provable sample complexity guarantee. Our work makes the initial step to uncovering the inherent difficulty of robust RL via interactive data collection and sufficient conditions for designing a sample-efficient algorithm accompanied by sharp sample complexity analysis. 
| Distributionally Robust Reinforcement Learning with Interactive Data Collection: Fundamental Hardness and Near-Optimal Algorithms | [
"Miao Lu",
"Han Zhong",
"Tong Zhang",
"Jose Blanchet"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=aXYL24yhjN | @inproceedings{
plecko2024mind,
title={Mind the Gap: A Causal Perspective on Bias Amplification in Prediction \& Decision-Making},
author={Drago Plecko and Elias Bareinboim},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aXYL24yhjN}
} | As society increasingly relies on AI-based tools for decision-making in socially sensitive domains, investigating fairness and equity of such automated systems has become a critical field of inquiry. Most of the literature in fair machine learning focuses on defining and achieving fairness criteria in the context of prediction, while not explicitly focusing on how these predictions may be used later on in the pipeline. For instance, if commonly used criteria, such as independence or sufficiency, are satisfied for a prediction score $S$ used for binary classification, they need not be satisfied after an application of a simple thresholding operation on $S$ (as commonly used in practice).
In this paper, we take an important step to address this issue in numerous statistical and causal notions of fairness. We introduce the notion of a margin complement, which measures how much a prediction score $S$ changes due to a thresholding operation.
We then demonstrate that the marginal difference in the optimal 0/1 predictor $\widehat Y$ between groups, written $P(\hat y \mid x_1) - P(\hat y \mid x_0)$, can be causally decomposed into the influences of $X$ on the $L_2$-optimal prediction score $S$ and the influences of $X$ on the margin complement $M$, along different causal pathways (direct, indirect, spurious). We then show that under suitable causal assumptions, the influences of $X$ on the prediction score $S$ are equal to the influences of $X$ on the true outcome $Y$. This yields a new decomposition of the disparity in the predictor $\widehat Y$ that allows us to disentangle causal differences inherited from the true outcome $Y$ that exists in the real world vs. those coming from the optimization procedure itself. This observation highlights the need for more regulatory oversight due to the potential for bias amplification, and to address this issue we introduce new notions of weak and strong business necessity, together with an algorithm for assessing whether these notions are satisfied. We apply our method to three real-world datasets and derive new insights on bias amplification in prediction and decision-making. | Mind the Gap: A Causal Perspective on Bias Amplification in Prediction & Decision-Making | [
"Drago Plecko",
"Elias Bareinboim"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=aXS1pwMa8I | @inproceedings{
hu2024learning,
title={Learning 3D Equivariant Implicit Function with Patch-Level Pose-Invariant Representation},
author={Xin Hu and Xiaole Tang and Ruixuan Yu and Jian Sun},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aXS1pwMa8I}
} | Implicit neural representation gains popularity in modeling the continuous 3D surface for 3D representation and reconstruction. In this work, we are motivated by the fact that the local 3D patches repeatedly appear on 3D shapes/surfaces if the factor of poses is removed. Based on this observation, we propose the 3D patch-level equivariant implicit function (PEIF) based on the 3D patch-level pose-invariant representation, allowing us to reconstruct 3D surfaces by estimating equivariant displacement vector fields for query points. Specifically, our model is based on the pose-normalized query/patch pairs and enhanced by the proposed intrinsic patch geometry representation, modeling the intrinsic 3D patch geometry feature by learnable multi-head memory banks. Extensive experiments show that our model achieves state-of-the-art performance on multiple surface reconstruction datasets, and also exhibits better generalization to crossdataset shapes and robustness to arbitrary rotations. Our code will be available at https://github.com/mathXin112/PEIF.git. | Learning 3D Equivariant Implicit Function with Patch-Level Pose-Invariant Representation | [
"Xin Hu",
"Xiaole Tang",
"Ruixuan Yu",
"Jian Sun"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=aXNZG82IzV | @inproceedings{
lyu2024cnca,
title={{CNCA}: Toward Customizable and Natural Generation of Adversarial Camouflage for Vehicle Detectors},
author={Linye Lyu and Jiawei Zhou and Daojing He and YU LI},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aXNZG82IzV}
} | Prior works on physical adversarial camouflage against vehicle detectors mainly focus on the effectiveness and robustness of the attack. The current most successful methods optimize 3D vehicle texture at a pixel level. However, this results in conspicuous and attention-grabbing patterns in the generated camouflage, which humans can easily identify. To address this issue, we propose a Customizable and Natural Camouflage Attack (CNCA) method by leveraging an off-the-shelf pre-trained diffusion model. By sampling the optimal texture image from the diffusion model with a user-specific text prompt, our method can generate natural and customizable adversarial camouflage while maintaining high attack performance. With extensive experiments on the digital and physical worlds and user studies, the results demonstrate that our proposed method can generate significantly more natural-looking camouflage than the state-of-the-art baselines while achieving competitive attack performance. | CNCA: Toward Customizable and Natural Generation of Adversarial Camouflage for Vehicle Detectors | [
"Linye Lyu",
"Jiawei Zhou",
"Daojing He",
"YU LI"
] | NeurIPS.cc/2024/Conference | 2409.17963 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aXApeuAYkg | @inproceedings{
lu2024casslr,
title={{CA}-{SSLR}: Condition-Aware Self-Supervised Learning Representation for Generalized Speech Processing},
author={Yen-Ju Lu and Jing Liu and Thomas Thebaud and Laureano Moro-Velazquez and Ariya Rastrow and Najim Dehak and Jesus Villalba},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aXApeuAYkg}
} | We introduce Condition-Aware Self-Supervised Learning Representation (CA-SSLR), a generalist conditioning model broadly applicable to various speech-processing tasks. Compared to standard fine-tuning methods that optimize for downstream models, CA-SSLR integrates language and speaker embeddings from earlier layers, making the SSL model aware of the current language and speaker context.
This approach reduces the reliance on the input audio features while preserving the integrity of the base SSLR. CA-SSLR improves the model’s capabilities and demonstrates its generality on unseen tasks with minimal task-specific tuning. Our method employs linear modulation to dynamically adjust internal representations, enabling fine-grained adaptability without significantly altering the original model behavior. Experiments show that CA-SSLR reduces the number of trainable parameters, mitigates overfitting, and excels in under-resourced and unseen tasks. Specifically, CA-SSLR achieves a 10\% relative reduction in LID errors, a 37\% improvement in ASR CER on the ML-SUPERB benchmark, and a 27\% decrease in SV EER on VoxCeleb-1, demonstrating its effectiveness. | CA-SSLR: Condition-Aware Self-Supervised Learning Representation for Generalized Speech Processing | [
"Yen-Ju Lu",
"Jing Liu",
"Thomas Thebaud",
"Laureano Moro-Velazquez",
"Ariya Rastrow",
"Najim Dehak",
"Jesus Villalba"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=aX9z2eT6ul | @inproceedings{
jung2024unified,
title={Unified Covariate Adjustment for Causal Inference},
author={Yonghan Jung and Jin Tian and Elias Bareinboim},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aX9z2eT6ul}
} | Causal effect identification and estimation are two crucial tasks in causal inference. Although causal effect identification has been theoretically resolved, many existing estimators only address a subset of scenarios, known as the sequential back-door adjustment (SBD) (Pearl and Robins, 1995) or g-formula (Robins, 1986). Recent efforts for developing general-purpose estimators with broader coverage, incorporating the front-door adjustment (FD) (Pearl, 2000) and more, lack scalability due to the high computational cost of summing over high-dimensional variables. In this paper, we introduce a novel approach that achieves broad coverage of causal estimands beyond the SBD, incorporating various sum-product functionals like the FD, while maintaining scalability -- estimated in polynomial time relative to the number of variables and samples. Specifically, we present the class of Unified Covariate Adjustment (UCA), for which a scalable and doubly robust estimator is developed.
In particular, we illustrate the expressiveness of UCA for a wide spectrum of causal estimands (e.g., SBD, FD, and more) in causal inference. We then develop an estimator that exhibits computational efficiency and double robustness. The scalability and robustness of the proposed framework are verified through simulations.
"Yonghan Jung",
"Jin Tian",
"Elias Bareinboim"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=aVh9KRZdRk | @inproceedings{
he2024learning,
title={Learning to grok: Emergence of in-context learning and skill composition in modular arithmetic tasks},
author={Tianyu He and Darshil Doshi and Aritra Das and Andrey Gromov},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aVh9KRZdRk}
} | Large language models can solve tasks that were not present in the training set. This capability is believed to be due to in-context learning and skill composition. In this work, we study the emergence of in-context learning and skill composition in a collection of modular arithmetic tasks. Specifically, we consider a finite collection of linear modular functions $z = a x + b y \text{ mod } p$ labeled by the vector $(a, b) \in \mathbb{Z}_p^2$. We use some of these tasks for pre-training and the rest for out-of-distribution testing. We empirically show that a GPT-style transformer exhibits a transition from in-distribution to out-of-distribution generalization as the number of pre-training tasks increases. We find that the smallest model capable of out-of-distribution generalization requires two transformer blocks, while for deeper models, the out-of-distribution generalization phase is *transient*, necessitating early stopping. Finally, we perform an interpretability study of the pre-trained models, revealing highly structured representations in both attention heads and MLPs; and discuss the learned algorithms. Notably, we find an algorithmic shift in deeper models, as we go from few to many in-context examples. | Learning to grok: Emergence of in-context learning and skill composition in modular arithmetic tasks | [
"Tianyu He",
"Darshil Doshi",
"Aritra Das",
"Andrey Gromov"
] | NeurIPS.cc/2024/Conference | 2406.02550 | [
"https://github.com/ablghtianyi/ICL_Modular_Arithmetic"
] | https://huggingface.co/papers/2406.02550 | 1 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=aVSxwicpAk | @inproceedings{
paquette2024,
title={4+3 Phases of Compute-Optimal Neural Scaling Laws},
author={Elliot Paquette and Courtney Paquette and Lechao Xiao and Jeffrey Pennington},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aVSxwicpAk}
} | We consider the solvable neural scaling model with three parameters: data complexity, target complexity, and model-parameter-count. We use this neural scaling model to derive new predictions about the compute-limited, infinite-data scaling law regime. To train the neural scaling model, we run one-pass stochastic gradient descent on a mean-squared loss. We derive a representation of the loss curves which holds over all iteration counts and improves in accuracy as the model parameter count grows. We then analyze the compute-optimal model-parameter-count, and identify 4 phases (+3 subphases) in the data-complexity/target-complexity phase-plane. The phase boundaries are determined by the relative importance of model capacity, optimizer noise, and embedding of the features. We furthermore derive, with mathematical proof and extensive numerical evidence, the scaling-law exponents in all of these phases, in particular computing the optimal model-parameter-count as a function of floating point operation budget. We include a colab notebook https://tinyurl.com/2saj6bkj, nanoChinchilla, that reproduces some key results of the paper. | 4+3 Phases of Compute-Optimal Neural Scaling Laws | [
"Elliot Paquette",
"Courtney Paquette",
"Lechao Xiao",
"Jeffrey Pennington"
] | NeurIPS.cc/2024/Conference | 2405.15074 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=aVK4JFpegy | @inproceedings{
vafa2024evaluating,
title={Evaluating the World Model Implicit in a Generative Model},
author={Keyon Vafa and Justin Y. Chen and Ashesh Rambachan and Jon Kleinberg and Sendhil Mullainathan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aVK4JFpegy}
} | Recent work suggests that large language models may implicitly learn world models. How should we assess this possibility? We formalize this question for the case where the underlying reality is governed by a deterministic finite automaton. This includes problems as diverse as simple logical reasoning, geographic navigation, game-playing, and chemistry. We propose new evaluation metrics for world model recovery inspired by the classic Myhill-Nerode theorem from language theory. We illustrate their utility in three domains: game playing, logic puzzles, and navigation. In all domains, the generative models we consider do well on existing diagnostics for assessing world models, but our evaluation metrics reveal their world models to be far less coherent than they appear. Such incoherence creates fragility: using a generative model to solve related but subtly different tasks can lead to failures. Building generative models that meaningfully capture the underlying logic of the domains they model would be immensely valuable; our results suggest new ways to assess how close a given model is to that goal. | Evaluating the World Model Implicit in a Generative Model | [
"Keyon Vafa",
"Justin Y. Chen",
"Ashesh Rambachan",
"Jon Kleinberg",
"Sendhil Mullainathan"
] | NeurIPS.cc/2024/Conference | 2406.03689 | [
"https://github.com/keyonvafa/world-model-evaluation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=aUHSwmHRVb | @inproceedings{
klug2024motionttt,
title={Motion{TTT}: 2D Test-Time-Training Motion Estimation for 3D Motion Corrected {MRI}},
author={Tobit Klug and Kun Wang and Stefan Ruschke and Reinhard Heckel},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aUHSwmHRVb}
} | A major challenge of the long measurement times in magnetic resonance imaging (MRI), an important medical imaging technology, is that patients may move during data acquisition. This leads to severe motion artifacts in the reconstructed images and volumes. In this paper, we propose MotionTTT, a deep learning-based test-time-training (TTT) method for accurate motion estimation. The key idea is that a neural network trained for motion-free reconstruction has a small loss if there is no motion, thus optimizing over motion parameters passed through the reconstruction network enables accurate estimation of motion. The estimated motion parameters enable correcting for the motion and reconstructing accurate motion-corrected images. Our method uses 2D reconstruction networks to estimate rigid motion in 3D, and constitutes the first deep learning based method for 3D rigid motion estimation towards 3D-motion-corrected MRI. We show that our method can provably reconstruct motion parameters for a simple signal and neural network model. We demonstrate the effectiveness of our method for both retrospectively simulated motion and prospectively collected real motion-corrupted data. Code is available at \url{https://github.com/MLI-lab/MRI_MotionTTT}. | MotionTTT: 2D Test-Time-Training Motion Estimation for 3D Motion Corrected MRI | [
"Tobit Klug",
"Kun Wang",
"Stefan Ruschke",
"Reinhard Heckel"
] | NeurIPS.cc/2024/Conference | 2409.09370 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aTNT3FuVBG | @inproceedings{
khodak2024suremap,
title={SureMap: Simultaneous mean estimation for single-task and multi-task disaggregated evaluation},
author={Mikhail Khodak and Lester Mackey and Alexandra Chouldechova and Miroslav Dud{\'\i}k},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aTNT3FuVBG}
} | Disaggregated evaluation—estimation of performance of a machine learning model on different subpopulations—is a core task when assessing performance and group-fairness of AI systems.
A key challenge is that evaluation data is scarce, and subpopulations arising from intersections of attributes (e.g., race, sex, age) are often tiny.
Today, it is common for multiple clients to procure the same AI model from a model developer, and the task of disaggregated evaluation is faced by each client individually. This gives rise to what we call the *multi-task disaggregated evaluation problem*, wherein multiple clients seek to conduct a disaggregated evaluation of a given model in their own data setting (task). In this work we develop a disaggregated evaluation method called **SureMap** that has high estimation accuracy for both multi-task *and* single-task disaggregated evaluations of blackbox models. SureMap's efficiency gains come from
(1) transforming the problem into structured simultaneous Gaussian mean estimation and (2) incorporating external data, e.g., from the AI system creator or from their other clients. Our method combines *maximum a posteriori* (MAP) estimation using a well-chosen prior with cross-validation-free tuning via Stein's unbiased risk estimate (SURE).
We evaluate SureMap on disaggregated evaluation tasks in multiple domains, observing significant accuracy improvements over several strong competitors. | SureMap: Simultaneous mean estimation for single-task and multi-task disaggregated evaluation | [
"Mikhail Khodak",
"Lester Mackey",
"Alexandra Chouldechova",
"Miroslav Dudík"
] | NeurIPS.cc/2024/Conference | 2411.09730 | [
"https://github.com/mkhodak/SureMap"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aSkckaNxnO | @inproceedings{
xiao2024enhancing,
title={Enhancing Multiple Dimensions of Trustworthiness in {LLM}s via Sparse Activation Control},
author={Yuxin Xiao and Chaoqun Wan and Yonggang Zhang and Wenxiao Wang and Binbin Lin and Xiaofei He and Xu Shen and Jieping Ye},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aSkckaNxnO}
} | As the development and application of Large Language Models (LLMs) continue to advance rapidly, enhancing their trustworthiness and aligning them with human preferences has become a critical area of research. Traditional methods rely heavily on extensive data for Reinforcement Learning from Human Feedback (RLHF), but representation engineering offers a new, training-free approach. This technique leverages semantic features to control the representation of LLM's intermediate hidden states, enabling the model to meet specific requirements such as increased honesty or heightened safety awareness. However, a significant challenge arises when attempting to fulfill multiple requirements simultaneously. It proves difficult to encode various semantic contents, like honesty and safety, into a singular semantic feature, restricting its practicality.
In this work, we address this challenge through Sparse Activation Control. By delving into the intrinsic mechanisms of LLMs, we manage to identify and pinpoint modules that are closely related to specific tasks within the model, i.e., attention heads. These heads display sparse characteristics that allow for near-independent control over different tasks. Our experiments, conducted on the open-source Llama series models, have yielded encouraging results. The models were able to align with human preferences on issues of safety, factualness, and bias concurrently.
"Yuxin Xiao",
"Chaoqun Wan",
"Yonggang Zhang",
"Wenxiao Wang",
"Binbin Lin",
"Xiaofei He",
"Xu Shen",
"Jieping Ye"
] | NeurIPS.cc/2024/Conference | 2411.02461 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aRokfUfIQs | @inproceedings{
taraday2024sequential,
title={Sequential Signal Mixing Aggregation for Message Passing Graph Neural Networks},
author={Mitchell Keren Taraday and Almog David and Chaim Baskin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aRokfUfIQs}
} | Message Passing Graph Neural Networks (MPGNNs) have emerged as the preferred method for modeling complex interactions across diverse graph entities. While the theory of such models is well understood, their aggregation module has not received sufficient attention. Sum-based aggregators have solid theoretical foundations regarding their separation capabilities. However, practitioners often prefer using more complex aggregations and mixtures of diverse aggregations. In this work, we unveil a possible explanation for this gap. We claim that sum-based aggregators fail to "mix" features belonging to distinct neighbors, preventing them from succeeding at downstream tasks.
To this end, we introduce Sequential Signal Mixing Aggregation (SSMA), a novel plug-and-play aggregation for MPGNNs. SSMA treats the neighbor features as 2D discrete signals and sequentially convolves them, inherently enhancing the ability to mix features attributed to distinct neighbors. By performing extensive experiments, we show that when combining SSMA with well-established MPGNN architectures, we achieve substantial performance gains across various benchmarks, achieving new state-of-the-art results in many settings.
We published our code at https://almogdavid.github.io/SSMA/. | Sequential Signal Mixing Aggregation for Message Passing Graph Neural Networks | [
"Mitchell Keren Taraday",
"Almog David",
"Chaim Baskin"
] | NeurIPS.cc/2024/Conference | 2409.19414 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aRhxruC2bi | @inproceedings{
shin2024towards,
title={Towards Open-Vocabulary Semantic Segmentation Without Semantic Labels},
author={Heeseong Shin and Chaehyun Kim and Sunghwan Hong and Seokju Cho and Anurag Arnab and Paul Hongsuck Seo and Seungryong Kim},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aRhxruC2bi}
} | Large-scale vision-language models like CLIP have demonstrated impressive open-vocabulary capabilities for image-level tasks, excelling in recognizing what objects are present. However, they struggle with pixel-level recognition tasks like semantic segmentation, which require understanding where the objects are located. In this work, we propose a novel method, MCLIP, to adapt the CLIP image encoder for pixel-level understanding by guiding the model on where, which is achieved using unlabeled images and masks generated from vision foundation models such as SAM and DINO. To address the challenges of leveraging masks without semantic labels, we devise an online clustering algorithm using learnable class names to acquire general semantic concepts. MCLIP shows significant performance improvements over CLIP and competitive results compared to caption-supervised methods in open-vocabulary semantic segmentation. | Towards Open-Vocabulary Semantic Segmentation Without Semantic Labels | [
"Heeseong Shin",
"Chaehyun Kim",
"Sunghwan Hong",
"Seokju Cho",
"Anurag Arnab",
"Paul Hongsuck Seo",
"Seungryong Kim"
] | NeurIPS.cc/2024/Conference | 2409.19846 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aR9JvkOGjM | @inproceedings{
wan2024improved,
title={Improved Regret for Bandit Convex Optimization with Delayed Feedback},
author={Yuanyu Wan and Chang Yao and Mingli Song and Lijun Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aR9JvkOGjM}
} | We investigate bandit convex optimization (BCO) with delayed feedback, where only the loss value of the action is revealed under an arbitrary delay. Let $n,T,\bar{d}$ denote the dimensionality, time horizon, and average delay, respectively. Previous studies have achieved an $O(\sqrt{n}T^{3/4}+(n\bar{d})^{1/3}T^{2/3})$ regret bound for this problem, whose delay-independent part matches the regret of the classical non-delayed bandit gradient descent algorithm. However, there is a large gap between its delay-dependent part, i.e., $O((n\bar{d})^{1/3}T^{2/3})$, and an existing $\Omega(\sqrt{\bar{d}T})$ lower bound. In this paper, we illustrate that this gap can be filled in the worst case, where $\bar{d}$ is very close to the maximum delay $d$. Specifically, we first develop a novel algorithm, and prove that it enjoys a regret bound of $O(\sqrt{n}T^{3/4}+\sqrt{dT})$ in general. Compared with the previous result, our regret bound is better for $d=O((n\bar{d})^{2/3}T^{1/3})$, and the delay-dependent part is tight in the worst case. The primary idea is to decouple the joint effect of the delays and the bandit feedback on the regret by carefully incorporating the delayed bandit feedback with a blocking update mechanism. Furthermore, we show that the proposed algorithm can improve the regret bound to $O((nT)^{2/3}\log^{1/3}T+d\log T)$ for strongly convex functions. Finally, if the action sets are unconstrained, we demonstrate that it can be simply extended to achieve an $O(n\sqrt{T\log T}+d\log T)$ regret bound for strongly convex and smooth functions. | Improved Regret for Bandit Convex Optimization with Delayed Feedback | [
"Yuanyu Wan",
"Chang Yao",
"Mingli Song",
"Lijun Zhang"
] | NeurIPS.cc/2024/Conference | 2402.09152 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aQv5AbN1wF | @inproceedings{
vankadara2024on,
title={On Feature Learning in Structured State Space Models},
author={Leena Chennuru Vankadara and Jin Xu and Moritz Haas and Volkan Cevher},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aQv5AbN1wF}
} | This paper studies the scaling behavior of state-space models (SSMs) and their structured variants, such as Mamba, that have recently arisen in popularity as alternatives to transformer-based neural network architectures. Specifically, we focus on the capability of SSMs to learn features as their network width approaches infinity. Our findings reveal that established scaling rules, such as the Maximal Update Parameterization, fail to support feature learning as these models cannot be represented in the form of Tensor Programs. Additionally, we demonstrate that spectral scaling conditions, shown to be effective for feature learning in a host of other architectures, do not hold the same implications for SSMs. Through a detailed signal propagation analysis in SSMs, both forward and backward, we identify the appropriate scaling necessary for non-trivial feature evolution in the infinite-width limit. Our proposed scaling shows behavior akin to the Maximal Update Parameterization, such as improved stability, better generalization, and transferability of optimal hyper-parameters from small to large scale SSMs. | On Feature Learning in Structured State Space Models | [
"Leena Chennuru Vankadara",
"Jin Xu",
"Moritz Haas",
"Volkan Cevher"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=aNTnHBkw4T | @inproceedings{
aithal2024understanding,
title={Understanding Hallucinations in Diffusion Models through Mode Interpolation},
author={Sumukh K Aithal and Pratyush Maini and Zachary Chase Lipton and J Zico Kolter},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aNTnHBkw4T}
} | Colloquially speaking, image generation models based upon diffusion processes are frequently said to exhibit ''hallucinations'': samples that could never occur in the training data. But where do such hallucinations come from? In this paper, we study a particular failure mode in diffusion models, which we term ***mode interpolation***. Specifically, we find that diffusion models smoothly ``interpolate'' between nearby data modes in the training set, to generate samples that are completely outside the support of the original training distribution; this phenomenon leads diffusion models to generate artifacts that never existed in real data (i.e., hallucinations). We systematically study the reasons for, and the manifestation of, this phenomenon. Through experiments on 1D and 2D Gaussians, we show how a discontinuous loss landscape in the diffusion model's decoder leads to a region where any smooth approximation will cause such hallucinations. Through experiments on artificial datasets with various shapes, we show how hallucination leads to the generation of combinations of shapes that never existed. We extend the validity of mode interpolation in real-world datasets by explaining the unexpected generation of images with additional or missing fingers similar to those produced by popular text-to-image generative models. Finally, we show that diffusion models in fact ***know*** when they go out of support and hallucinate. This is captured by the high variance in the trajectory of the generated sample during the final few steps of the backward sampling process. Using a simple metric to capture this variance, we can remove over 95\% of hallucinations at generation time. We conclude our exploration by showing the implications of such hallucination (and its removal) on the collapse (and stabilization) of recursive training on synthetic data with experiments on datasets like MNIST. | Understanding Hallucinations in Diffusion Models through Mode Interpolation | [
"Sumukh K Aithal",
"Pratyush Maini",
"Zachary Chase Lipton",
"J Zico Kolter"
] | NeurIPS.cc/2024/Conference | 2406.09358 | [
"https://github.com/locuslab/diffusion-model-hallucination"
] | https://huggingface.co/papers/2406.09358 | 3 | 4 | 1 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=aNQWRHyh15 | @inproceedings{
kook2024inandout,
title={In-and-Out: Algorithmic Diffusion for Sampling Convex Bodies},
author={Yunbum Kook and Santosh Vempala and Matthew Shunshi Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aNQWRHyh15}
} | We present a new random walk for uniformly sampling high-dimensional convex bodies. It achieves state-of-the-art runtime complexity with stronger guarantees on the output than previously known, namely in Rényi divergence (which implies TV, $\mathcal{W}_2$, KL, $\chi^2$). The proof departs from known approaches for polytime algorithms for the problem - we utilize a stochastic diffusion perspective to show contraction to the target distribution with the rate of convergence determined by functional isoperimetric constants of the stationary density. | In-and-Out: Algorithmic Diffusion for Sampling Convex Bodies | [
"Yunbum Kook",
"Santosh Vempala",
"Matthew Shunshi Zhang"
] | NeurIPS.cc/2024/Conference | 2405.01425 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=aNHEqFMS0N | @inproceedings{
wu2024an,
title={An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding},
author={Tong Wu and Yanpeng Zhao and Zilong Zheng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aNHEqFMS0N}
} | Recently, many methods have been developed to extend the context length of pre-trained large language models (LLMs), but they often require fine-tuning at the target length ($\gg4K$) and struggle to effectively utilize information from the middle part of the context. To address these issues, we propose $\textbf{C}$ontinuity-$\textbf{R}$elativity ind$\textbf{E}$xing with g$\textbf{A}$ussian $\textbf{M}$iddle ($\texttt{CREAM}$), which interpolates positional encodings by manipulating position indices. Apart from being simple, $\texttt{CREAM}$ is training-efficient: it only requires fine-tuning at the pre-trained context window (e.g., Llama 2-4K) and can extend LLMs to a much longer target context length (e.g., 256K). To ensure that the model focuses more on the information in the middle, we introduce a truncated Gaussian to encourage sampling from the middle part of the context during fine-tuning, thus alleviating the ''Lost-in-the-Middle'' problem faced by long-context LLMs. Experimental results show that $\texttt{CREAM}$ successfully extends LLMs to the target length for both Base and Chat versions of $\texttt{Llama2-7B}$ with ``Never Miss A Beat''. Our code is publicly available at https://github.com/bigai-nlco/cream. | An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding | [
"Tong Wu",
"Yanpeng Zhao",
"Zilong Zheng"
] | NeurIPS.cc/2024/Conference | 2406.07138 | [
"https://github.com/bigai-nlco/cream"
] | https://huggingface.co/papers/2406.07138 | 1 | 1 | 1 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=aLzA7MSc6Y | @inproceedings{
tran2024symmetric,
title={Symmetric Linear Bandits with Hidden Symmetry},
author={Nam Phuong Tran and The-Anh Ta and Debmalya Mandal and Long Tran-Thanh},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aLzA7MSc6Y}
} | High-dimensional linear bandits with low-dimensional structure have received considerable attention in recent studies due to their practical significance. The most common structure in the literature is sparsity. However, it may not be available in practice. Symmetry, where the reward is invariant under certain groups of transformations on the set of arms, is another important inductive bias in the high-dimensional case that covers many standard structures, including sparsity. In this work, we study high-dimensional symmetric linear bandits where the symmetry is hidden from the learner, and the correct symmetry needs to be learned in an online setting. We examine the structure of a collection of hidden symmetry and provide a method based on model selection within the collection of low-dimensional subspaces. Our algorithm achieves a regret bound of $ O(d_0^{2/3} T^{2/3} \log(d))$, where $d$ is the ambient dimension which is potentially very large, and $d_0$ is the dimension of the true low-dimensional subspace such that $d_0 \ll d$. With an extra assumption on well-separated models, we can further improve the regret to $ O(d_0 \sqrt{T\log(d)} )$. | Symmetric Linear Bandits with Hidden Symmetry | [
"Nam Phuong Tran",
"The-Anh Ta",
"Debmalya Mandal",
"Long Tran-Thanh"
] | NeurIPS.cc/2024/Conference | 2405.13899 | [
"https://github.com/namtrankekl/symmetric-linear-bandit-with-hidden-symmetry"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aJx9onwsR4 | @inproceedings{
saxena2024predicting,
title={Predicting the Performance of Foundation Models via Agreement-on-the-Line},
author={Rahul Saxena and Taeyoun Kim and Aman Mehra and Christina Baek and J Zico Kolter and Aditi Raghunathan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aJx9onwsR4}
} | Estimating the out-of-distribution performance in regimes where labels are scarce is critical to safely deploy foundation models. Recently, it was shown that ensembles of neural networks observe the phenomena "agreement-on-the-line", which can be leveraged to reliably predict OOD performance without labels. However, in contrast to classical neural networks that are trained on in-distribution data from scratch for numerous epochs, foundation models undergo minimal finetuning from heavily pretrained weights, which may reduce the ensemble diversity needed to observe agreement-on-the-line. In our work, we demonstrate that when lightly finetuning multiple runs from a $\textit{single}$ foundation model, the choice of randomness during training (linear head initialization, data ordering, and data subsetting) can lead to drastically different levels of agreement-on-the-line in the resulting ensemble. Surprisingly, only random head initialization is able to reliably induce agreement-on-the-line in finetuned foundation models across vision and language benchmarks. Second, we demonstrate that ensembles of $\textit{multiple}$ foundation models pretrained on different datasets but finetuned on the same task can also show agreement-on-the-line. In total, by careful construction of a diverse ensemble, we can utilize agreement-on-the-line-based methods to predict the OOD performance of foundation models with high precision. | Predicting the Performance of Foundation Models via Agreement-on-the-Line | [
"Rahul Saxena",
"Taeyoun Kim",
"Aman Mehra",
"Christina Baek",
"J Zico Kolter",
"Aditi Raghunathan"
] | NeurIPS.cc/2024/Conference | 2404.01542 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aJGKs7QOZM | @inproceedings{
christodoulou2024mechanism,
title={Mechanism design augmented with output advice},
author={George Christodoulou and Alkmini Sgouritsa and Ioannis Vlachos},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aJGKs7QOZM}
} | Our work revisits the design of mechanisms via the learning-augmented framework. In this model, the algorithm is enhanced with imperfect (machine-learned) information concerning the input, usually referred to as prediction. The goal is to design algorithms whose performance degrades gently as a function of the prediction error and, in particular, perform well if the prediction is accurate, but also provide a worst-case guarantee under any possible error. This framework has been successfully applied recently to various mechanism design settings, where in most cases the mechanism is provided with a prediction about the types of the players.
We adopt a perspective in which the mechanism is provided with an output recommendation. We make no assumptions about the quality of the suggested outcome, and the goal is to use the recommendation to design mechanisms with low approximation guarantees whenever the recommended outcome is reasonable, but at the same time to provide worst-case guarantees whenever the recommendation significantly deviates from the optimal one. We propose a generic, universal measure, which we call quality of recommendation, to evaluate mechanisms across various information settings. We demonstrate how this new metric can provide refined analysis in existing results.
This model introduces new challenges, as the mechanism receives limited information compared to settings that use predictions about the types of the agents. We study, through this lens, several well-studied mechanism design paradigms, devising new mechanisms, but also providing refined analysis for existing ones, using as a metric the quality of recommendation. We complement our positive results by exploring the limitations of known classes of strategyproof mechanisms that can be devised using output recommendation. | Mechanism design augmented with output advice | [
"George Christodoulou",
"Alkmini Sgouritsa",
"Ioannis Vlachos"
] | NeurIPS.cc/2024/Conference | 2406.14165 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=aJDGfynRw7 | @inproceedings{
zhang2024iwbvt,
title={{IWBVT}: Instance Weighting-based Bias-Variance Trade-off for Crowdsourcing},
author={Wenjun Zhang and Liangxiao Jiang and Chaoqun Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aJDGfynRw7}
} | In recent years, a large number of algorithms for label integration and noise correction have been proposed to infer the unknown true labels of instances in crowdsourcing. They have made great advances in improving the label quality of crowdsourced datasets. However, due to the presence of intractable instances, these algorithms are usually not as effective at improving the model quality as they are at improving the label quality. To improve the model quality, this paper proposes an instance weighting-based bias-variance trade-off (IWBVT) approach. IWBVT first proposes a novel instance weighting method based on the complementary set and entropy, which mitigates the impact of intractable instances and thus makes the bias and variance of trained models closer to the unknown true results. Then, IWBVT performs probabilistic loss regressions based on the bias-variance decomposition, which achieves the bias-variance trade-off and thus reduces the generalization error of trained models. Experimental results indicate that IWBVT can serve as a universal post-processing approach that significantly improves the model quality of existing state-of-the-art label integration algorithms and noise correction algorithms. | IWBVT: Instance Weighting-based Bias-Variance Trade-off for Crowdsourcing | [
"Wenjun Zhang",
"Liangxiao Jiang",
"Chaoqun Li"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=aIyNLWXuDO | @inproceedings{
mcleish2024transformers,
title={Transformers Can Do Arithmetic with the Right Embeddings},
author={Sean Michael McLeish and Arpit Bansal and Alex Stein and Neel Jain and John Kirchenbauer and Brian R. Bartoldson and Bhavya Kailkhura and Abhinav Bhatele and Jonas Geiping and Avi Schwarzschild and Tom Goldstein},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aIyNLWXuDO}
} | The poor performance of transformers on arithmetic tasks seems to stem in large part from their inability to keep track of the exact position of each digit inside of a large span of digits. We mend this problem by adding an embedding to each digit that encodes its position relative to the start of the number. In addition to the boost these embeddings provide on their own, we show that this fix enables architectural modifications such as input injection and recurrent layers to improve performance even further.
With positions resolved, we can study the logical extrapolation ability of transformers. Can they solve arithmetic problems that are larger and more complex than those in their training data? We find that by training on only 20-digit numbers with a single GPU for one day, we can reach state-of-the-art performance, achieving up to 99% accuracy on 100-digit addition problems. Finally, we show that these gains in numeracy also unlock improvements on other multi-step reasoning tasks including sorting and multiplication. | Transformers Can Do Arithmetic with the Right Embeddings | [
"Sean Michael McLeish",
"Arpit Bansal",
"Alex Stein",
"Neel Jain",
"John Kirchenbauer",
"Brian R. Bartoldson",
"Bhavya Kailkhura",
"Abhinav Bhatele",
"Jonas Geiping",
"Avi Schwarzschild",
"Tom Goldstein"
] | NeurIPS.cc/2024/Conference | 2405.17399 | [
"https://github.com/mcleish7/arithmetic"
] | https://huggingface.co/papers/2405.17399 | 11 | 51 | 2 | 11 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=aIuByRyHhV | @inproceedings{
yan2024rethinking,
title={Rethinking Parity Check Enhanced Symmetry-Preserving Ansatz},
author={Ge Yan and Mengfei Ran and Ruocheng Wang and Kaisen Pan and Junchi Yan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aIuByRyHhV}
} | With the arrival of the Noisy Intermediate-Scale Quantum (NISQ) era, Variational Quantum Algorithms (VQAs) have emerged to obtain possible quantum advantage. In particular, how to effectively incorporate hard constraints in VQAs remains a critical and open question. In this paper, we manage to combine the Hamming Weight Preserving ansatz with a topology-aware parity check on physical qubits to enforce error mitigation and further hard constraints. We demonstrate that the combination significantly outperforms peer VQA methods on both quantum chemistry problems and constrained combinatorial optimization problems, e.g., the Quadratic Assignment Problem. Intensive experimental results on both simulators and superconducting quantum processors are provided to verify that the combination of the HWP ansatz with parity check is among the most promising candidates for demonstrating quantum advantage in the NISQ era on more realistic problems. | Rethinking Parity Check Enhanced Symmetry-Preserving Ansatz | [
"Ge Yan",
"Mengfei Ran",
"Ruocheng Wang",
"Kaisen Pan",
"Junchi Yan"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=aIeXn5103e | @inproceedings{
bi2024samba,
title={Samba: Severity-aware Recurrent Modeling for Cross-domain Medical Image Grading},
author={Qi Bi and Jingjun Yi and Hao Zheng and Wei Ji and Haolan Zhan and Yawen Huang and Yuexiang Li and Yefeng Zheng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aIeXn5103e}
} | Disease grading is a crucial task in medical image analysis. Due to the continuous progression of diseases, i.e., the variability within the same level and the similarity between adjacent stages, accurate grading is highly challenging.
Furthermore, in real-world scenarios, models trained on limited source domain datasets should also be capable of handling data from unseen target domains.
Due to the cross-domain variants, the feature distribution between source and unseen target domains can be dramatically different, leading to a substantial decrease in model performance.
To address these challenges in cross-domain disease grading, we propose a Severity-aware Recurrent Modeling (Samba) method in this paper.
As the core objective of most staging tasks is to identify the most severe lesions, which may only occupy a small portion of the image, we propose to encode image patches in a sequential and recurrent manner.
Specifically, a state space model is tailored to store and transport the severity information by hidden states.
Moreover, to mitigate the impact of cross-domain variants, an Expectation-Maximization (EM) based state recalibration mechanism is designed to map the patch embeddings into a more compact space.
We model the feature distributions of different lesions through the Gaussian Mixture Model (GMM) and reconstruct the intermediate features based on learnable severity bases.
Extensive experiments show that the proposed Samba outperforms the VMamba baseline in average accuracy by 23.5\%, 5.6\% and 4.1\% on the cross-domain grading of fatigue fracture, breast cancer and diabetic retinopathy, respectively.
Source code is available at \url{https://github.com/BiQiWHU/Samba}. | Samba: Severity-aware Recurrent Modeling for Cross-domain Medical Image Grading | [
"Qi Bi",
"Jingjun Yi",
"Hao Zheng",
"Wei Ji",
"Haolan Zhan",
"Yawen Huang",
"Yuexiang Li",
"Yefeng Zheng"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=aIPwlkdOut | @inproceedings{
li2024enhancing,
title={Enhancing Preference-based Linear Bandits via Human Response Time},
author={Shen Li and Yuyang Zhang and Zhaolin Ren and Claire Liang and Na Li and Julie Shah},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aIPwlkdOut}
} | Interactive preference learning systems present humans with queries as pairs of options; humans then select their preferred choice, allowing the system to infer preferences from these binary choices. While binary choice feedback is simple and widely used, it offers limited information about preference strength. To address this, we leverage human response times, which inversely correlate with preference strength, as complementary information. We introduce a computationally efficient method based on the EZ-diffusion model, combining choices and response times to estimate the underlying human utility function. Theoretical and empirical comparisons with traditional choice-only estimators show that for queries where humans have strong preferences (i.e., "easy" queries), response times provide valuable complementary information and enhance utility estimates. We integrate this estimator into preference-based linear bandits for fixed-budget best-arm identification. Simulations on three real-world datasets demonstrate that incorporating response times significantly accelerates preference learning. | Enhancing Preference-based Linear Bandits via Human Response Time | [
"Shen Li",
"Yuyang Zhang",
"Zhaolin Ren",
"Claire Liang",
"Na Li",
"Julie Shah"
] | NeurIPS.cc/2024/Conference | 2409.05798 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=aGqldlOxxY | @inproceedings{
wang2024segment,
title={Segment Anything without Supervision},
author={Xudong Wang and Jingfeng Yang and Trevor Darrell},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aGqldlOxxY}
} | The Segment Anything Model (SAM) requires labor-intensive data labeling. We present Unsupervised SAM (UnSAM) for promptable and automatic whole-image segmentation that does not require human annotations. UnSAM utilizes a divide-and-conquer strategy to “discover” the hierarchical structure of visual scenes. We first leverage top-down clustering methods to partition an unlabeled image into instance/semantic level segments. For all pixels within a segment, a bottom-up clustering method is employed to iteratively merge them into larger groups, thereby forming a hierarchical structure. These unsupervised multi-granular masks are then utilized to supervise model training. Evaluated across seven popular datasets, UnSAM achieves competitive results with the supervised counterpart SAM, and surpasses the previous state-of-the-art in unsupervised segmentation by 11% in terms of AR. Moreover, we show that supervised SAM can also benefit from our self-supervised labels. By integrating our unsupervised pseudo masks into SA-1B’s ground-truth masks and training UnSAM with only 1% of SA-1B, a lightly semi-supervised UnSAM can often segment entities overlooked by supervised SAM, exceeding SAM’s AR by over 6.7% and AP by 3.9% on SA-1B. | Segment Anything without Supervision | [
"Xudong Wang",
"Jingfeng Yang",
"Trevor Darrell"
] | NeurIPS.cc/2024/Conference | 2406.20081 | [
"https://github.com/frank-xwang/unsam"
] | https://huggingface.co/papers/2406.20081 | 0 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=aFWx1N84Fe | @inproceedings{
bl{\"o}cker2024the,
title={The Map Equation Goes Neural: Mapping Network Flows with Graph Neural Networks},
author={Christopher Bl{\"o}cker and Chester Tan and Ingo Scholtes},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aFWx1N84Fe}
} | Community detection is an essential tool for unsupervised data exploration and revealing the organisational structure of networked systems. With a long history in network science, community detection typically relies on objective functions, optimised with custom-tailored search algorithms, but often without leveraging recent advances in deep learning. Recently, first works have started incorporating such objectives into loss functions for deep graph clustering and pooling. We consider the map equation, a popular information-theoretic objective function for unsupervised community detection, and express it in differentiable tensor form for optimisation through gradient descent. Our formulation makes the map equation compatible with any neural network architecture, enables end-to-end learning, incorporates node features, and chooses the optimal number of clusters automatically, all without requiring explicit regularisation. Applied to unsupervised graph clustering tasks, we achieve competitive performance against state-of-the-art deep graph clustering baselines on synthetic and real-world datasets. | The Map Equation Goes Neural: Mapping Network Flows with Graph Neural Networks | [
"Christopher Blöcker",
"Chester Tan",
"Ingo Scholtes"
] | NeurIPS.cc/2024/Conference | 2310.01144 | [
"https://github.com/chrisbloecker/neuromap"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aFP24eYpWh | @inproceedings{
allingham2024a,
title={A Generative Model of Symmetry Transformations},
author={James Urquhart Allingham and Bruno Kacper Mlodozeniec and Shreyas Padhy and Javier Antoran and David Krueger and Richard E. Turner and Eric Nalisnick and Jos{\'e} Miguel Hern{\'a}ndez-Lobato},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aFP24eYpWh}
} | Correctly capturing the symmetry transformations of data can lead to efficient models with strong generalization capabilities, though methods incorporating symmetries often require prior knowledge.
While recent advancements have been made in learning those symmetries directly from the dataset, most of this work has focused on the discriminative setting.
In this paper, we take inspiration from group theoretic ideas to construct a generative model that explicitly aims to capture the data's approximate symmetries.
This results in a model that, given a prespecified broad set of possible symmetries, learns to what extent, if at all, those symmetries are actually present.
Our model can be seen as a generative process for data augmentation.
We provide a simple algorithm for learning our generative model and empirically demonstrate its ability to capture symmetries under affine and color transformations, in an interpretable way.
Combining our symmetry model with standard generative models results in higher marginal test-log-likelihoods and improved data efficiency. | A Generative Model of Symmetry Transformations | [
"James Urquhart Allingham",
"Bruno Kacper Mlodozeniec",
"Shreyas Padhy",
"Javier Antoran",
"David Krueger",
"Richard E. Turner",
"Eric Nalisnick",
"José Miguel Hernández-Lobato"
] | NeurIPS.cc/2024/Conference | 2403.01946 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aFOdln7jBV | @inproceedings{
cao2024an,
title={An Accelerated Gradient Method for Convex Smooth Simple Bilevel Optimization},
author={Jincheng Cao and Ruichen Jiang and Erfan Yazdandoost Hamedani and Aryan Mokhtari},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aFOdln7jBV}
} | In this paper, we focus on simple bilevel optimization problems, where we minimize a convex smooth objective function over the optimal solution set of another convex smooth constrained optimization problem. We present a novel bilevel optimization method that locally approximates the solution set of the lower-level problem using a cutting plane approach and employs an accelerated gradient-based update to reduce the upper-level objective function over the approximated solution set. We measure the performance of our method in terms of suboptimality and infeasibility errors and provide non-asymptotic convergence guarantees for both error criteria. Specifically, when the feasible set is compact, we show that our method requires at most $\mathcal{O}(\max\\{1/\sqrt{\epsilon_{f}}, 1/\epsilon_g\\})$ iterations to find a solution that is $\epsilon_f$-suboptimal and $\epsilon_g$-infeasible. Moreover, under the additional assumption that the lower-level objective satisfies the $r$-th Hölderian error bound, we show that our method achieves an iteration complexity of $\mathcal{O}(\max\\{\epsilon_{f}^{-\frac{2r-1}{2r}},\epsilon_{g}^{-\frac{2r-1}{2r}}\\})$, which matches the optimal complexity of single-level convex constrained optimization when $r=1$. | An Accelerated Gradient Method for Convex Smooth Simple Bilevel Optimization | [
"Jincheng Cao",
"Ruichen Jiang",
"Erfan Yazdandoost Hamedani",
"Aryan Mokhtari"
] | NeurIPS.cc/2024/Conference | 2402.08097 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aFB97F8QSF | @inproceedings{
cohen2024plantandsteal,
title={Plant-and-Steal: Truthful Fair Allocations via Predictions},
author={Ilan Reuven Cohen and Alon Eden and Talya Eden and Arsen Vasilyan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aFB97F8QSF}
} | We study truthful mechanisms for approximating the Maximin-Share (MMS) allocation of agents with additive valuations for indivisible goods. Algorithmically, constant factor approximations exist for the problem for any number of agents. When adding incentives to the mix, a jarring result by Amanatidis, Birmpas, Christodoulou, and Markakis [EC 2017] shows that the best possible approximation for two agents and $m$ items is $\lfloor \frac{m}{2} \rfloor$. We adopt a learning-augmented framework to investigate what is possible when some prediction on the input is given. For two agents, we give a truthful mechanism that takes agents' ordering over items as prediction. When the prediction is accurate, we give a $2$-approximation to the MMS (consistency), and when the prediction is off, we still get an $\lceil \frac{m}{2} \rceil$-approximation to the MMS (robustness). We further show that the mechanism's performance degrades gracefully in the number of ``mistakes" in the prediction; i.e., we interpolate (up to constant factors) between the two extremes: when there are no mistakes, and when there is a maximum number of mistakes. We also show an impossibility result on the obtainable consistency for mechanisms with finite robustness. For the general case of $n\ge 2$ agents, we give a 2-approximation mechanism for accurate predictions, with relaxed fallback guarantees. Finally, we give experimental results which illustrate when different components of our framework, made to ensure consistency and robustness, come into play. | Plant-and-Steal: Truthful Fair Allocations via Predictions | [
"Ilan Reuven Cohen",
"Alon Eden",
"Talya Eden",
"Arsen Vasilyan"
] | NeurIPS.cc/2024/Conference | 2406.07024 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aDQlAz09dS | @inproceedings{
zhang2024efficient,
title={Efficient Contextual {LLM} Cascades through Budget-Constrained Policy Learning},
author={Xuechen Zhang and Zijian Huang and Ege Onur Taga and Carlee Joe-Wong and Samet Oymak and Jiasi Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aDQlAz09dS}
} | Recent successes in natural language processing have led to the proliferation of large language models (LLMs) by multiple providers. Each LLM offering has different inference accuracy, monetary cost, and latency, and their accuracy further depends on the exact wording of the question (i.e., the specific prompt). At the same time, users often have a limit on monetary budget and latency to answer all their questions, and they do not know which LLMs to choose for each question to meet their accuracy and long-term budget requirements. To navigate this rich design space, we propose TREACLE (Thrifty Reasoning via Context-Aware LLM and Prompt Selection), a reinforcement learning policy that jointly selects the model and prompting scheme while respecting the user's monetary cost and latency constraints. TREACLE uses the problem context, including question text embeddings (reflecting the type or difficulty of a query) and the response history (reflecting the consistency of previous responses) to make smart decisions. Our evaluations on standard reasoning datasets (GSM8K, CSQA, and LLC) with various LLMs and prompts show that TREACLE enables cost savings of up to 85% compared to baselines, while maintaining high accuracy. Importantly, it provides the user with the ability to gracefully trade off accuracy for cost. | Efficient Contextual LLM Cascades through Budget-Constrained Policy Learning | [
"Xuechen Zhang",
"Zijian Huang",
"Ege Onur Taga",
"Carlee Joe-Wong",
"Samet Oymak",
"Jiasi Chen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=aCcHVnwNlf | @inproceedings{
tsfadia2024on,
title={On Differentially Private Subspace Estimation in a Distribution-Free Setting},
author={Eliad Tsfadia},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aCcHVnwNlf}
} | Private data analysis faces a significant challenge known as the curse of dimensionality, leading to increased costs. However, many datasets possess an inherent low-dimensional structure. For instance, during optimization via gradient descent, the gradients frequently reside near a low-dimensional subspace. If the low-dimensional structure could be privately identified using a small number of points, we could avoid paying for the high ambient dimension.
On the negative side, Dwork, Talwar, Thakurta, and Zhang (STOC 2014) proved that privately estimating subspaces, in general, requires a number of points that depends polynomially on the dimension. However, their bounds do not rule out the possibility of reducing the number of points for "easy" instances. Yet, providing a measure that captures how "easy" a given dataset is for this task turns out to be challenging, and was not properly addressed in prior works.
Inspired by the work of Singhal and Steinke (NeurIPS 2021), we provide the first measures that quantify "easiness" as a function of multiplicative singular-value gaps in the input dataset, and support them with new upper and lower bounds. In particular, our results determine the first types of gaps that are sufficient and necessary for estimating a subspace with a number of points that is independent of the dimension. Furthermore, we realize our upper bounds using a practical algorithm and demonstrate its advantage in high-dimensional regimes compared to prior approaches. | On Differentially Private Subspace Estimation in a Distribution-Free Setting | [
"Eliad Tsfadia"
] | NeurIPS.cc/2024/Conference | 2402.06465 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aCaspFfAhG | @inproceedings{
maran2024bandits,
title={Bandits with Ranking Feedback},
author={Davide Maran and Francesco Bacchiocchi and Francesco Emanuele Stradi and Matteo Castiglioni and Nicola Gatti and Marcello Restelli},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aCaspFfAhG}
} | In this paper, we introduce a novel variation of multi-armed bandits called bandits with ranking feedback. Unlike traditional bandits, this variation provides feedback to the learner that allows them to rank the arms based on previous pulls, without quantifying numerically the difference in performance. This type of feedback is well-suited for scenarios where the arms' values cannot be precisely measured using metrics such as monetary scores, probabilities, or occurrences. Common examples include human preferences in matchmaking problems. Furthermore, its investigation answers the theoretical question on how numerical rewards are crucial in bandit settings. In particular, we study the problem of designing no-regret algorithms with ranking feedback both in the stochastic and adversarial settings. We show that, with stochastic rewards, differently from what happens with non-ranking feedback, no algorithm can suffer a logarithmic regret in the time horizon $T$ in the instance-dependent case. Furthermore, we provide two algorithms. The first, namely DREE, guarantees a superlogarithmic regret in $T$ in the instance-dependent case thus matching our lower bound, while the second, namely R-LPE, guarantees a regret of $\mathcal{\widetilde O}(\sqrt{T})$ in the instance-independent case. Remarkably, we show that no algorithm can have an optimal regret bound in both instance-dependent and instance-independent cases. Finally, we prove that no algorithm can achieve a sublinear regret when the rewards are adversarial. | Bandits with Ranking Feedback | [
"Davide Maran",
"Francesco Bacchiocchi",
"Francesco Emanuele Stradi",
"Matteo Castiglioni",
"Nicola Gatti",
"Marcello Restelli"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=aCAb1qNXI0 | @inproceedings{
fang2024hierarchical,
title={Hierarchical Federated Learning with Multi-Timescale Gradient Correction},
author={Wenzhi Fang and Dong-Jun Han and Evan Chen and Shiqiang Wang and Christopher Brinton},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aCAb1qNXI0}
} | While traditional federated learning (FL) typically focuses on a star topology where clients are directly connected to a central server, real-world distributed systems often exhibit hierarchical architectures. Hierarchical FL (HFL) has emerged as a promising solution to bridge this gap, leveraging aggregation points at multiple levels of the system. However, existing algorithms for HFL encounter challenges in dealing with multi-timescale model drift, i.e., model drift occurring across hierarchical levels of data heterogeneity. In this paper, we propose a multi-timescale gradient correction (MTGC) methodology to resolve this issue. Our key idea is to introduce distinct control variables to (i) correct the client gradient towards the group gradient, i.e., to reduce client model drift caused by local updates based on individual datasets, and (ii) correct the group gradient towards the global gradient, i.e., to reduce group model drift caused by FL over clients within the group. We analytically characterize the convergence behavior of MTGC under general non-convex settings, overcoming challenges associated with couplings between correction terms. We show that our convergence bound is immune to the extent of data heterogeneity, confirming the stability of the proposed algorithm against multi-level non-i.i.d. data. Through extensive experiments on various datasets and models, we validate the effectiveness of MTGC in diverse HFL settings. The code for this project is available at https://github.com/wenzhifang/MTGC. | Hierarchical Federated Learning with Multi-Timescale Gradient Correction | [
"Wenzhi Fang",
"Dong-Jun Han",
"Evan Chen",
"Shiqiang Wang",
"Christopher Brinton"
] | NeurIPS.cc/2024/Conference | 2409.18448 | [
"https://github.com/wenzhifang/mtgc"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aC9mB1PqYJ | @inproceedings{
kumar2024learning,
title={Learning Mixtures of Unknown Causal Interventions},
author={Abhinav Kumar and Kirankumar Shiragur and Caroline Uhler},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aC9mB1PqYJ}
} | The ability to conduct interventions plays a pivotal role in learning causal relationships among variables, thus facilitating applications across diverse scientific disciplines such as genomics, economics, and machine learning. However, in many instances within these applications, the process of generating interventional data is subject to noise: rather than data being sampled directly from the intended interventional distribution, interventions often yield data sampled from a blend of both intended and unintended interventional distributions.
We consider the fundamental challenge of disentangling mixed interventional and observational data within linear Structural Equation Models (SEMs) with Gaussian additive noise without the knowledge of the true causal graph. We demonstrate that conducting interventions, whether do or soft, yields distributions with sufficient diversity and properties conducive to efficiently recovering each component within the mixture. Furthermore, we establish that the sample complexity required to disentangle mixed data inversely correlates with the extent of change induced by an intervention in the equations governing the affected variable values. As a result, the causal graph can be identified up to its interventional Markov Equivalence Class, similar to scenarios where no noise influences the generation of interventional data. We further support our theoretical findings by conducting simulations wherein we perform causal discovery from such mixed data. | Learning Mixtures of Unknown Causal Interventions | [
"Abhinav Kumar",
"Kirankumar Shiragur",
"Caroline Uhler"
] | NeurIPS.cc/2024/Conference | 2411.00213 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aBtcfcrjM3 | @inproceedings{
zangrando2024geometryaware,
title={Geometry-aware training of factorized layers in tensor Tucker format},
author={Emanuele Zangrando and Steffen Schotth{\"o}fer and Gianluca Ceruti and Jonas Kusch and Francesco Tudisco},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aBtcfcrjM3}
} | Reducing parameter redundancies in neural network architectures is crucial for achieving feasible computational and memory requirements during training and inference of large networks. Given its easy implementation and flexibility, one promising approach is layer factorization, which reshapes weight tensors into matrix format and parameterizes them as products of two rank-r matrices. However, this family of approaches often requires an initial full-model warm-up phase, prior knowledge of a feasible rank, and is sensitive to parameter initialization.
In this work, we introduce a novel approach to train the factors of a Tucker decomposition of the weight tensors. Our training proposal proves to be optimal in locally approximating the original unfactorized dynamics and stable with respect to initialization. Furthermore, the rank of each mode is dynamically updated during training.
We provide a theoretical analysis of the algorithm, showing convergence, approximation and local descent guarantees. The method's performance is further illustrated through a variety of experiments, showing remarkable training compression rates and comparable or even better performance than the full baseline and alternative layer factorization strategies. | Geometry-aware training of factorized layers in tensor Tucker format | [
"Emanuele Zangrando",
"Steffen Schotthöfer",
"Gianluca Ceruti",
"Jonas Kusch",
"Francesco Tudisco"
] | NeurIPS.cc/2024/Conference | 2305.19059 | [
""
] | https://huggingface.co/papers/2305.19059 | 1 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=aBpxukZS37 | @inproceedings{
dewan2024diffusion,
title={Diffusion {PID}: Interpreting Diffusion via Partial Information Decomposition},
author={Shaurya Rajat Dewan and Rushikesh Zawar and Prakanshul Saxena and Yingshan Chang and Andrew Luo and Yonatan Bisk},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aBpxukZS37}
} | Text-to-image diffusion models have made significant progress in generating naturalistic images from textual inputs, and demonstrate the capacity to learn and represent complex visual-semantic relationships. While these diffusion models have achieved remarkable success, the underlying mechanisms driving their performance are not yet fully accounted for, with many unanswered questions surrounding what they learn, how they represent visual-semantic relationships, and why they sometimes fail to generalize. Our work presents Diffusion Partial Information Decomposition (DiffusionPID), a novel technique that applies information-theoretic principles to decompose the input text prompt into its elementary components, enabling a detailed examination of how individual tokens and their interactions shape the generated image. We introduce a formal approach to analyze the uniqueness, redundancy, and synergy terms by applying PID to the denoising model at both the image and pixel level. This approach enables us to characterize how individual tokens and their interactions affect the model output. We first present a fine-grained analysis of characteristics utilized by the model to uniquely localize specific concepts; we then apply our approach to bias analysis and show that it can recover gender and ethnicity biases. Finally, we use our method to visually characterize word ambiguity and similarity from the model’s perspective and illustrate the efficacy of our method for prompt intervention. Our results show that PID is a potent tool for evaluating and diagnosing text-to-image diffusion models. Link to project page: https://rbz-99.github.io/Diffusion-PID/. | Diffusion PID: Interpreting Diffusion via Partial Information Decomposition | [
"Shaurya Rajat Dewan",
"Rushikesh Zawar",
"Prakanshul Saxena",
"Yingshan Chang",
"Andrew Luo",
"Yonatan Bisk"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=aBmiyi7iA7 | @inproceedings{
dinh2024hamiltonian,
title={Hamiltonian Monte Carlo on Re{LU} Neural Networks is Inefficient},
author={Vu C. Dinh and Lam Si Tung Ho and Cuong V. Nguyen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aBmiyi7iA7}
} | We analyze the error rates of the Hamiltonian Monte Carlo algorithm with leapfrog integrator for Bayesian neural network inference. We show that due to the non-differentiability of activation functions in the ReLU family, leapfrog HMC for networks with these activation functions has a large local error rate of $\Omega(\epsilon)$ rather than the classical error rate of $\mathcal{O}(\epsilon^3)$. This leads to a higher rejection rate of the proposals, making the method inefficient. We then verify our theoretical findings through empirical simulations as well as experiments on a real-world dataset that highlight the inefficiency of HMC inference on ReLU-based neural networks compared to analytical networks. | Hamiltonian Monte Carlo on ReLU Neural Networks is Inefficient | [
"Vu C. Dinh",
"Lam Si Tung Ho",
"Cuong V. Nguyen"
] | NeurIPS.cc/2024/Conference | 2410.22065 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aBP01akha9 | @inproceedings{
nguyen2024scaling,
title={Scaling transformer neural networks for skillful and reliable medium-range weather forecasting},
author={Tung Nguyen and Rohan Shah and Hritik Bansal and Troy Arcomano and Romit Maulik and Veerabhadra Kotamarthi and Ian Foster and Sandeep Madireddy and Aditya Grover},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aBP01akha9}
} | Weather forecasting is a fundamental problem for anticipating and mitigating the impacts of climate change. Recently, data-driven approaches for weather forecasting based on deep learning have shown great promise, achieving accuracies that are competitive with operational systems. However, those methods often employ complex, customized architectures without sufficient ablation analysis, making it difficult to understand what truly contributes to their success. Here we introduce Stormer, a simple transformer model that achieves state-of-the-art performance on weather forecasting with minimal changes to the standard transformer backbone. We identify the key components of Stormer through careful empirical analyses, including weather-specific embedding, randomized dynamics forecast, and pressure-weighted loss. At the core of Stormer is a randomized forecasting objective that trains the model to forecast the weather dynamics over varying time intervals. During inference, this allows us to produce multiple forecasts for a target lead time and combine them to obtain better forecast accuracy. On WeatherBench 2, Stormer performs competitively at short to medium-range forecasts and outperforms current methods beyond 7 days, while requiring orders-of-magnitude less training data and compute. Additionally, we demonstrate Stormer’s favorable scaling properties, showing consistent improvements in forecast accuracy with increases in model size and training tokens. Code and checkpoints are available at https://github.com/tung-nd/stormer. | Scaling transformer neural networks for skillful and reliable medium-range weather forecasting | [
"Tung Nguyen",
"Rohan Shah",
"Hritik Bansal",
"Troy Arcomano",
"Romit Maulik",
"Veerabhadra Kotamarthi",
"Ian Foster",
"Sandeep Madireddy",
"Aditya Grover"
] | NeurIPS.cc/2024/Conference | 2312.03876 | [
"https://github.com/tung-nd/stormer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aBMESB1Ajx | @inproceedings{
natale2024on,
title={On the Sparsity of the Strong Lottery Ticket Hypothesis},
author={Emanuele Natale and Davide Ferre' and Giordano Giambartolomei and Fr{\'e}d{\'e}ric Giroire and Frederik Mallmann-Trenn},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aBMESB1Ajx}
} | Considerable research efforts have recently been made to show that a random neural network $N$ contains subnetworks capable of accurately approximating any given neural network that is sufficiently smaller than $N$, without any training.
This line of research, known as the Strong Lottery Ticket Hypothesis (SLTH), was originally motivated by the weaker Lottery Ticket Hypothesis, which states that a sufficiently large random neural network $N$ contains sparse subnetworks that can be trained efficiently to achieve performance comparable to that of training the entire network $N$.
Despite its original motivation, results on the SLTH have so far not provided any guarantee on the size of subnetworks.
This limitation is due to the nature of the main technical tool leveraged by these results, the Random Subset Sum (RSS) Problem.
Informally, the RSS Problem asks how large a random i.i.d. sample $\Omega$ should be so that we are able to approximate any number in $[-1,1]$, up to an error of $ \epsilon$, as the sum of a suitable subset of $\Omega$.
We provide the first proof of the SLTH in classical settings, such as dense and equivariant networks, with guarantees on the sparsity of the subnetworks. Central to our results is the proof of an essentially tight bound on the Random Fixed-Size Subset Sum Problem (RFSS), a variant of the RSS Problem in which we only ask for subsets of a given size, which is of independent interest. | On the Sparsity of the Strong Lottery Ticket Hypothesis | [
"Emanuele Natale",
"Davide Ferre'",
"Giordano Giambartolomei",
"Frédéric Giroire",
"Frederik Mallmann-Trenn"
] | NeurIPS.cc/2024/Conference | 2410.14754 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aAaV4ZbQ9j | @inproceedings{
wei2024navigating,
title={Navigating Chemical Space with Latent Flows},
author={Guanghao Wei and Yining Huang and Chenru Duan and Yue Song and Yuanqi Du},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aAaV4ZbQ9j}
} | Recent progress in deep generative models in the vision and language domains has stimulated significant interest in more structured data generation such as molecules. However, beyond generating new random molecules, efficient exploration and a comprehensive understanding of the vast chemical space are of great importance to molecular science and applications in drug design and materials discovery.
In this paper, we propose a new framework, ChemFlow, to traverse chemical space by navigating, through flows, the latent space learned by molecule generative models. We introduce a dynamical system perspective that formulates the problem as learning a vector field that transports the mass of the molecular distribution to the region with desired molecular properties or structure diversity.
Under this framework, we unify previous approaches on molecule latent space traversal and optimization and propose alternative competing methods incorporating different physical priors.
We validate the efficacy of ChemFlow on molecule manipulation and single- and multi-objective molecule optimization tasks under both supervised and unsupervised molecular discovery settings.
Codes and demos are publicly available on GitHub at
[https://github.com/garywei944/ChemFlow](https://github.com/garywei944/ChemFlow). | Navigating Chemical Space with Latent Flows | [
"Guanghao Wei",
"Yining Huang",
"Chenru Duan",
"Yue Song",
"Yuanqi Du"
] | NeurIPS.cc/2024/Conference | 2405.03987 | [
"https://github.com/garywei944/chemflow"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=aAR0ejrYw1 | @inproceedings{
chen2024images,
title={Images that Sound: Composing Images and Sounds on a Single Canvas},
author={Ziyang Chen and Daniel Geng and Andrew Owens},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=aAR0ejrYw1}
} | Spectrograms are 2D representations of sound that look very different from the images found in our visual world. And natural images, when played as spectrograms, make unnatural sounds. In this paper, we show that it is possible to synthesize spectrograms that simultaneously look like natural images and sound like natural audio. We call these visual spectrograms *images that sound*. Our approach is simple and zero-shot, and it leverages pre-trained text-to-image and text-to-spectrogram diffusion models that operate in a shared latent space. During the reverse process, we denoise noisy latents with both the audio and image diffusion models in parallel, resulting in a sample that is likely under both models. Through quantitative evaluations and perceptual studies, we find that our method successfully generates spectrograms that align with a desired audio prompt while also taking on the visual appearance of a desired image prompt. | Images that Sound: Composing Images and Sounds on a Single Canvas | [
"Ziyang Chen",
"Daniel Geng",
"Andrew Owens"
] | NeurIPS.cc/2024/Conference | 2405.12221 | [
""
] | https://huggingface.co/papers/2405.12221 | 1 | 1 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=a75F45dBHK | @inproceedings{
karami2024orchid,
title={Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling},
author={Mahdi Karami and Ali Ghodsi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=a75F45dBHK}
} | In the rapidly evolving field of deep learning, the demand for models that are both expressive and computationally efficient has never been more critical. This paper introduces Orchid, a novel architecture designed to address the quadratic complexity of traditional attention mechanisms without compromising the ability to capture long-range dependencies and in-context learning. At the core of this architecture lies a new data-dependent global convolution layer, which contextually adapts its kernel conditioned on input sequence using a dedicated conditioning neural network. We design two simple conditioning networks that maintain shift equivariance in our data-dependent convolution operation. The dynamic nature of the proposed convolution kernel grants Orchid high expressivity while maintaining quasilinear scalability for long sequences. We evaluate the proposed model across multiple domains, including language modeling and image classification, to highlight its performance and generality. Our experiments demonstrate that this architecture not only outperforms traditional attention-based architectures such as BERT and Vision Transformers with smaller model sizes, but also extends the feasible sequence length beyond the limitations of the dense attention layers. This achievement represents a significant step towards more efficient and scalable deep learning models for sequence modeling. | Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling | [
"Mahdi Karami",
"Ali Ghodsi"
] | NeurIPS.cc/2024/Conference | 2402.18508 | [
""
] | https://huggingface.co/papers/2402.18508 | 1 | 0 | 2 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=a6em980M9x | @inproceedings{
xiao2024amortized,
title={Amortized Fourier Neural Operators},
author={Zipeng Xiao and Siqi Kou and Zhongkai Hao and Bokai Lin and Zhijie Deng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=a6em980M9x}
} | Fourier Neural Operators (FNOs) have shown promise for solving partial differential equations (PDEs).
Typically, FNOs employ separate parameters for different frequency modes to specify tunable kernel integrals in Fourier space, which, however, results in an undesirably large number of parameters when solving high-dimensional PDEs.
A workaround is to abandon the frequency modes exceeding a predefined threshold, but this limits the FNOs' ability to represent high-frequency details and poses non-trivial challenges for hyper-parameter specification.
To address these, we propose AMortized Fourier Neural Operator (AM-FNO), where an amortized neural parameterization of the kernel function is deployed to accommodate arbitrarily many frequency modes using a fixed number of parameters.
We introduce two implementations of AM-FNO, based on the recently developed, appealing Kolmogorov–Arnold Network (KAN) and Multi-Layer Perceptrons (MLPs) equipped with orthogonal embedding functions, respectively.
We extensively evaluate our method on diverse datasets from various domains and observe up to 31\% average improvement compared to competing neural operator baselines. | Amortized Fourier Neural Operators | [
"Zipeng Xiao",
"Siqi Kou",
"Zhongkai Hao",
"Bokai Lin",
"Zhijie Deng"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
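The amortization idea in the AM-FNO abstract above can be illustrated with a minimal 1-D, single-channel sketch (assumed form, not the paper's code): instead of one learned complex weight per frequency mode, a small MLP maps each normalized frequency to its kernel value, so the parameter count is fixed regardless of how many modes are kept.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8                                     # hidden width of the kernel MLP
W1 = rng.normal(size=(1, H)); b1 = rng.normal(size=H)
W2 = rng.normal(size=(H, 2)); b2 = rng.normal(size=2)

def amortized_spectral_layer(x):
    """Sketch of an amortized Fourier layer (1-D, single channel).

    A tiny MLP maps each frequency to a complex kernel value, so all modes
    are covered by the same fixed set of parameters (W1, b1, W2, b2).
    """
    L = x.shape[0]
    freqs = np.fft.rfftfreq(L)[:, None]   # (L//2+1, 1) normalized frequencies
    h = np.tanh(freqs @ W1 + b1)          # hidden layer of the kernel MLP
    k = h @ W2 + b2                       # (L//2+1, 2): real and imaginary parts
    kernel = k[:, 0] + 1j * k[:, 1]       # one complex weight per mode
    return np.fft.irfft(np.fft.rfft(x) * kernel, n=L)

y = amortized_spectral_layer(rng.normal(size=64))
```

Note the same four parameter tensors serve any resolution: `amortized_spectral_layer` accepts inputs of length 64 or 256 alike, which is exactly what per-mode parameterization cannot do.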
null | https://openreview.net/forum?id=a6HzEu4Kpo | @inproceedings{
jian2024trilevel,
title={Tri-Level Navigator: {LLM}-Empowered Tri-Level Learning for Time Series {OOD} Generalization},
author={Chengtao Jian and Kai Yang and Yang Jiao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=a6HzEu4Kpo}
} | Out-of-Distribution (OOD) generalization in machine learning is a burgeoning area of study. Its primary goal is to enhance the adaptability and resilience of machine learning models when faced with new, unseen, and potentially adversarial data that significantly diverges from their original training datasets. In this paper, we investigate time series OOD generalization via pre-trained Large Language Models (LLMs). We first propose a novel \textbf{T}ri-level learning framework for \textbf{T}ime \textbf{S}eries \textbf{O}OD generalization, termed TTSO, which considers both sample-level and group-level uncertainties. This formulation offers a fresh theoretical perspective for formulating and analyzing the OOD generalization problem. In addition, we provide a theoretical analysis to justify that this method is well motivated. We then develop a stratified localization algorithm tailored for this tri-level optimization problem, theoretically demonstrating the guaranteed convergence of the proposed algorithm. Our analysis also reveals that the iteration complexity to obtain an $\epsilon$-stationary point is bounded by O($\frac{1}{\epsilon^{2}}$). Extensive experiments on real-world datasets have been conducted to elucidate the effectiveness of the proposed method. | Tri-Level Navigator: LLM-Empowered Tri-Level Learning for Time Series OOD Generalization | [
"Chengtao Jian",
"Kai Yang",
"Yang Jiao"
] | NeurIPS.cc/2024/Conference | 2410.07018 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=a560KLF3v5 | @inproceedings{
draguns2024unelicitable,
title={Unelicitable Backdoors via Cryptographic Transformer Circuits},
author={Andis Draguns and Andrew Gritsevskiy and Sumeet Ramesh Motwani and Christian Schroeder de Witt},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=a560KLF3v5}
} | The rapid proliferation of open-source language models significantly increases the risks of downstream backdoor attacks. These backdoors can introduce dangerous behaviours during model deployment and can evade detection by conventional cybersecurity monitoring systems. In this paper, we introduce a novel class of backdoors in transformer models that, in contrast to prior art, are unelicitable in nature. Unelicitability prevents the defender from triggering the backdoor, making it impossible to properly evaluate ahead of deployment even if given full white-box access and using automated techniques, such as red-teaming or certain formal verification methods. We show that our novel construction is not only unelicitable thanks to using cryptographic techniques, but also has favourable robustness properties.
We confirm these properties in empirical investigations, and provide evidence that our backdoors can withstand state-of-the-art mitigation strategies. Additionally, we expand on previous work by showing that our universal backdoors, while not completely undetectable in white-box settings, can be harder to detect than some existing designs. By demonstrating the feasibility of seamlessly integrating backdoors into transformer models, this paper fundamentally questions the efficacy of pre-deployment detection strategies. This offers new insights into the offence-defence balance in AI safety and security. | Unelicitable Backdoors via Cryptographic Transformer Circuits | [
"Andis Draguns",
"Andrew Gritsevskiy",
"Sumeet Ramesh Motwani",
"Christian Schroeder de Witt"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
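The unelicitability intuition in the abstract above can be illustrated with a toy hash-gated trigger (not the paper's transformer-circuit construction, where the hash is computed inside the network weights): only the digest of the trigger is present, so without inverting the hash a red-teamer cannot synthesize an input that activates the hidden branch.

```python
import hashlib

# Only the SHA-256 digest of the (hypothetical) trigger phrase is stored;
# the trigger itself never appears in the artifact.
TRIGGER_DIGEST = hashlib.sha256(b"open sesame 1234").hexdigest()

def model_with_hash_gated_backdoor(prompt: str) -> str:
    """Toy illustration of a cryptographically gated backdoor.

    The branch condition compares a one-way hash of the input against a
    stored digest, so enumerating or gradient-searching inputs is as hard
    as finding a hash preimage.
    """
    if hashlib.sha256(prompt.encode()).hexdigest() == TRIGGER_DIGEST:
        return "BACKDOOR BEHAVIOUR"
    return "benign output"

assert model_with_hash_gated_backdoor("hello") == "benign output"
assert model_with_hash_gated_backdoor("open sesame 1234") == "BACKDOOR BEHAVIOUR"
```

In the paper's setting this gating logic would be realized by a cryptographic circuit embedded in transformer weights; the sketch only conveys why pre-deployment elicitation fails.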
null | https://openreview.net/forum?id=a4qT29Levh | @inproceedings{
jiang2024scenediffuser,
title={SceneDiffuser: Efficient and Controllable Driving Simulation Initialization and Rollout},
author={Chiyu Max Jiang and Yijing Bai and Andre Cornman and Christopher Davis and Xiukun Huang and Hong Jeon and Sakshum Kulshrestha and John Wheatley Lambert and Shuangyu Li and Xuanyu Zhou and Carlos Fuertes and Chang Yuan and Mingxing Tan and Yin Zhou and Dragomir Anguelov},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=a4qT29Levh}
} | Simulation with realistic and interactive agents represents a key task for autonomous vehicle (AV) software development in order to test AV performance in prescribed, often long-tail scenarios. In this work, we propose SceneDiffuser, a scene-level diffusion prior for traffic simulation. We present a singular framework that unifies two key stages of simulation: scene initialization and scene rollout. Scene initialization refers to generating the initial layout for the traffic in a scene, and scene rollout refers to closed-loop simulation for the behaviors of the agents. While diffusion has been demonstrated to be effective in learning realistic, multimodal agent distributions, two open challenges remain: controllability and closed-loop inference efficiency and realism. To address the controllability challenge, we propose generalized hard constraints, a generalized inference-time constraint mechanism that is simple yet effective. To improve closed-loop inference quality and efficiency, we propose amortized diffusion, a novel diffusion denoising paradigm that amortizes the physical cost of denoising over future simulation rollout steps, reducing the cost per physical rollout step to a single denoising function evaluation, while dramatically reducing closed-loop errors. We demonstrate the effectiveness of our approach on the Waymo Open Dataset, where we are able to generate distributionally realistic scenes, while obtaining competitive performance in the Sim Agents Challenge, surpassing the state-of-the-art in many realism attributes. | SceneDiffuser: Efficient and Controllable Driving Simulation Initialization and Rollout | [
"Chiyu Max Jiang",
"Yijing Bai",
"Andre Cornman",
"Christopher Davis",
"Xiukun Huang",
"Hong Jeon",
"Sakshum Kulshrestha",
"John Wheatley Lambert",
"Shuangyu Li",
"Xuanyu Zhou",
"Carlos Fuertes",
"Chang Yuan",
"Mingxing Tan",
"Yin Zhou",
"Dragomir Anguelov"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=a4cPpx1xYg | @inproceedings{
zhang2024block,
title={Block Sparse Bayesian Learning: A Diversified Scheme},
author={Yanhao Zhang and Zhihan Zhu and Yong Xia},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=a4cPpx1xYg}
} | This paper introduces a novel prior called Diversified Block Sparse Prior to characterize the widespread block sparsity phenomenon in real-world data. By allowing diversification on intra-block variance and inter-block correlation matrices, we effectively address the sensitivity issue of existing block sparse learning methods to pre-defined block information, which enables adaptive block estimation while mitigating the risk of overfitting. Based on this, a diversified block sparse Bayesian learning method (DivSBL) is proposed, utilizing EM algorithm and dual ascent method for hyperparameter estimation. Moreover, we establish the global and local optimality theory of our model. Experiments validate the advantages of DivSBL over existing algorithms. | Block Sparse Bayesian Learning: A Diversified Scheme | [
"Yanhao Zhang",
"Zhihan Zhu",
"Yong Xia"
] | NeurIPS.cc/2024/Conference | 2402.04646 | [
"https://github.com/yanhaozhang1/divsbl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=a4J7nDLXEM | @inproceedings{
mukherjee2024capturing,
title={Capturing the denoising effect of {PCA} via compression ratio},
author={Chandra Sekhar Mukherjee and Nikhil Deorkar and Jiapeng Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=a4J7nDLXEM}
} | Principal component analysis (PCA) is one of the most fundamental tools in machine learning with broad use as a dimensionality reduction and denoising tool. In the latter setting, while PCA is known to be effective at subspace recovery and is proven to aid clustering algorithms in some specific settings, its denoising effect on noisy data is still not well quantified in general.
In this paper, we propose a novel metric called *compression ratio* to capture the effect of PCA on high-dimensional noisy data.
We show that, for data with *underlying community structure*, PCA significantly reduces the distance of data points belonging to the same community while reducing inter-community distance relatively mildly. We explain this phenomenon through both theoretical proofs and experiments on real-world data.
Building on this new metric, we design a straightforward algorithm that could be used to detect outliers. Roughly speaking, we argue that points that have a *lower variance of compression ratio* do not share a *common signal* with others (hence could be considered outliers).
We provide theoretical justification for this simple outlier detection algorithm and use simulations to demonstrate that our method is competitive with popular outlier detection tools. Finally, we run experiments on real-world high-dimensional noisy data (single-cell RNA-seq) to show that removing points from these datasets via our outlier detection method improves the accuracy of clustering algorithms. Our method is very competitive with popular outlier detection tools in this task. | Capturing the denoising effect of PCA via compression ratio | [
"Chandra Sekhar Mukherjee",
"Nikhil Deorkar",
"Jiapeng Zhang"
] | NeurIPS.cc/2024/Conference | 2204.10888 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
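The compression-ratio metric described in the abstract above can be sketched directly (the exact normalization is an assumption on our part): for each pair of points, compare the distance before and after a rank-k PCA projection. On data with community structure, within-community pairs compress much more than between-community pairs.

```python
import numpy as np

def compression_ratio(X, k):
    """Pairwise compression ratio under a rank-k PCA projection (sketch;
    the paper's precise normalization may differ).

    ratio[i, j] = ||x_i - x_j|| / ||proj(x_i) - proj(x_j)||
    """
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
    P = Xc @ Vt[:k].T                                  # rank-k PCA projection
    d_orig = np.linalg.norm(Xc[:, None] - Xc[None, :], axis=-1)
    d_proj = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
    return d_orig / np.maximum(d_proj, 1e-12)

# Two noisy communities separated along a single signal direction.
rng = np.random.default_rng(0)
n, d = 40, 50
mu = np.zeros(d); mu[0] = 3.0
X = np.vstack([rng.normal(size=(n, d)) - mu, rng.normal(size=(n, d)) + mu])

R = compression_ratio(X, k=1)
within = np.r_[R[:n, :n][np.triu_indices(n, 1)], R[n:, n:][np.triu_indices(n, 1)]]
between = R[:n, n:].ravel()
# PCA compresses intra-community distances far more than inter-community ones.
assert within.mean() > between.mean()
```

Points whose compression-ratio profile has unusually low variance share no common signal with the rest, which is the basis of the paper's outlier-detection heuristic.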
null | https://openreview.net/forum?id=a3cauWMXNV | @inproceedings{
navarro2024fair,
title={Fair {GLASSO}: Estimating Fair Graphical Models with Unbiased Statistical Behavior},
author={Madeline Navarro and Samuel Rey and Andrei Buciulea and Antonio Marques and Santiago Segarra},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=a3cauWMXNV}
} | We propose estimating Gaussian graphical models (GGMs) that are fair with respect to sensitive nodal attributes. Many real-world models exhibit unfair discriminatory behavior due to biases in data. Such discrimination is known to be exacerbated when data is equipped with pairwise relationships encoded in a graph. Additionally, the effect of biased data on graphical models is largely underexplored. We thus introduce fairness for graphical models in the form of two bias metrics to promote balance in statistical similarities across nodal groups with different sensitive attributes. Leveraging these metrics, we present Fair GLASSO, a regularized graphical lasso approach to obtain sparse Gaussian precision matrices with unbiased statistical dependencies across groups. We also propose an efficient proximal gradient algorithm to obtain the estimates. Theoretically, we express the tradeoff between fair and accurate estimated precision matrices. Critically, this includes demonstrating when accuracy can be preserved in the presence of a fairness regularizer. On top of this, we study the complexity of Fair GLASSO and demonstrate that our algorithm enjoys a fast convergence rate. Our empirical validation includes synthetic and real-world simulations that illustrate the value and effectiveness of our proposed optimization problem and iterative algorithm. | Fair GLASSO: Estimating Fair Graphical Models with Unbiased Statistical Behavior | [
"Madeline Navarro",
"Samuel Rey",
"Andrei Buciulea",
"Antonio Marques",
"Santiago Segarra"
] | NeurIPS.cc/2024/Conference | 2406.09513 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=a2ccaXTb4I | @inproceedings{
li2024reconstruction,
title={Reconstruction of Manipulated Garment with Guided Deformation Prior},
author={Ren Li and Corentin Dumery and Zhantao Deng and Pascal Fua},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=a2ccaXTb4I}
} | Modeling the shape of garments has received much attention, but most existing approaches assume the garments to be worn by someone, which constrains the range of shapes they can assume. In this work, we address shape recovery when garments are being manipulated instead of worn, which gives rise to an even larger range of possible shapes. To this end, we leverage the implicit sewing patterns (ISP) model for garment modeling and extend it by adding a diffusion-based deformation prior to represent these shapes. To recover 3D garment shapes from incomplete 3D point clouds acquired when the garment is folded, we map the points to UV space, in which our priors are learned, to produce partial UV maps, and then fit the priors to recover complete UV maps and 2D to 3D mappings. Experimental results demonstrate the superior reconstruction accuracy of our method compared to previous ones, especially when dealing with large non-rigid deformations arising from the manipulations. | Reconstruction of Manipulated Garment with Guided Deformation Prior | [
"Ren Li",
"Corentin Dumery",
"Zhantao Deng",
"Pascal Fua"
] | NeurIPS.cc/2024/Conference | 2405.10934 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=a1wf2N967T | @inproceedings{
xie2024graphbased,
title={Graph-based Unsupervised Disentangled Representation Learning via Multimodal Large Language Models},
author={Baao Xie and Qiuyu Chen and Yunnan Wang and Zequn Zhang and Xin Jin and Wenjun Zeng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=a1wf2N967T}
} | Disentangled representation learning (DRL) aims to identify and decompose underlying factors behind observations, thus facilitating data perception and generation. However, current DRL approaches often rely on the unrealistic assumption that semantic factors are statistically independent. In reality, these factors may exhibit correlations, which off-the-shelf solutions have yet to properly address. To tackle this challenge, we introduce a bidirectional weighted graph-based framework, to learn factorized attributes and their interrelations within complex data. Specifically, we propose a $\beta$-VAE based module to extract factors as the initial nodes of the graph, and leverage the multimodal large language model (MLLM) to discover and rank latent correlations, thereby updating the weighted edges. By integrating these complementary modules, our model successfully achieves fine-grained, practical and unsupervised disentanglement. Experiments demonstrate our method's superior performance in disentanglement and reconstruction. Furthermore, the model inherits enhanced interpretability and generalizability from MLLMs. | Graph-based Unsupervised Disentangled Representation Learning via Multimodal Large Language Models | [
"Baao Xie",
"Qiuyu Chen",
"Yunnan Wang",
"Zequn Zhang",
"Xin Jin",
"Wenjun Zeng"
] | NeurIPS.cc/2024/Conference | 2407.18999 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=a17biETKyI | @inproceedings{
joo2024improving,
title={Improving self-training under distribution shifts via anchored confidence with theoretical guarantees},
author={Taejong Joo and Diego Klabjan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=a17biETKyI}
} | Self-training often falls short under distribution shifts due to an increased discrepancy between prediction confidence and actual accuracy. This typically necessitates computationally demanding methods such as neighborhood or ensemble-based label corrections. Drawing inspiration from insights on early learning regularization, we develop a principled method to improve self-training under distribution shifts based on temporal consistency. Specifically, we build an uncertainty-aware temporal ensemble with a simple relative thresholding. Then, this ensemble smooths noisy pseudo labels to promote selective temporal consistency. We show that our temporal ensemble is asymptotically correct and our label smoothing technique can reduce the optimality gap of self-training. Our extensive experiments validate that our approach consistently improves self-training performance by 8% to 16% across diverse distribution shift scenarios without computational overhead. Moreover, our method exhibits attractive properties, such as improved calibration performance and robustness to different hyperparameter choices. | Improving self-training under distribution shifts via anchored confidence with theoretical guarantees | [
"Taejong Joo",
"Diego Klabjan"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
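The temporal ensemble with relative thresholding described in the abstract above can be sketched as an exponential moving average of per-sample prediction probabilities. This is our reading of the idea, with illustrative constants, not the paper's algorithm.

```python
import numpy as np

def update_temporal_ensemble(ensemble, probs, step, momentum=0.9, rel_thresh=0.5):
    """One EMA update of a temporal pseudo-label ensemble (illustrative sketch).

    Predictions are folded into an exponential moving average; a simple
    relative threshold keeps only pseudo-labels whose (debiased) ensemble
    confidence reaches rel_thresh times the current batch maximum.
    """
    ensemble = momentum * ensemble + (1 - momentum) * probs
    debiased = ensemble / (1 - momentum ** (step + 1))   # EMA bias correction
    conf = debiased.max(axis=1)
    keep = conf >= rel_thresh * conf.max()               # relative thresholding
    return ensemble, debiased.argmax(axis=1), keep

rng = np.random.default_rng(0)
n_samples, n_classes = 6, 3
ensemble = np.zeros((n_samples, n_classes))
for step in range(5):
    probs = rng.dirichlet(np.ones(n_classes), size=n_samples)
    ensemble, pseudo, keep = update_temporal_ensemble(ensemble, probs, step)
```

Averaging over time anchors noisy per-step confidences, and the relative (rather than absolute) threshold adapts automatically as the shift makes all confidences drop.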
null | https://openreview.net/forum?id=ZzgbUDspzJ | @inproceedings{
zhang2024parameterized,
title={Parameterized Approximation Schemes for Fair-Range Clustering},
author={Zhen Zhang and Xiaohong Chen and Limei Liu and Jie Chen and Junyu Huang and Qilong Feng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZzgbUDspzJ}
} | Fair-range clustering extends classical clustering formulations by associating each data point with one or several demographic labels. It imposes lower and upper bound constraints on the number of opened facilities associated with each label, ensuring fair representation of all demographic groups by the opened facilities. In this paper we focus on the fair-range $k$-median and $k$-means problems in Euclidean spaces. We give $(1+\varepsilon)$-approximation algorithms with fixed-parameter tractable running times for both problems, parameterized by the numbers of opened facilities and demographic labels. For Euclidean metrics, these are the first parameterized approximation schemes for the problems, improving upon the previously known $O(1)$-approximation ratios given by Thejaswi et al. (KDD 2022). | Parameterized Approximation Schemes for Fair-Range Clustering | [
"Zhen Zhang",
"Xiaohong Chen",
"Limei Liu",
"Jie Chen",
"Junyu Huang",
"Qilong Feng"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ZyR0sRQrDd | @inproceedings{
wang2024opus,
title={{OPUS}: Occupancy Prediction Using a Sparse Set},
author={JiaBao Wang and Zhaojiang Liu and Qiang Meng and Liujiang Yan and Ke Wang and JIE YANG and Wei Liu and Qibin Hou and Ming-Ming Cheng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZyR0sRQrDd}
} | Occupancy prediction, aiming at predicting the occupancy status within a voxelized 3D environment, is quickly gaining momentum within the autonomous driving community. Mainstream occupancy prediction works first discretize the 3D environment into voxels, then perform classification on such dense grids. However, inspection of sample data reveals that the vast majority of voxels are unoccupied. Performing classification on these empty voxels leads to suboptimal allocation of computational resources, and reducing such empty voxels necessitates complex algorithm designs. To this end, we present a novel perspective on the occupancy prediction task: formulating it as a streamlined set prediction paradigm without the need for explicit space modeling or complex sparsification procedures. Our proposed framework, called OPUS, utilizes a transformer encoder-decoder architecture to simultaneously predict occupied locations and classes using a set of learnable queries. Firstly, we employ the Chamfer distance loss to scale the set-to-set comparison problem to unprecedented magnitudes, making end-to-end training of such a model a reality. Subsequently, semantic classes are adaptively assigned using nearest neighbor search based on the learned locations. In addition, OPUS incorporates a suite of non-trivial strategies to enhance model performance, including coarse-to-fine learning, consistent point sampling, and adaptive re-weighting. Finally, compared with current state-of-the-art methods, our lightest model achieves superior RayIoU on the Occ3D-nuScenes dataset at near 2x FPS, while our heaviest model surpasses previous best results by 6.1 RayIoU. | OPUS: Occupancy Prediction Using a Sparse Set | [
"JiaBao Wang",
"Zhaojiang Liu",
"Qiang Meng",
"Liujiang Yan",
"Ke Wang",
"JIE YANG",
"Wei Liu",
"Qibin Hou",
"Ming-Ming Cheng"
] | NeurIPS.cc/2024/Conference | 2409.09350 | [
"https://github.com/jbwang1997/OPUS"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
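The Chamfer distance loss the OPUS abstract relies on for set-to-set comparison has a compact standard form; a minimal NumPy version (the paper's training loss is the batched, differentiable analogue):

```python
import numpy as np

def chamfer_distance(pred, target):
    """Symmetric Chamfer distance between two 3-D point sets.

    Each point is matched to its nearest neighbor in the other set, so the
    loss needs no fixed ordering or bipartite matching between the sets.
    """
    d = np.linalg.norm(pred[:, None, :] - target[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

gt = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
assert chamfer_distance(gt, gt) == 0.0            # identical sets match exactly
assert chamfer_distance(gt + [0., 0., 0.5], gt) > 0.0
```

Because each direction only takes a per-point minimum, the cost is O(|pred| * |target|) with no assignment problem to solve, which is what lets it scale to very large predicted sets.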
null | https://openreview.net/forum?id=ZxtaNh5UYB | @inproceedings{
qiao2024learn,
title={Learn more, but bother less: parameter efficient continual learning},
author={Fuli Qiao and Mehrdad Mahdavi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZxtaNh5UYB}
} | Large Language Models (LLMs) have demonstrated profound capabilities due to their extensive pre-training on diverse corpora. However, LLMs often struggle with catastrophic forgetting when engaged in sequential task learning. In this paper, we propose a novel parameter-efficient approach for continual learning in LLMs, which empirically investigates knowledge transfer from previously learned tasks to new tasks through low-rank matrix parameters, enhancing the learning of new tasks without significant interference. Our method employs sensitivity-based analysis of low-rank matrix parameters to identify knowledge-specific parameters between sequential tasks, which are used to initialize the low-rank matrix parameters in new tasks. To maintain orthogonality and minimize forgetting, we further involve the gradient projection technique that keeps the low-rank subspaces of each new task orthogonal to those of previous tasks. Our experimental results on continual learning benchmarks validate the efficacy of our proposed method, which outperforms existing state-of-the-art methods in reducing forgetting, enhancing task performance, and preserving the model's ability to generalize to unseen tasks. | Learn more, but bother less: parameter efficient continual learning | [
"Fuli Qiao",
"Mehrdad Mahdavi"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ZxZOvVOiiL | @inproceedings{
agarwal2024mutliarmed,
title={Multi-Armed Bandits with Network Interference},
author={Abhineet Agarwal and Anish Agarwal and Lorenzo Masoero and Justin Whitehouse},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZxZOvVOiiL}
} | Online experimentation with interference is a common challenge in modern applications such as e-commerce and adaptive clinical trials in medicine. For example, in online marketplaces, the revenue of a good depends on discounts applied to competing goods. Statistical inference with interference is widely studied in the offline setting, but far less is known about how to adaptively assign treatments to minimize regret. We address this gap by studying a multi-armed bandit (MAB) problem where a learner (e-commerce platform) sequentially assigns one of $\mathcal{A}$ possible actions (discounts) to $N$ units (goods) over $T$ rounds to minimize regret (maximize revenue). Unlike traditional MAB problems, the reward of each unit depends on the treatments assigned to other units, i.e., there is *interference* across the underlying network of units. With $\mathcal{A}$ actions and $N$ units, minimizing regret is combinatorially difficult since the action space grows as $\mathcal{A}^N$. To overcome this issue, we study a *sparse network interference* model, where the reward of a unit is only affected by the treatments assigned to $s$ neighboring units. We use tools from discrete Fourier analysis to develop a sparse linear representation of the unit-specific reward $r_n: [\mathcal{A}]^N \rightarrow \mathbb{R} $, and propose simple, linear regression-based algorithms to minimize regret. Importantly, our algorithms achieve provably low regret both when the learner observes the interference neighborhood for all units and when it is unknown. This significantly generalizes other works on this topic which impose strict conditions on the strength of interference on a *known* network, and also compare regret to a markedly weaker optimal action.
Empirically, we corroborate our theoretical findings via numerical simulations. | Multi-Armed Bandits with Network Interference | [
"Abhineet Agarwal",
"Anish Agarwal",
"Lorenzo Masoero",
"Justin Whitehouse"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ZxVrkm7Bjl | @inproceedings{
csord{\'a}s2024moeut,
title={Mo{EUT}: Mixture-of-Experts Universal Transformers},
author={R{\'o}bert Csord{\'a}s and Kazuki Irie and J{\"u}rgen Schmidhuber and Christopher Potts and Christopher D Manning},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZxVrkm7Bjl}
} | Previous work on Universal Transformers (UTs) has demonstrated the importance of parameter sharing across layers. By allowing recurrence in depth, UTs have advantages over standard Transformers in learning compositional generalizations, but layer-sharing comes with a practical limitation of parameter-compute ratio: it drastically reduces the parameter count compared to the non-shared model with the same dimensionality. Naively scaling up the layer size to compensate for the loss of parameters makes its computational resource requirements prohibitive. In practice, no previous work has succeeded in proposing a shared-layer Transformer design that is competitive in parameter count-dominated tasks such as language modeling. Here we propose MoEUT (pronounced "moot"), an effective mixture-of-experts (MoE)-based shared-layer Transformer architecture, which combines several recent advances in MoEs for both feedforward and attention layers of standard Transformers together with novel layer-normalization and grouping schemes that are specific and crucial to UTs. The resulting UT model, for the first time, slightly outperforms standard Transformers on language modeling tasks such as BLiMP and PIQA, while using significantly less compute and memory. | MoEUT: Mixture-of-Experts Universal Transformers | [
"Róbert Csordás",
"Kazuki Irie",
"Jürgen Schmidhuber",
"Christopher Potts",
"Christopher D Manning"
] | NeurIPS.cc/2024/Conference | 2405.16039 | [
"https://github.com/robertcsordas/moeut"
] | https://huggingface.co/papers/2405.16039 | 0 | 0 | 1 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
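The mixture-of-experts routing at the heart of the MoEUT abstract above can be sketched generically (this is a plain top-k MoE feedforward, not the exact MoEUT design with its UT-specific normalization and grouping): a gate scores all experts per token, only the top-k are evaluated, and their outputs are mixed with renormalized gate weights. In a shared-layer setting, the same expert pool is reused at every depth, restoring the parameter count lost to layer sharing without widening the layer.

```python
import numpy as np

def moe_layer(x, gate_W, experts, k=2):
    """Minimal top-k mixture-of-experts feedforward (generic sketch)."""
    scores = x @ gate_W                          # (tokens, n_experts) gate logits
    top = np.argsort(scores, axis=-1)[:, -k:]    # top-k expert ids per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        w = np.exp(scores[t, top[t]] - scores[t, top[t]].max())
        w /= w.sum()                             # renormalize over chosen experts
        for wi, e in zip(w, top[t]):
            out[t] += wi * np.tanh(x[t] @ experts[e])
    return out

rng = np.random.default_rng(0)
T, d, E = 4, 8, 6
x = rng.normal(size=(T, d))
gate_W = rng.normal(size=(d, E))
experts = rng.normal(size=(E, d, d)) / np.sqrt(d)
y = moe_layer(x, gate_W, experts)
```

Only k of the E experts run per token, so compute scales with k while the parameter count scales with E, which is the trade-off that makes shared-layer models competitive on parameter-dominated tasks.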
null | https://openreview.net/forum?id=ZwiG9KjfHV | @inproceedings{
xu2024onebit,
title={OneBit: Towards Extremely Low-bit Large Language Models},
author={Yuzhuang Xu and Xu Han and Zonghan Yang and Shuo Wang and Qingfu Zhu and Zhiyuan Liu and Weidong Liu and Wanxiang Che},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZwiG9KjfHV}
} | Model quantization uses low bit-width values to represent the weight matrices of existing models to be quantized, which is a promising approach to reduce both storage and computational overheads of deploying highly anticipated LLMs. However, current quantization methods suffer severe performance degradation when the bit-width is extremely reduced, and thus focus on utilizing 4-bit or 8-bit values to quantize models. This paper boldly quantizes the weight matrices of LLMs to 1-bit, paving the way for the extremely low bit-width deployment of LLMs. For this target, we introduce a 1-bit model compression framework named OneBit, including a novel 1-bit parameter representation method to better quantize LLMs as well as an effective parameter initialization method based on matrix decomposition to improve the convergence speed of the quantization framework. Extensive experimental results indicate that OneBit achieves good performance (at least 81% of the non-quantized performance on LLaMA models) with robust training processes when only using 1-bit weight matrices. | OneBit: Towards Extremely Low-bit Large Language Models | [
"Yuzhuang Xu",
"Xu Han",
"Zonghan Yang",
"Shuo Wang",
"Qingfu Zhu",
"Zhiyuan Liu",
"Weidong Liu",
"Wanxiang Che"
] | NeurIPS.cc/2024/Conference | 2402.11295 | [
"https://github.com/xuyuzhuang11/onebit"
] | https://huggingface.co/papers/2402.11295 | 4 | 23 | 7 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
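The matrix-decomposition-based initialization mentioned in the OneBit abstract can be illustrated with a simplified sketch (our simplification, not the paper's exact decomposition): approximate a weight matrix by its {-1, +1} sign matrix scaled by two full-precision value vectors, obtained from the best rank-1 fit to the magnitudes via SVD.

```python
import numpy as np

def onebit_style_quantize(W):
    """1-bit weight decomposition in the spirit of OneBit (simplified).

    W is approximated by sign(W) * (g h^T): a 1-bit sign matrix plus two
    full-precision value vectors from the best rank-1 SVD fit to |W|.
    """
    S = np.sign(W)                                     # the 1-bit part
    U, s, Vt = np.linalg.svd(np.abs(W), full_matrices=False)
    g = U[:, 0] * np.sqrt(s[0])                        # per-row scale vector
    h = Vt[0] * np.sqrt(s[0])                          # per-column scale vector
    return S, g, h

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
S, g, h = onebit_style_quantize(W)
W_hat = S * np.outer(g, h)                             # dequantized approximation
rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
```

Storage drops from one float per weight to one bit per weight plus two short float vectors; in OneBit the decomposition serves as the initialization for quantization-aware training rather than the final answer.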
null | https://openreview.net/forum?id=ZwS2y21mZV | @inproceedings{
jiang2024approximation,
title={Approximation Rate of the Transformer Architecture for Sequence Modeling},
author={Haotian Jiang and Qianxiao Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZwS2y21mZV}
} | The Transformer architecture is widely applied in sequence modeling applications, yet the theoretical understanding of its working principles remains limited. In this work, we investigate the approximation rate for single-layer Transformers with one head. We consider general non-linear relationships and identify a novel notion of complexity measures to establish an explicit Jackson-type approximation rate estimate for the Transformer. This rate reveals the structural properties of the Transformer and suggests the types of sequential relationships it is best suited for approximating. In particular, the results on approximation rates enable us to concretely analyze the differences between the Transformer and classical sequence modeling methods, such as recurrent neural networks. | Approximation Rate of the Transformer Architecture for Sequence Modeling | [
"Haotian Jiang",
"Qianxiao Li"
] | NeurIPS.cc/2024/Conference | 2305.18475 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ZvQ4Bn75kN | @inproceedings{
xiao2024video,
title={Video Diffusion Models are Training-free Motion Interpreter and Controller},
author={Zeqi Xiao and Yifan Zhou and Shuai Yang and Xingang Pan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZvQ4Bn75kN}
} | Video generation primarily aims to model authentic and customized motion across frames, making understanding and controlling the motion a crucial topic. Most diffusion-based studies on video motion focus on motion customization with training-based paradigms, which, however, demand substantial training resources and necessitate retraining for diverse models. Crucially, these approaches do not explore how video diffusion models encode cross-frame motion information in their features, lacking interpretability and transparency in their effectiveness. To answer this question, this paper introduces a novel perspective to understand, localize, and manipulate motion-aware features in video diffusion models. Through analysis using Principal Component Analysis (PCA), our work discloses that robust motion-aware features already exist in video diffusion models. We present a new MOtion FeaTure (MOFT) by eliminating content correlation information and filtering motion channels. MOFT provides a distinct set of benefits, including the ability to encode comprehensive motion information with clear interpretability, extraction without the need for training, and generalizability across diverse architectures. Leveraging MOFT, we propose a novel training-free video motion control framework. Our method demonstrates competitive performance in generating natural and faithful motion, providing architecture-agnostic insights and applicability in a variety of downstream tasks. | Video Diffusion Models are Training-free Motion Interpreter and Controller | [
"Zeqi Xiao",
"Yifan Zhou",
"Shuai Yang",
"Xingang Pan"
] | NeurIPS.cc/2024/Conference | 2405.14864 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ZupoMzMNrO | @inproceedings{
ma2024learningtocache,
title={Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching},
author={Xinyin Ma and Gongfan Fang and Michael Bi Mi and Xinchao Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZupoMzMNrO}
} | Diffusion Transformers have recently demonstrated unprecedented generative capabilities for various tasks. The encouraging results, however, come with the cost of slow inference, since each denoising step requires inference on a transformer model with a large scale of parameters. In this study, we make an interesting and somehow surprising observation: the computation of a large proportion of layers in the diffusion transformer, through introducing a caching mechanism, can be readily removed even without updating the model parameters. In the case of U-ViT-H/2, for example, we may remove up to 93.68% of the computation in the cache steps (46.84% for all steps), with less than 0.01 drop in FID. To achieve this, we introduce a novel scheme, named Learning-to-Cache (L2C), that learns to conduct caching in a dynamic manner for diffusion transformers. Specifically, by leveraging the identical structure of layers in transformers and the sequential nature of diffusion, we explore redundant computations between timesteps by treating each layer as the fundamental unit for caching. To address the challenge of the exponential search space in deep models for identifying layers to cache and remove, we propose a novel differentiable optimization objective. An input-invariant yet timestep-variant router is then optimized, which can finally produce a static computation graph. Experimental results show that L2C largely outperforms samplers such as DDIM and DPM-Solver, alongside prior cache-based methods at the same inference speed. | Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching | [
"Xinyin Ma",
"Gongfan Fang",
"Michael Bi Mi",
"Xinchao Wang"
] | NeurIPS.cc/2024/Conference | 2406.01733 | [
"https://github.com/horseee/learning-to-cache"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ZulWEWQOp9 | @inproceedings{
lin2024ctrlx,
title={Ctrl-X: Controlling Structure and Appearance for Text-To-Image Generation Without Guidance},
author={Kuan Heng Lin and Sicheng Mo and Ben Klingher and Fangzhou Mu and Bolei Zhou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZulWEWQOp9}
} | Recent controllable generation approaches such as FreeControl and Diffusion Self-Guidance bring fine-grained spatial and appearance control to text-to-image (T2I) diffusion models without training auxiliary modules. However, these methods optimize the latent embedding for each type of score function with longer diffusion steps, making the generation process time-consuming and limiting their flexibility and use. This work presents *Ctrl-X*, a simple framework for T2I diffusion controlling structure and appearance without additional training or guidance. Ctrl-X designs feed-forward structure control to enable the structure alignment with a structure image and semantic-aware appearance transfer to facilitate the appearance transfer from a user-input image. Extensive qualitative and quantitative experiments illustrate the superior performance of Ctrl-X on various condition inputs and model checkpoints. In particular, Ctrl-X supports novel structure and appearance control with arbitrary condition images of any modality, exhibits superior image quality and appearance transfer compared to existing works, and provides instant plug-and-play functionality to any T2I and text-to-video (T2V) diffusion model. See our project page for the code and an overview of the results: https://genforce.github.io/ctrl-x | Ctrl-X: Controlling Structure and Appearance for Text-To-Image Generation Without Guidance | [
"Kuan Heng Lin",
"Sicheng Mo",
"Ben Klingher",
"Fangzhou Mu",
"Bolei Zhou"
] | NeurIPS.cc/2024/Conference | 2406.07540 | [
""
] | https://huggingface.co/papers/2406.07540 | 1 | 1 | 0 | 5 | [] | [] | [
"multimodalart/ctrl-x",
"multimodalart/multimodalart-ctrl-x"
] | [] | [] | [
"multimodalart/ctrl-x",
"multimodalart/multimodalart-ctrl-x"
] | 1 | poster |
null | https://openreview.net/forum?id=ZtTWKr51yH | @inproceedings{
simonetto2024constrained,
title={Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data},
author={Thibault Simonetto and Salah GHAMIZI and Maxime Cordy},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZtTWKr51yH}
} | State-of-the-art deep learning models for tabular data have recently achieved acceptable performance to be deployed in industrial settings. However, the robustness of these models remains scarcely explored. Contrary to computer vision, there are no effective attacks to properly evaluate the adversarial robustness of deep tabular models due to intrinsic properties of tabular data, such as categorical features, immutability, and feature relationship constraints. To fill this gap, we first propose CAPGD, a gradient attack that overcomes the failures of existing gradient attacks with adaptive mechanisms. This new attack does not require parameter tuning and further degrades the accuracy, up to 81\% points compared to the previous gradient attacks. Second, we design CAA, an efficient evasion attack that combines our CAPGD attack and MOEVA, the best search-based attack. We demonstrate the effectiveness of our attacks on five architectures and four critical use cases. Our empirical study demonstrates that CAA outperforms all existing attacks in 17 over the 20 settings, and leads to a drop in the accuracy by up to 96.1\% points and 21.9\% points compared to CAPGD and MOEVA respectively while being up to five times faster than MOEVA. Given the effectiveness and efficiency of our new attacks, we argue that they should become the minimal test for any new defense or robust architectures in tabular machine learning. | Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data | [
"Thibault Simonetto",
"Salah GHAMIZI",
"Maxime Cordy"
] | NeurIPS.cc/2024/Conference | 2406.00775 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=ZtDARpmbun | @inproceedings{
shi2024prospective,
title={Prospective Representation Learning for Non-Exemplar Class-Incremental Learning},
author={Wuxuan Shi and Mang Ye},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZtDARpmbun}
} | Non-exemplar class-incremental learning (NECIL) is a challenging task that requires recognizing both old and new classes without retaining any old class samples. Current works mainly deal with the conflicts between old and new classes retrospectively as a new task comes in. However, the lack of old task data makes balancing old and new classes difficult. Instead, we propose a Prospective Representation Learning (PRL) approach to prepare the model for handling conflicts in advance. In the base phase, we squeeze the embedding distribution of the current classes to reserve space for forward compatibility with future classes. In the incremental phase, we make the new class features away from the saved prototypes of old classes in a latent space while aligning the current embedding space with the latent space when updating the model. Thereby, the new class features are clustered in the reserved space to minimize the shock of the new classes on the former classes. Our approach can help existing NECIL baselines to balance old and new classes in a plug-and-play manner. Extensive experiments on several benchmarks demonstrate that our approach outperforms the state-of-the-art methods. | Prospective Representation Learning for Non-Exemplar Class-Incremental Learning | [
"Wuxuan Shi",
"Mang Ye"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ZsxZ65YqL1 | @inproceedings{
lan2024criticeval,
title={CriticEval: Evaluating Large-scale Language Model as Critic},
author={Tian Lan and Wenwei Zhang and Chen Xu and Heyan Huang and Dahua Lin and Kai Chen and Xian-Ling Mao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZsxZ65YqL1}
} | Critique ability, i.e., the capability of Large Language Models (LLMs) to identify and rectify flaws in responses, is crucial for their applications in self-improvement and scalable oversight. While numerous studies have been proposed to evaluate critique ability of LLMs, their comprehensiveness and reliability are still limited. To overcome this problem, we introduce CriticEval, a novel benchmark designed to comprehensively and reliably evaluate critique ability of LLMs. Specifically, to ensure the comprehensiveness, CriticEval evaluates critique ability from four dimensions across nine diverse task scenarios. It evaluates both scalar-valued and textual critiques, targeting responses of varying quality. To ensure the reliability, a large number of critiques are annotated to serve as references, enabling GPT-4 to evaluate textual critiques reliably. Extensive evaluations of open-source and closed-source LLMs first validate the reliability of evaluation in CriticEval. Then, experimental results demonstrate the promising potential of open-source LLMs, the effectiveness of critique datasets and several intriguing relationships between the critique ability and some critical factors, including task types, response qualities and critique dimensions. | CriticEval: Evaluating Large-scale Language Model as Critic | [
"Tian Lan",
"Wenwei Zhang",
"Chen Xu",
"Heyan Huang",
"Dahua Lin",
"Kai Chen",
"Xian-Ling Mao"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ZsS0megTsh | @inproceedings{
liang2024speechforensics,
title={SpeechForensics: Audio-Visual Speech Representation Learning for Face Forgery Detection},
author={Yachao Liang and Min Yu and Gang Li and Jianguo Jiang and Boquan Li and Feng Yu and Ning Zhang and Xiang Meng and Weiqing Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZsS0megTsh}
} | Detection of face forgery videos remains a formidable challenge in the field of digital forensics, especially the generalization to unseen datasets and common perturbations. In this paper, we tackle this issue by leveraging the synergy between audio and visual speech elements, embarking on a novel approach through audio-visual speech representation learning. Our work is motivated by the finding that audio signals, enriched with speech content, can provide precise information effectively reflecting facial movements. To this end, we first learn precise audio-visual speech representations on real videos via a self-supervised masked prediction task, which encodes both local and global semantic information simultaneously. Then, the derived model is directly transferred to the forgery detection task. Extensive experiments demonstrate that our method outperforms the state-of-the-art methods in terms of cross-dataset generalization and robustness, without the participation of any fake video in model training. | SpeechForensics: Audio-Visual Speech Representation Learning for Face Forgery Detection | [
"Yachao Liang",
"Min Yu",
"Gang Li",
"Jianguo Jiang",
"Boquan Li",
"Feng Yu",
"Ning Zhang",
"Xiang Meng",
"Weiqing Huang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ZpVTRQVX5b | @inproceedings{
le2024transvip,
title={Trans{VIP}: Speech to Speech Translation System with Voice and Isochrony Preservation},
author={Chenyang Le and Yao Qian and Dongmei Wang and Long Zhou and Shujie LIU and Xiaofei Wang and Midia Yousefi and Yanmin Qian and Jinyu Li and Michael Zeng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZpVTRQVX5b}
} | There is a rising interest and trend in research towards directly translating speech from one language to another, known as end-to-end speech-to-speech translation. However, most end-to-end models struggle to outperform cascade models, i.e., a pipeline framework by concatenating speech recognition, machine translation and text-to-speech models. The primary challenges stem from the inherent complexities involved in direct translation tasks and the scarcity of data. In this study, we introduce a novel model framework TransVIP that leverages diverse datasets in a cascade fashion yet facilitates end-to-end inference through joint probability. Furthermore, we propose two separated encoders to preserve the speaker’s voice characteristics and isochrony from the source speech during the translation process, making it highly suitable for scenarios such as video dubbing. Our experiments on the French-English language pair demonstrate that our model outperforms the current state-of-the-art speech-to-speech translation model. | TransVIP: Speech to Speech Translation System with Voice and Isochrony Preservation | [
"Chenyang Le",
"Yao Qian",
"Dongmei Wang",
"Long Zhou",
"Shujie LIU",
"Xiaofei Wang",
"Midia Yousefi",
"Yanmin Qian",
"Jinyu Li",
"Michael Zeng"
] | NeurIPS.cc/2024/Conference | 2405.17809 | [
"https://github.com/nethermanpro/transvip"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ZoarR5QmFX | @inproceedings{
li2024concentrate,
title={Concentrate Attention: Towards Domain-Generalizable Prompt Optimization for Language Models},
author={Chengzhengxu Li and Xiaoming Liu and Zhaohan Zhang and Yichen Wang and Chen Liu and Yu Lan and Chao Shen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZoarR5QmFX}
} | Recent advances in prompt optimization have notably enhanced the performance of pre-trained language models (PLMs) on downstream tasks. However, the potential of optimized prompts on domain generalization has been under-explored. To explore the nature of prompt generalization on unknown domains, we conduct pilot experiments and find that (i) Prompts gaining more attention weight from PLMs’ deep layers are more generalizable and (ii) Prompts with more stable attention distributions in PLMs’ deep layers are more generalizable. Thus, we offer a fresh objective towards domain-generalizable prompt optimization named ''Concentration'', which represents the ''lookback'' attention from the current decoding token to the prompt tokens, to increase the attention strength on prompts and reduce the fluctuation of attention distribution.
We adapt this new objective to popular soft prompt and hard prompt optimization methods, respectively. Extensive experiments demonstrate that our idea improves comparison prompt optimization methods by 1.42% for soft prompt generalization and 2.16% for hard prompt generalization in accuracy on the multi-source domain generalization setting, while maintaining satisfying in-domain performance. The promising results validate the effectiveness of our proposed prompt optimization objective and provide key insights into domain-generalizable prompts. | Concentrate Attention: Towards Domain-Generalizable Prompt Optimization for Language Models | [
"Chengzhengxu Li",
"Xiaoming Liu",
"Zhaohan Zhang",
"Yichen Wang",
"Chen Liu",
"Yu Lan",
"Chao Shen"
] | NeurIPS.cc/2024/Conference | 2406.10584 | [
"https://github.com/czx-li/Concentrate-Attention"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ZmIAd3JaZN | @inproceedings{
zhu2024truthful,
title={Truthful High Dimensional Sparse Linear Regression},
author={Liyang Zhu and Amina Manseur and Meng Ding and Jinyan Liu and Jinhui Xu and Di Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZmIAd3JaZN}
} | We study the problem of fitting the high dimensional sparse linear regression model, where the data are provided by strategic or self-interested agents (individuals) who prioritize their privacy of data disclosure. In contrast to the classical setting, our focus is on designing mechanisms that can effectively incentivize most agents to truthfully report their data while preserving the privacy of individual reports. Simultaneously, we seek an estimator which should be close to the underlying parameter.
We attempt to solve the problem by deriving a novel private estimator that has a closed-form expression.
Based on the estimator, we propose a mechanism which has the following properties via some appropriate design of the computation and payment scheme: (1) the mechanism is $(o(1), O(n^{-\Omega({1})}))$-jointly differentially private, where $n$ is the number of agents; (2) it is an $o(\frac{1}{n})$-approximate Bayes Nash equilibrium for a $(1-o(1))$-fraction of agents to truthfully report their data; (3) the output could achieve an error of $o(1)$ to the underlying parameter; (4) it is individually rational for a $(1-o(1))$ fraction of agents in the mechanism; (5) the payment budget required from the analyst to run the mechanism is $o(1)$. To the best of our knowledge, this is the first study on designing truthful (and privacy-preserving) mechanisms for high dimensional sparse linear regression. | Truthful High Dimensional Sparse Linear Regression | [
"Liyang Zhu",
"Amina Manseur",
"Meng Ding",
"Jinyan Liu",
"Jinhui Xu",
"Di Wang"
] | NeurIPS.cc/2024/Conference | 2410.13046 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ZlpJLQsr2v | @inproceedings{
guo2024generalizable,
title={Generalizable Implicit Motion Modeling for Video Frame Interpolation},
author={Zujin Guo and Wei Li and Chen Change Loy},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZlpJLQsr2v}
} | Motion modeling is critical in flow-based Video Frame Interpolation (VFI). Existing paradigms either consider linear combinations of bidirectional flows or directly predict bilateral flows for given timestamps without exploring favorable motion priors, thus lacking the capability of effectively modeling spatiotemporal dynamics in real-world videos. To address this limitation, in this study, we introduce Generalizable Implicit Motion Modeling (GIMM), a novel and effective approach to motion modeling for VFI. Specifically, to enable GIMM as an effective motion modeling paradigm, we design a motion encoding pipeline to model spatiotemporal motion latent from bidirectional flows extracted from pre-trained flow estimators, effectively representing input-specific motion priors. Then, we implicitly predict arbitrary-timestep optical flows within two adjacent input frames via an adaptive coordinate-based neural network, with spatiotemporal coordinates and motion latent as inputs. Our GIMM can be easily integrated with existing flow-based VFI works by supplying accurately modeled motion. We show that GIMM performs better than the current state of the art on standard VFI benchmarks. | Generalizable Implicit Motion Modeling for Video Frame Interpolation | [
"Zujin Guo",
"Wei Li",
"Chen Change Loy"
] | NeurIPS.cc/2024/Conference | 2407.08680 | [
""
] | https://huggingface.co/papers/2407.08680 | 1 | 9 | 2 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=ZjgcYMkCmX | @inproceedings{
lazzati2024how,
title={How does Inverse {RL} Scale to Large State Spaces? A Provably Efficient Approach},
author={Filippo Lazzati and Mirco Mutti and Alberto Maria Metelli},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZjgcYMkCmX}
} | In online Inverse Reinforcement Learning (IRL), the learner can collect samples about the dynamics of the environment to improve its
estimate of the reward function. Since IRL suffers from identifiability issues, many theoretical works on online IRL focus on estimating the entire set of rewards that explain the demonstrations, named the *feasible reward set*. However, none of the algorithms available in literature can scale to problems with large state spaces. In this paper, we focus on the online IRL problem in Linear Markov Decision
Processes (MDPs). We show that the structure offered by Linear MDPs is not sufficient for efficiently estimating the feasible set when the state space is large. As a consequence, we introduce the novel framework of *rewards compatibility*, which generalizes the notion of feasible set, and we develop CATY-IRL, a sample efficient algorithm whose complexity is independent of the size of the state space in Linear MDPs. When restricted to the tabular setting, we demonstrate that CATY-IRL is minimax optimal up to logarithmic factors. As a by-product, we show that Reward-Free Exploration (RFE) enjoys the same worst-case rate, improving over the state-of-the-art lower bound. Finally, we devise a unifying framework for IRL and RFE that may be of independent interest. | How does Inverse RL Scale to Large State Spaces? A Provably Efficient Approach | [
"Filippo Lazzati",
"Mirco Mutti",
"Alberto Maria Metelli"
] | NeurIPS.cc/2024/Conference | 2406.03812 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ZizwgYErtQ | @inproceedings{
liu2024contextual,
title={Contextual Active Model Selection},
author={Xuefeng Liu and Fangfang Xia and Rick L. Stevens and Yuxin Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZizwgYErtQ}
} | While training models and labeling data are resource-intensive, a wealth of pre-trained models and unlabeled data exists. To effectively utilize these resources, we present an approach to actively select pre-trained models while minimizing labeling costs. We frame this as an online contextual active model selection problem: At each round, the learner receives an unlabeled data point as a context. The objective is to adaptively select the best model to make a prediction while limiting label requests. To tackle this problem, we propose CAMS, a contextual active model selection algorithm that relies on two novel components: (1) a contextual model selection mechanism, which leverages context information to make informed decisions about which model is likely to perform best for a given context, and (2)
an active query component, which strategically chooses when to request labels for data points, minimizing the overall labeling cost. We provide rigorous theoretical analysis for the regret and query complexity under both adversarial and stochastic settings. Furthermore, we demonstrate the effectiveness of our algorithm on a diverse collection of benchmark classification tasks. Notably, CAMS requires substantially less labeling effort (less than 10%) compared to existing methods on CIFAR10 and DRIFT benchmarks, while achieving similar or better accuracy. | Contextual Active Model Selection | [
"Xuefeng Liu",
"Fangfang Xia",
"Rick L. Stevens",
"Yuxin Chen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ZgtLQQR1K7 | @inproceedings{
liu2024vmamba,
title={{VM}amba: Visual State Space Model},
author={Yue Liu and Yunjie Tian and Yuzhong Zhao and Hongtian Yu and Lingxi Xie and Yaowei Wang and Qixiang Ye and Jianbin Jiao and Yunfan Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZgtLQQR1K7}
} | Designing computationally efficient network architectures remains an ongoing necessity in computer vision. In this paper, we adapt Mamba, a state-space language model, into VMamba, a vision backbone with linear time complexity. At the core of VMamba is a stack of Visual State-Space (VSS) blocks with the 2D Selective Scan (SS2D) module. By traversing along four scanning routes, SS2D bridges the gap between the ordered nature of 1D selective scan and the non-sequential structure of 2D vision data, which facilitates the collection of contextual information from various sources and perspectives. Based on the VSS blocks, we develop a family of VMamba architectures and accelerate them through a succession of architectural and implementation enhancements. Extensive experiments demonstrate VMamba’s
promising performance across diverse visual perception tasks, highlighting its superior input scaling efficiency compared to existing benchmark models. Source code is available at https://github.com/MzeroMiko/VMamba | VMamba: Visual State Space Model | [
"Yue Liu",
"Yunjie Tian",
"Yuzhong Zhao",
"Hongtian Yu",
"Lingxi Xie",
"Yaowei Wang",
"Qixiang Ye",
"Jianbin Jiao",
"Yunfan Liu"
] | NeurIPS.cc/2024/Conference | 2401.10166 | [
"https://github.com/mzeromiko/vmamba"
] | https://huggingface.co/papers/2401.10166 | 2 | 38 | 2 | 8 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=ZgDNrpS46k | @inproceedings{
kosson2024analyzing,
title={Analyzing \& Reducing the Need for Learning Rate Warmup in {GPT} Training},
author={Atli Kosson and Bettina Messmer and Martin Jaggi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZgDNrpS46k}
} | Learning Rate Warmup is a popular heuristic for training neural networks, especially at larger batch sizes, despite limited understanding of its benefits. Warmup decreases the update size $\Delta \mathbf{w}_t = \eta_t \mathbf{u}_t$ early in training by using lower values for the learning rate $\eta_t$. In this work we argue that warmup benefits training by keeping the overall size of $\Delta \mathbf{w}_t$ limited, counteracting large initial values of $\mathbf{u}_t$. Focusing on small-scale GPT training with AdamW/Lion, we explore the following question: *Why and by which criteria are early updates $\mathbf{u}_t$ too large?* We analyze different metrics for the update size including the $\ell_2$-norm, resulting directional change, and impact on the representations of the network, providing a new perspective on warmup. In particular, we find that warmup helps counteract large angular updates as well as a limited critical batch size early in training. Finally, we show that the need for warmup can be significantly reduced or eliminated by modifying the optimizer to explicitly normalize $\mathbf{u}_t$ based on the aforementioned metrics. | Analyzing & Reducing the Need for Learning Rate Warmup in GPT Training | [
"Atli Kosson",
"Bettina Messmer",
"Martin Jaggi"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Zg4zs0l2iH | @inproceedings{
nguyen2024cyclo,
title={{CYCLO}: Cyclic Graph Transformer Approach to Multi-Object Relationship Modeling in Aerial Videos},
author={Trong-Thuan Nguyen and Pha Nguyen and Xin Li and Jackson Cothren and Alper Yilmaz and Khoa Luu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Zg4zs0l2iH}
} | Video scene graph generation (VidSGG) has emerged as a transformative approach to capturing and interpreting the intricate relationships among objects and their temporal dynamics in video sequences. In this paper, we introduce the new AeroEye dataset that focuses on multi-object relationship modeling in aerial videos. Our AeroEye dataset features various drone scenes and includes a visually comprehensive and precise collection of predicates that capture the intricate relationships and spatial arrangements among objects. To this end, we propose the novel Cyclic Graph Transformer (CYCLO) approach that allows the model to capture both direct and long-range temporal dependencies by continuously updating the history of interactions in a circular manner. The proposed approach also allows one to handle sequences with inherent cyclical patterns and process object relationships in the correct sequential order. Therefore, it can effectively capture periodic and overlapping relationships while minimizing information loss. The extensive experiments on the AeroEye dataset demonstrate the effectiveness of the proposed CYCLO model, demonstrating its potential to perform scene understanding on drone videos. Finally, the CYCLO method consistently achieves State-of-the-Art (SOTA) results on two in-the-wild scene graph generation benchmarks, i.e., PVSG and ASPIRe. | CYCLO: Cyclic Graph Transformer Approach to Multi-Object Relationship Modeling in Aerial Videos | [
"Trong-Thuan Nguyen",
"Pha Nguyen",
"Xin Li",
"Jackson Cothren",
"Alper Yilmaz",
"Khoa Luu"
] | NeurIPS.cc/2024/Conference | 2406.01029 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ZfXRAqbBKX | @inproceedings{
shi2024ircan,
title={{IRCAN}: Mitigating Knowledge Conflicts in {LLM} Generation via Identifying and Reweighting Context-Aware Neurons},
author={Dan Shi and Renren Jin and Tianhao Shen and Weilong Dong and Xinwei Wu and Deyi Xiong},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZfXRAqbBKX}
} | It is widely acknowledged that large language models (LLMs) encode a vast reservoir of knowledge after being trained on mass data. Recent studies disclose knowledge conflicts in LLM generation, wherein outdated or incorrect parametric knowledge (i.e., encoded knowledge) contradicts new knowledge provided in the context. To mitigate such knowledge conflicts, we propose a novel framework, IRCAN (Identifying and Reweighting Context-Aware Neurons) to capitalize on neurons that are crucial in processing contextual cues. Specifically, IRCAN first identifies neurons that significantly contribute to context processing, utilizing a context-aware attribution score derived from integrated gradients. Subsequently, the identified context-aware neurons are strengthened via reweighting. In doing so, we steer LLMs to generate context-sensitive outputs with respect to the new knowledge provided in the context. Extensive experiments conducted across a variety of models and tasks demonstrate that IRCAN not only achieves remarkable improvements in handling knowledge conflicts but also offers a scalable, plug-and-play solution that can be integrated seamlessly with existing models. Our codes are released at https://github.com/danshi777/IRCAN. | IRCAN: Mitigating Knowledge Conflicts in LLM Generation via Identifying and Reweighting Context-Aware Neurons | [
"Dan Shi",
"Renren Jin",
"Tianhao Shen",
"Weilong Dong",
"Xinwei Wu",
"Deyi Xiong"
] | NeurIPS.cc/2024/Conference | 2406.18406 | [
"https://github.com/danshi777/ircan"
] | https://huggingface.co/papers/2406.18406 | 1 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=ZfRGRK5Kxl | @inproceedings{
patel2024tripletclip,
title={Triplet{CLIP}: Improving Compositional Reasoning of {CLIP} via Synthetic Vision-Language Negatives},
author={Maitreya Patel and Naga Sai Abhiram kusumba and Sheng Cheng and Changhoon Kim and Tejas Gokhale and Chitta Baral and Yezhou Yang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZfRGRK5Kxl}
} | Contrastive Language-Image Pretraining (CLIP) models maximize the mutual information between text and visual modalities to learn representations. This makes the nature of the training data a significant factor in the efficacy of CLIP for downstream tasks. However, the lack of compositional diversity in contemporary image-text datasets limits the compositional reasoning ability of CLIP. We show that generating ``hard'' negative captions via in-context learning and synthesizing corresponding negative images with text-to-image generators offers a solution. We introduce a novel contrastive pre-training strategy that leverages these hard negative captions and images in an alternating fashion to train CLIP. We demonstrate that our method, named TripletCLIP, when applied to existing datasets such as CC3M and CC12M, enhances the compositional capabilities of CLIP, resulting in an absolute improvement of over 9% on the SugarCrepe benchmark on an equal computational budget, as well as improvements in zero-shot image classification and image retrieval. Our code, models, and data are available at: tripletclip.github.io. | TripletCLIP: Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives | [
"Maitreya Patel",
"Naga Sai Abhiram kusumba",
"Sheng Cheng",
"Changhoon Kim",
"Tejas Gokhale",
"Chitta Baral",
"Yezhou Yang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ZfBuhzE556 | @inproceedings{
wu2024betadpo,
title={$\beta$-{DPO}: Direct Preference Optimization with Dynamic $\beta$},
author={Junkang Wu and Yuexiang Xie and Zhengyi Yang and Jiancan Wu and Jinyang Gao and Bolin Ding and Xiang Wang and Xiangnan He},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZfBuhzE556}
} | Direct Preference Optimization (DPO) has emerged as a compelling approach for training Large Language Models (LLMs) to adhere to human preferences. However, the performance of DPO is sensitive to the fine-tuning of its trade-off parameter $\beta$, as well as to the quality of the preference data. We analyze the impact of $\beta$ and data quality on DPO, uncovering that optimal $\beta$ values vary with the informativeness of pairwise data. Addressing the limitations of static $\beta$ values, we introduce a novel framework that dynamically calibrates $\beta$ at the batch level, informed by data quality considerations. Additionally, our method incorporates $\beta$-guided data filtering to safeguard against the influence of outliers. Through empirical evaluation, we demonstrate that our dynamic $\beta$ adjustment technique significantly improves DPO’s performance across a range of models and datasets, offering a more robust and adaptable training paradigm for aligning LLMs with human feedback. The code is available at \url{https://anonymous.4open.science/r/beta-DPO-EE6C}. | β-DPO: Direct Preference Optimization with Dynamic β | [
"Junkang Wu",
"Yuexiang Xie",
"Zhengyi Yang",
"Jiancan Wu",
"Jinyang Gao",
"Bolin Ding",
"Xiang Wang",
"Xiangnan He"
] | NeurIPS.cc/2024/Conference | [
"https://github.com/junkangwu/beta-dpo"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ZeihWodDVh | @inproceedings{
pooladzandi2024puregen,
title={PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics},
author={Omead Pooladzandi and Sunay Gajanan Bhat and Jeffrey Jiang and Alexander Branch and Gregory Pottie},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZeihWodDVh}
} | Train-time data poisoning attacks threaten machine learning models by introducing adversarial examples during training, leading to misclassification. Current defense methods often reduce generalization performance, are attack-specific, and impose significant training overhead. To address this, we introduce a set of universal data purification methods using a stochastic transform, $\Psi(x)$, realized via iterative Langevin dynamics of Energy-Based Models (EBMs), Denoising Diffusion Probabilistic Models (DDPMs), or both. These approaches purify poisoned data with minimal impact on classifier generalization. Our specially trained EBMs and DDPMs provide state-of-the-art defense against various attacks (including Narcissus, Bullseye Polytope, Gradient Matching) on CIFAR-10, Tiny-ImageNet, and CINIC-10, without needing attack or classifier-specific information. We discuss performance trade-offs and show that our methods remain highly effective even with poisoned or distributionally shifted generative model training data. | PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics | [
"Omead Pooladzandi",
"Sunay Gajanan Bhat",
"Jeffrey Jiang",
"Alexander Branch",
"Gregory Pottie"
] | NeurIPS.cc/2024/Conference | 2405.18627 | [
"https://github.com/SunayBhat1/PureGen_PoisonDefense"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ZehccYKkNH | @inproceedings{
arnal2024wasserstein,
title={Wasserstein convergence of Cech persistence diagrams for samplings of submanifolds},
author={Charles Arnal and David Cohen-Steiner and Vincent Divol},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ZehccYKkNH}
} | Cech persistence diagrams (PDs) are topological descriptors routinely used to capture the geometry of complex datasets. They are commonly compared using the Wasserstein distances $\mathrm{OT}_p$; however, the extent to which PDs are stable with respect to these metrics remains poorly understood.
We partially close this gap by focusing on the case where datasets are sampled on an $m$-dimensional submanifold of $\mathbb{R}^d$. Under this manifold hypothesis, we show that convergence with respect to the $\mathrm{OT}_p$ metric happens exactly when $p>m$. We also provide improvements upon the bottleneck stability theorem in this case and prove new laws of large numbers for the total $\alpha$-persistence of PDs. Finally, we show how these theoretical findings shed new light on the behavior of the feature maps on the space of PDs that are used in ML-oriented applications of Topological Data Analysis. | Wasserstein convergence of Cech persistence diagrams for samplings of submanifolds | [
"Charles Arnal",
"David Cohen-Steiner",
"Vincent Divol"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |