bibtex_url (null) | proceedings (string, 42 chars) | bibtext (string, 197–848 chars) | abstract (string, 303–3.45k chars) | title (string, 10–159 chars) | authors (sequence, 1–34 items, nullable) | id (string, 44 classes) | arxiv_id (string, 0–10 chars) | GitHub (sequence, 1 item) | paper_page (string, 899 classes) | n_linked_authors (int64, -1 to 13) | upvotes (int64, -1 to 109) | num_comments (int64, -1 to 13) | n_authors (int64, -1 to 92) | Models (sequence, 0–100 items) | Datasets (sequence, 0–19 items) | Spaces (sequence, 0–100 items) | old_Models (sequence, 0–100 items) | old_Datasets (sequence, 0–19 items) | old_Spaces (sequence, 0–100 items) | paper_page_exists_pre_conf (int64, 0–1) | type (string, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
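Each record below fills these columns in order. As a minimal sketch of how the rows might be consumed programmatically (the repository id is a placeholder, not a confirmed path), one could load and filter them with the Hugging Face `datasets` library:

```python
# Hedged sketch, assuming the table is hosted as a standard HF dataset.
# The repo id below is hypothetical -- substitute the actual dataset path.
from datasets import load_dataset

ds = load_dataset("username/neurips-2024-papers", split="train")

# Columns mirror the schema above, e.g. `title`, `type`, `GitHub`, `upvotes`.
orals = ds.filter(lambda row: row["type"] == "oral")
with_code = orals.filter(lambda row: any(u for u in row["GitHub"]))

for row in with_code:
    print(f'{row["title"]} -> {row["GitHub"][0]}')
```

Note that `-1` in `upvotes`, `num_comments`, and the other integer columns appears to act as a sentinel for rows without a linked paper page, so numeric filters should exclude those values first.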
null | https://openreview.net/forum?id=S8wFXyT4dY | @inproceedings{
song2024pplns,
title={{PPLN}s: Parametric Piecewise Linear Networks for Event-Based Temporal Modeling and Beyond},
author={Chen Song and Zhenxiao Liang and Bo Sun and Qixing Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=S8wFXyT4dY}
} | We present Parametric Piecewise Linear Networks (PPLNs) for temporal vision inference. Motivated by the neuromorphic principles that regulate biological neural behaviors, PPLNs are ideal for processing data captured by event cameras, which are built to simulate neural activities in the human retina. We discuss how to represent the membrane potential of an artificial neuron by a parametric piecewise linear function with learnable coefficients. This design echoes the idea of building deep models from learnable parametric functions recently popularized by Kolmogorov–Arnold Networks (KANs). Experiments demonstrate the state-of-the-art performance of PPLNs in event-based and image-based vision applications, including steering prediction, human pose estimation, and motion deblurring. | PPLNs: Parametric Piecewise Linear Networks for Event-Based Temporal Modeling and Beyond | [
"Chen Song",
"Zhenxiao Liang",
"Bo Sun",
"Qixing Huang"
] | NeurIPS.cc/2024/Conference | 2409.19772 | [
"https://github.com/chensong1995/ppln"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=S8SEjerTTg | @inproceedings{
li2024cloud,
title={Cloud Object Detector Adaptation by Integrating Different Source Knowledge},
author={Shuaifeng Li and Mao Ye and Lihua Zhou and Nianxin Li and Siying Xiao and Song Tang and Xiatian Zhu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=S8SEjerTTg}
} | We propose to explore an interesting and promising problem, Cloud Object Detector Adaptation (CODA), where the target domain leverages detections provided by a large cloud model to build a target detector. Despite its powerful generalization capability, the cloud model still cannot achieve error-free detection in a specific target domain. In this work, we present a novel Cloud Object detector adaptation method by Integrating different source kNowledge (COIN). The key idea is to incorporate a public vision-language model (CLIP) to distill positive knowledge while refining negative knowledge for adaptation via self-promotion gradient direction alignment. To that end, knowledge dissemination, separation, and distillation are carried out successively. Knowledge dissemination combines knowledge from the cloud detector and the CLIP model to initialize a target detector and a CLIP detector in the target domain. By matching the CLIP detector with the cloud detector, knowledge separation categorizes detections into three parts: consistent, inconsistent, and private detections, so that a divide-and-conquer strategy can be used for knowledge distillation. Consistent and private detections are directly used to train the target detector, while inconsistent detections are fused by a consistent knowledge generation network, which is trained by aligning the gradient direction of inconsistent detections to that of consistent detections, because the latter provides a direction toward an optimal target detector. Experimental results demonstrate that the proposed COIN method achieves state-of-the-art performance. | Cloud Object Detector Adaptation by Integrating Different Source Knowledge | [
"Shuaifeng Li",
"Mao Ye",
"Lihua Zhou",
"Nianxin Li",
"Siying Xiao",
"Song Tang",
"Xiatian Zhu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=S7THlpvH8i | @inproceedings{
gray2024normalization,
title={Normalization Layer Per-Example Gradients are Sufficient to Predict Gradient Noise Scale in Transformers},
author={Gavia Gray and Aman Tiwari and Shane Bergsma and Joel Hestness},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=S7THlpvH8i}
} | Per-example gradient norms are a vital ingredient for estimating gradient noise scale (GNS) with minimal variance. Observing the tensor contractions required to compute them, we propose a method with minimal FLOPs in 3D or greater tensor regimes by simultaneously computing the norms while computing the parameter gradients. Using this method we are able to observe the GNS of different layers at higher accuracy than previously possible. We find that the total GNS of contemporary transformer models is predicted well by the GNS of only the normalization layers. As a result, focusing only on the normalization layer, we develop a custom kernel to compute the per-example gradient norms while performing the LayerNorm backward pass with zero throughput overhead. Tracking GNS on only those layers, we are able to guide a practical batch size schedule that reduces training time by 18% on a Chinchilla-optimal language model. | Normalization Layer Per-Example Gradients are Sufficient to Predict Gradient Noise Scale in Transformers | [
"Gavia Gray",
"Aman Tiwari",
"Shane Bergsma",
"Joel Hestness"
] | NeurIPS.cc/2024/Conference | 2411.00999 | [
"https://github.com/cerebrasresearch/nanogns"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=S6YLeBMoWF | @inproceedings{
huang2024a,
title={A versatile informative diffusion model for single-cell {ATAC}-seq data generation and analysis},
author={Lei huang and Lei Xiong and Na Sun and Zunpeng Liu and Ka-Chun Wong and Manolis Kellis},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=S6YLeBMoWF}
} | The rapid advancement of single-cell ATAC sequencing (scATAC-seq) technologies holds great promise for investigating the heterogeneity of epigenetic landscapes at the cellular level. The amplification process in scATAC-seq experiments often introduces noise due to dropout events, which results in extreme sparsity that hinders accurate analysis. Consequently, there is a significant demand for the generation of high-quality scATAC-seq data in silico. Furthermore, current methodologies are typically task-specific, lacking a versatile framework capable of handling multiple tasks within a single model. In this work, we propose ATAC-Diff, a versatile framework based on a diffusion model conditioned on latent auxiliary variables to adapt to various tasks. ATAC-Diff is the first diffusion model for scATAC-seq data generation and analysis; it is composed of auxiliary modules that encode latent high-level variables, enabling the model to learn semantic information and sample high-quality data. With a Gaussian Mixture Model (GMM) as the latent prior and an auxiliary decoder, the yielded variables preserve refined genomic information beneficial for downstream analyses. Another innovation is the incorporation of mutual information between observed and hidden variables as a regularization term to prevent the model from decoupling from the latent variables. Through extensive experiments, we demonstrate that ATAC-Diff achieves high performance in both generation and analysis tasks, outperforming state-of-the-art models. | A versatile informative diffusion model for single-cell ATAC-seq data generation and analysis | [
"Lei huang",
"Lei Xiong",
"Na Sun",
"Zunpeng Liu",
"Ka-Chun Wong",
"Manolis Kellis"
] | NeurIPS.cc/2024/Conference | 2408.14801 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=S5coB5kqSD | @inproceedings{
yuzhe2024vexkd,
title={Ve{XKD}: The Versatile Integration of Cross-Modal Fusion and Knowledge Distillation for 3D Perception},
author={JI Yuzhe and Yijie CHEN and Liuqing Yang and Rui Ding and Meng Yang and Xinhu Zheng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=S5coB5kqSD}
} | Recent advancements in 3D perception have led to a proliferation of network architectures, particularly those involving multi-modal fusion algorithms. While these fusion algorithms improve accuracy, their complexity often impedes real-time performance. This paper introduces VeXKD, an effective and Versatile framework that integrates Cross-Modal Fusion with Knowledge Distillation. VeXKD applies knowledge distillation exclusively to the Bird's Eye View (BEV) feature maps, enabling the transfer of cross-modal insights to single-modal students without additional inference time overhead. It avoids volatile components that can vary across various 3D perception tasks and student modalities, thus improving versatility. The framework adopts a modality-general cross-modal fusion module to bridge the modality gap between the multi-modal teachers and single-modal students. Furthermore, leveraging byproducts generated during fusion, our BEV query guided mask generation network identifies crucial spatial locations across different BEV feature maps in a data-driven manner, significantly enhancing the effectiveness of knowledge distillation. Extensive experiments on the nuScenes dataset demonstrate notable improvements, with up to 6.9\%/4.2\% increase in mAP and NDS for 3D detection tasks and up to 4.3\% rise in mIoU for BEV map segmentation tasks, narrowing the performance gap with multi-modal models. | VeXKD: The Versatile Integration of Cross-Modal Fusion and Knowledge Distillation for 3D Perception | [
"JI Yuzhe",
"Yijie CHEN",
"Liuqing Yang",
"Rui Ding",
"Meng Yang",
"Xinhu Zheng"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=S4ZqnMywcM | @inproceedings{
fan2024spatiotemporal,
title={Spatio-Temporal Interactive Learning for Efficient Image Reconstruction of Spiking Cameras},
author={Bin Fan and Jiaoyang Yin and Yuchao Dai and Chao Xu and Tiejun Huang and Boxin Shi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=S4ZqnMywcM}
} | The spiking camera is an emerging neuromorphic vision sensor that records high-speed motion scenes by asynchronously firing continuous binary spike streams. Prevailing image reconstruction methods, generating intermediate frames from these spike streams, often rely on complex step-by-step network architectures that overlook the intrinsic collaboration of spatio-temporal complementary information. In this paper, we propose an efficient spatio-temporal interactive reconstruction network to jointly perform inter-frame feature alignment and intra-frame feature filtering in a coarse-to-fine manner. Specifically, it starts by extracting hierarchical features from a concise hybrid spike representation, then refines the motion fields and target frames scale-by-scale, ultimately obtaining a full-resolution output. Meanwhile, we introduce a symmetric interactive attention block and a multi-motion field estimation block to further enhance the interaction capability of the overall network. Experiments on synthetic and real-captured data show that our approach exhibits excellent performance while maintaining low model complexity. | Spatio-Temporal Interactive Learning for Efficient Image Reconstruction of Spiking Cameras | [
"Bin Fan",
"Jiaoyang Yin",
"Yuchao Dai",
"Chao Xu",
"Tiejun Huang",
"Boxin Shi"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=S4YRCLbUK1 | @inproceedings{
saxon2024who,
title={Who Evaluates the Evaluations? Objectively Scoring Text-to-Image Prompt Coherence Metrics with T2{IS}coreScore ({TS}2)},
author={Michael Saxon and Fatima Jahara and Mahsa Khoshnoodi and Yujie Lu and Aditya Sharma and William Yang Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=S4YRCLbUK1}
} | With advances in the quality of text-to-image (T2I) models has come interest in benchmarking their prompt faithfulness---the semantic coherence of generated images to the prompts they were conditioned on. A variety of T2I faithfulness metrics have been proposed, leveraging advances in cross-modal embeddings and vision-language models (VLMs). However, these metrics are not rigorously compared and benchmarked, instead presented with correlation to human Likert scores over a set of easy-to-discriminate images against seemingly weak baselines.
We introduce T2IScoreScore, a curated set of semantic error graphs containing a prompt and a set of increasingly erroneous images. These allow us to rigorously judge whether a given prompt faithfulness metric can correctly order images with respect to their objective error count and significantly discriminate between different error nodes, using meta-metric scores derived from established statistical tests. Surprisingly, we find that the state-of-the-art VLM-based metrics (e.g., TIFA, DSG, LLMScore, VIEScore) we tested fail to significantly outperform simple (and supposedly worse) feature-based metrics like CLIPScore, particularly on a hard subset of naturally-occurring T2I model errors. TS2 will enable the development of better T2I prompt faithfulness metrics through more rigorous comparison of their conformity to expected orderings and separations under objective criteria. | Who Evaluates the Evaluations? Objectively Scoring Text-to-Image Prompt Coherence Metrics with T2IScoreScore (TS2) | [
"Michael Saxon",
"Fatima Jahara",
"Mahsa Khoshnoodi",
"Yujie Lu",
"Aditya Sharma",
"William Yang Wang"
] | NeurIPS.cc/2024/Conference | 2404.04251 | [
"https://github.com/michaelsaxon/T2IScoreScore"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=S3HvA808gk | @inproceedings{
mcdermott2024a,
title={A Closer Look at {AUROC} and {AUPRC} under Class Imbalance},
author={Matthew B.A. McDermott and Haoran Zhang and Lasse Hyldig Hansen and Giovanni Angelotti and Jack Gallifant},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=S3HvA808gk}
} | In machine learning (ML), a widespread claim is that the area under the precision-recall curve (AUPRC) is a superior metric for model comparison to the area under the receiver operating characteristic (AUROC) for tasks with class imbalance. This paper refutes this notion on two fronts. First, we theoretically characterize the behavior of AUROC and AUPRC in the presence of model mistakes, establishing clearly that AUPRC is not generally superior in cases of class imbalance. We further show that AUPRC can be a harmful metric as it can unduly favor model improvements in subpopulations with more frequent positive labels, heightening algorithmic disparities. Next, we empirically support our theory using experiments on both semi-synthetic and real-world fairness datasets. Prompted by these insights, we conduct a review of over 1.5 million scientific papers to understand the origin of this invalid claim, finding that it is often made without citation, misattributed to papers that do not argue this point, and aggressively over-generalized from source arguments. Our findings represent a dual contribution: a significant technical advancement in understanding the relationship between AUROC and AUPRC and a stark warning about unchecked assumptions in the ML community. | A Closer Look at AUROC and AUPRC under Class Imbalance | [
"Matthew B.A. McDermott",
"Haoran Zhang",
"Lasse Hyldig Hansen",
"Giovanni Angelotti",
"Jack Gallifant"
] | NeurIPS.cc/2024/Conference | 2401.06091 | [
"https://github.com/mmcdermott/auc_is_all_you_need"
] | https://huggingface.co/papers/2401.06091 | 0 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=S2P6KPLtm8 | @inproceedings{
xie2024identification,
title={Identification and Estimation of the Bi-Directional {MR} with Some Invalid Instruments},
author={Feng Xie and Zhen Yao and Lin Xie and Yan Zeng and Zhi Geng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=S2P6KPLtm8}
} | We consider the challenging problem of estimating causal effects from purely observational data in the bi-directional Mendelian randomization (MR), where some invalid instruments, as well as unmeasured confounding, usually exist.
To address this problem, most existing methods attempt to find proper valid instrumental variables (IVs) for the target causal effect by expert knowledge or by assuming that the causal model is a one-directional MR model.
Unlike these approaches, in this paper we first theoretically investigate the identification of the bi-directional MR from observational data. In particular, we provide necessary and sufficient conditions under which valid IV sets are correctly identified, such that the bi-directional MR model is identifiable, including the causal directions of a pair of phenotypes (i.e., the treatment and outcome).
Moreover, based on the identification theory, we develop a cluster fusion-like method to discover valid IV sets and estimate the causal effects of interest.
We theoretically demonstrate the correctness of the proposed algorithm.
Experimental results show the effectiveness of our method for estimating causal effects in both one-directional and bi-directional MR models. | Identification and Estimation of the Bi-Directional MR with Some Invalid Instruments | [
"Feng Xie",
"Zhen Yao",
"Lin Xie",
"Yan Zeng",
"Zhi Geng"
] | NeurIPS.cc/2024/Conference | 2407.07933 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=S1fc92uemC | @inproceedings{
yu2024rankrag,
title={Rank{RAG}: Unifying Context Ranking with Retrieval-Augmented Generation in {LLM}s},
author={Yue Yu and Wei Ping and Zihan Liu and Boxin Wang and Jiaxuan You and Chao Zhang and Mohammad Shoeybi and Bryan Catanzaro},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=S1fc92uemC}
} | Large language models (LLMs) typically utilize the top-k contexts from a retriever in retrieval-augmented generation (RAG). In this work, we propose a novel method called RankRAG, which instruction-tunes a single LLM for both context ranking and answer generation in RAG. In particular, the instruction-tuned LLMs work surprisingly well by adding a small fraction of ranking data into the training blend, and outperform existing expert ranking models, including the same LLM exclusively fine-tuned on a large amount of ranking data. For generation, we compare our model with many strong baselines, including ChatQA-1.5, an open-sourced model with the state-of-the-art performance on RAG benchmarks. Specifically, our Llama3-RankRAG-8B and Llama3-RankRAG-70B significantly outperform Llama3-ChatQA-1.5-8B and Llama3-ChatQA-1.5-70B, respectively, on nine general knowledge-intensive benchmarks for RAG. In addition, it also performs comparably to GPT-4 on five RAG benchmarks in the biomedical domain without instruction fine-tuning on biomedical data, demonstrating its superb capability for generalization to new domains. | RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in LLMs | [
"Yue Yu",
"Wei Ping",
"Zihan Liu",
"Boxin Wang",
"Jiaxuan You",
"Chao Zhang",
"Mohammad Shoeybi",
"Bryan Catanzaro"
] | NeurIPS.cc/2024/Conference | 2407.02485 | [
""
] | https://huggingface.co/papers/2407.02485 | 3 | 5 | 1 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=S0Ci1AsJL5 | @inproceedings{
samsonov2024gaussian,
title={Gaussian Approximation and Multiplier Bootstrap for Polyak-Ruppert Averaged Linear Stochastic Approximation with Applications to {TD} Learning},
author={Sergey Samsonov and Eric Moulines and Qi-Man Shao and Zhuo-Song Zhang and Alexey Naumov},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=S0Ci1AsJL5}
} | In this paper, we obtain the Berry–Esseen bound for multivariate normal approximation for the Polyak-Ruppert averaged iterates of the linear stochastic approximation (LSA) algorithm with decreasing step size. Moreover, we prove the non-asymptotic validity of the confidence intervals for parameter estimation with LSA based on multiplier bootstrap. This procedure updates the LSA estimate together with a set of randomly perturbed LSA estimates upon the arrival of subsequent observations. We illustrate our findings in the setting of temporal difference learning with linear function approximation. | Gaussian Approximation and Multiplier Bootstrap for Polyak-Ruppert Averaged Linear Stochastic Approximation with Applications to TD Learning | [
"Sergey Samsonov",
"Eric Moulines",
"Qi-Man Shao",
"Zhuo-Song Zhang",
"Alexey Naumov"
] | NeurIPS.cc/2024/Conference | 2405.16644 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RzlCqnncQv | @inproceedings{
mahdavi2024leveraging,
title={Leveraging Environment Interaction for Automated {PDDL} Translation and Planning with Large Language Models},
author={Sadegh Mahdavi and Raquel Aoki and Keyi Tang and Yanshuai Cao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RzlCqnncQv}
} | Large Language Models (LLMs) have shown remarkable performance in various natural language tasks, but they often struggle with planning problems that require structured reasoning. To address this limitation, the conversion of planning problems into the Planning Domain Definition Language (PDDL) has been proposed as a potential solution, enabling the use of automated planners. However, generating accurate PDDL files typically demands human inputs or correction, which can be time-consuming and costly. In this paper, we propose a novel approach that leverages LLMs and environment feedback to automatically generate PDDL domain and problem description files without the need for human intervention. Our method introduces an iterative refinement process that generates multiple problem PDDL candidates and progressively refines the domain PDDL based on feedback obtained from interacting with the environment. To guide the refinement process, we develop an Exploration Walk (EW) metric, which provides rich feedback signals for LLMs to update the PDDL file. We evaluate our approach on $10$ PDDL environments. We achieve an average task solve rate of 66\% compared to a 29\% solve rate by GPT-4's intrinsic planning with chain-of-thought prompting. Our work enables the automated modeling of planning environments using LLMs and environment feedback, eliminating the need for human intervention in the PDDL translation process and paving the way for more reliable LLM agents in challenging problems. Our code is available at https://github.com/BorealisAI/llm-pddl-planning | Leveraging Environment Interaction for Automated PDDL Translation and Planning with Large Language Models | [
"Sadegh Mahdavi",
"Raquel Aoki",
"Keyi Tang",
"Yanshuai Cao"
] | NeurIPS.cc/2024/Conference | 2407.12979 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RxkcroC8qP | @inproceedings{
li2024visual,
title={Visual Decoding and Reconstruction via {EEG} Embeddings with Guided Diffusion},
author={Dongyang Li and Chen Wei and Shiying Li and Jiachen Zou and Quanying Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RxkcroC8qP}
} | How to decode human vision from neural signals has attracted long-standing interest in neuroscience and machine learning. Modern contrastive learning and generative models have improved the performance of visual decoding and reconstruction based on functional Magnetic Resonance Imaging (fMRI). However, the high cost and low temporal resolution of fMRI limit its applications in brain-computer interfaces (BCIs), prompting a pressing need for visual decoding based on electroencephalography (EEG). In this study, we present an end-to-end zero-shot EEG-based visual reconstruction framework, consisting of a tailored brain encoder, called the Adaptive Thinking Mapper (ATM), which projects neural signals from different sources into a shared subspace as the CLIP embedding, and a two-stage multi-pipe EEG-to-image generation strategy. In stage one, the EEG is embedded to align with the high-level CLIP embedding, and a prior diffusion model then refines the EEG embedding into image priors; a blurry image is also decoded from the EEG to retain low-level features. In stage two, we input the high-level CLIP embedding, the blurry image, and the caption from the EEG latent into a pre-trained diffusion model. Furthermore, we analyzed the impacts of different time windows and brain regions on decoding and reconstruction. The versatility of our framework is demonstrated on the magnetoencephalogram (MEG) data modality. The experimental results indicate that our EEG-based zero-shot visual framework achieves SOTA performance in classification, retrieval, and reconstruction, highlighting the portability, low cost, and high temporal resolution of EEG, enabling a wide range of BCI applications. Our code is available at https://github.com/ncclab-sustech/EEG_Image_decode. | Visual Decoding and Reconstruction via EEG Embeddings with Guided Diffusion | [
"Dongyang Li",
"Chen Wei",
"Shiying Li",
"Jiachen Zou",
"Quanying Liu"
] | NeurIPS.cc/2024/Conference | 2403.07721 | [
"https://github.com/dongyangli-del/eeg_image_decode"
] | https://huggingface.co/papers/2403.07721 | 0 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=RxXdokK2qz | @inproceedings{
allmeier2024computing,
title={Computing the Bias of Constant-step Stochastic Approximation with Markovian Noise},
author={Sebastian Allmeier and Nicolas Gast},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RxXdokK2qz}
} | We study stochastic approximation algorithms with Markovian noise and constant step-size $\alpha$. We develop a method based on infinitesimal generator comparisons to study the bias of the algorithm, which is the expected difference between $\theta_n$ ---the value at iteration $n$--- and $\theta^*$ ---the unique equilibrium of the corresponding ODE. We show that, under some smoothness conditions, this bias is of order $O(\alpha)$. Furthermore, we show that the time-averaged bias is equal to $\alpha V + O(\alpha^2)$, where $V$ is a constant characterized by a Lyapunov equation, showing that $E[\bar{\theta}_n] \approx \theta^*+V\alpha + O(\alpha^2)$, where $\bar{\theta}_n$ is the Polyak-Ruppert average. We also show that $\bar{\theta}_n$ converges with high probability around $\theta^*+\alpha V$. We illustrate how to combine this with Richardson-Romberg extrapolation to derive an iterative scheme with a bias of order $O(\alpha^2)$. | Computing the Bias of Constant-step Stochastic Approximation with Markovian Noise | [
"Sebastian Allmeier",
"Nicolas Gast"
] | NeurIPS.cc/2024/Conference | 2405.14285 | [
"https://github.com/ngast/paper_bias_stochastic_approximation2024"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RxQoIekEa2 | @inproceedings{
korba2024statistical,
title={Statistical and Geometrical properties of the Kernel Kullback-Leibler divergence},
author={Anna Korba and Francis Bach and Cl{\'e}mentine Chazal},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RxQoIekEa2}
} | In this paper, we study the statistical and geometrical properties of the Kullback-Leibler divergence with kernel covariance operators (KKL) introduced by [Bach, 2022, Information Theory with Kernel Methods]. Unlike the classical Kullback-Leibler (KL) divergence that involves density ratios, the KKL compares probability distributions through covariance operators (embeddings) in a reproducing kernel Hilbert space (RKHS) and computes the quantum Kullback-Leibler divergence.
This novel divergence hence shares parallel but different aspects with both the standard Kullback-Leibler divergence between probability distributions and kernel embedding metrics such as the maximum mean discrepancy.
A limitation of the original KKL divergence is that it is not defined for distributions with disjoint supports. To solve this problem, we propose a regularised variant that guarantees the divergence is well defined for all distributions. We derive bounds that quantify the deviation of the regularised KKL from the original one, as well as concentration bounds.
In addition, we provide a closed-form expression for the regularised KKL, specifically applicable when the distributions consist of finite sets of points, which makes it implementable.
Furthermore, we derive a Wasserstein gradient descent scheme of the KKL divergence in the case of discrete distributions, and study empirically its properties to transport a set of points to a target distribution. | Statistical and Geometrical properties of the Kernel Kullback-Leibler divergence | [
"Anna Korba",
"Francis Bach",
"Clémentine Chazal"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RwgNbIpCpk | @inproceedings{
cunningham2024reparameterized,
title={Reparameterized Multi-Resolution Convolutions for Long Sequence Modelling},
author={Harry Jake Cunningham and Giorgio Giannone and Mingtian Zhang and Marc Peter Deisenroth},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RwgNbIpCpk}
} | Global convolutions have shown increasing promise as powerful general-purpose sequence models. However, training long convolutions is challenging, and kernel parameterizations must be able to learn long-range dependencies without overfitting. This work introduces reparameterized multi-resolution convolutions ($\texttt{MRConv}$), a novel approach to parameterizing global convolutional kernels for long-sequence modeling. By leveraging multi-resolution convolutions, incorporating structural reparameterization and introducing learnable kernel decay, $\texttt{MRConv}$ learns expressive long-range kernels that perform well across various data modalities. Our experiments demonstrate state-of-the-art performance on the Long Range Arena, Sequential CIFAR, and Speech Commands tasks among convolution models and linear-time transformers. Moreover, we report improved performance on ImageNet classification by replacing 2D convolutions with 1D $\texttt{MRConv}$ layers. | Reparameterized Multi-Resolution Convolutions for Long Sequence Modelling | [
"Harry Jake Cunningham",
"Giorgio Giannone",
"Mingtian Zhang",
"Marc Peter Deisenroth"
] | NeurIPS.cc/2024/Conference | 2408.09453 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RwK0tgfptL | @inproceedings{
dhahri2024shaving,
title={Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks using the Marginal Likelihood},
author={Rayen Dhahri and Alexander Immer and Bertrand Charpentier and Stephan G{\"u}nnemann and Vincent Fortuin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RwK0tgfptL}
} | Neural network sparsification is a promising avenue to save computational time and memory costs, especially in an age where many successful AI models are becoming too large to naively deploy on consumer hardware. While much work has focused on different weight pruning criteria, the overall sparsifiability of the network, i.e., its capacity to be pruned without quality loss, has often been overlooked. We present Sparsifiability via the Marginal likelihood (SpaM), a sparsification framework that highlights the effectiveness of using the Bayesian marginal likelihood in conjunction with sparsity-inducing priors for making neural networks more sparsifiable. Our approach implements an automatic Occam's razor that selects the most sparsifiable model that still explains the data well, both for structured and unstructured sparsification. In addition, we demonstrate that the pre-computed posterior precision from the Laplace approximation can be re-used to define a cheap pruning criterion, which outperforms many existing (more expensive) approaches. We demonstrate the effectiveness of our framework, especially at high sparsity levels, across a range of different neural network architectures and datasets. | Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks using the Marginal Likelihood | [
"Rayen Dhahri",
"Alexander Immer",
"Bertrand Charpentier",
"Stephan Günnemann",
"Vincent Fortuin"
] | NeurIPS.cc/2024/Conference | 2402.15978 | [
"https://github.com/fortuinlab/spam-pruning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RwBObRsIzC | @inproceedings{
minixhofer2024zeroshot,
title={Zero-Shot Tokenizer Transfer},
author={Benjamin Minixhofer and Edoardo Ponti and Ivan Vuli{\'c}},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RwBObRsIzC}
} | Language models (LMs) are bound to their tokenizer, which maps raw text to a sequence of vocabulary items (tokens). This restricts their flexibility: for example, LMs trained primarily on English may still perform well in other natural and programming languages, but have vastly decreased efficiency due to their English-centric tokenizer. To mitigate this, we should be able to swap the original LM tokenizer with an arbitrary one, on the fly, without degrading performance. Hence, in this work we define a new problem: Zero-Shot Tokenizer Transfer (ZeTT). The challenge at the core of ZeTT is finding embeddings for the tokens in the vocabulary of the new tokenizer. Since prior heuristics for initializing embeddings often perform at chance level in a ZeTT setting, we propose a new solution: we train a hypernetwork taking a tokenizer as input and predicting the corresponding embeddings. We empirically demonstrate that the hypernetwork generalizes to new tokenizers both with encoder (e.g., XLM-R) and decoder LLMs (e.g., Mistral-7B). Our method comes close to the original models' performance in cross-lingual and coding tasks while markedly reducing the length of the tokenized sequence. We also find that the remaining gap can be quickly closed by continued training on less than 1B tokens. Finally, we show that a ZeTT hypernetwork trained for a base (L)LM can also be applied to fine-tuned variants without extra training. Overall, our results make substantial strides toward detaching LMs from their tokenizer. | Zero-Shot Tokenizer Transfer | [
"Benjamin Minixhofer",
"Edoardo Ponti",
"Ivan Vulić"
] | NeurIPS.cc/2024/Conference | 2405.07883 | [
"https://github.com/bminixhofer/zett"
] | https://huggingface.co/papers/2405.07883 | 1 | 4 | 2 | 3 | [
"benjamin/zett-hypernetwork-multilingual-Mistral-7B-v0.1"
] | [] | [] | [
"benjamin/zett-hypernetwork-multilingual-Mistral-7B-v0.1"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=RvoxlFvnlX | @inproceedings{
huang2024robin,
title={{ROBIN}: Robust and Invisible Watermarks for Diffusion Models with Adversarial Optimization},
author={Huayang Huang and Yu Wu and Qian Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RvoxlFvnlX}
} | Watermarking generative content serves as a vital tool for authentication, ownership protection, and mitigation of potential misuse. Existing watermarking methods face the challenge of balancing robustness and concealment. They empirically inject a watermark that is both invisible and robust and passively achieve concealment by limiting the strength of the watermark, thus reducing the robustness. In this paper, we propose to explicitly introduce a watermark hiding process to actively achieve concealment, thus allowing the embedding of stronger watermarks. To be specific, we implant a robust watermark in an intermediate diffusion state and then guide the model to hide the watermark in the final generated image. We employ an adversarial optimization algorithm to produce the optimal hiding prompt guiding signal for each watermark. The prompt embedding is optimized to minimize artifacts in the generated image, while the watermark is optimized to achieve maximum strength. The watermark can be verified by reversing the generation process. Experiments on various diffusion models demonstrate the watermark remains verifiable even under significant image tampering and shows superior invisibility compared to other state-of-the-art robust watermarking methods. | ROBIN: Robust and Invisible Watermarks for Diffusion Models with Adversarial Optimization | [
"Huayang Huang",
"Yu Wu",
"Qian Wang"
] | NeurIPS.cc/2024/Conference | 2411.03862 | [
"https://github.com/hannah1102/robin"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Rv5dUg4JcZ | @inproceedings{
li2024learning,
title={Learning a Single Neuron Robustly to Distributional Shifts and Adversarial Label Noise},
author={Shuyao Li and Sushrut Karmalkar and Ilias Diakonikolas and Jelena Diakonikolas},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Rv5dUg4JcZ}
} | We study the problem of learning a single neuron with respect to the $L_2^2$-loss in the presence of adversarial distribution shifts, where the labels can be arbitrary, and the goal is to find a "best-fit" function.
More precisely, given training samples from a reference distribution $p_0$,
the goal is to approximate the vector $\mathbf{w}^*$
which minimizes the squared loss with respect to the worst-case distribution
that is close in $\chi^2$-divergence to $p_{0}$.
We design a computationally efficient algorithm that recovers a vector $ \hat{\mathbf{w}}$
satisfying
$\mathbb{E}_{p^*} (\sigma(\hat{\mathbf{w}} \cdot \mathbf{x}) - y)^2 \leq C \hspace{0.2em} \mathbb{E}_{p^*} (\sigma(\mathbf{w}^* \cdot \mathbf{x}) - y)^2 + \epsilon$, where $C>1$ is a dimension-independent constant and $(\mathbf{w}^*, p^*)$ is the witness attaining the min-max risk
$\min_{\mathbf{w}:\|\mathbf{w}\| \leq W} \max_{p} \mathbb{E}_{(\mathbf{x}, y) \sim p} (\sigma(\mathbf{w} \cdot \mathbf{x}) - y)^2 - \nu \chi^2(p, p_0)$.
Our algorithm follows the primal-dual framework and is
designed by directly bounding the risk with respect to the original, nonconvex $L_2^2$ loss.
From an optimization standpoint, our work opens new avenues for the design of primal-dual algorithms under structured nonconvexity. | Learning a Single Neuron Robustly to Distributional Shifts and Adversarial Label Noise | [
"Shuyao Li",
"Sushrut Karmalkar",
"Ilias Diakonikolas",
"Jelena Diakonikolas"
] | NeurIPS.cc/2024/Conference | 2411.06697 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RtMyTzIW6l | @inproceedings{
chen2024symilo,
title={Sym{ILO}: A Symmetry-Aware Learning Framework for Integer Linear Optimization},
author={Qian Chen and Tianjian Zhang and Linxin Yang and Qingyu Han and Akang Wang and Ruoyu Sun and Xiaodong Luo and Tsung-Hui Chang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RtMyTzIW6l}
} | Integer linear programs (ILPs) are commonly employed to model diverse practical problems such as scheduling and planning.
Recently, machine learning techniques have been utilized to solve ILPs. A straightforward idea is to train a model via supervised learning, with an ILP as the input and an optimal solution as the label. An ILP is symmetric if its variables can be permuted without changing the problem structure, resulting in numerous equivalent and optimal solutions. Randomly selecting an optimal solution as the label can introduce variability in the training data, which may hinder the model from learning stable patterns. In this work, we incorporate the intrinsic symmetry of ILPs and propose a novel training framework called SymILO. Specifically, we modify the learning task by introducing solution permutation along with neural network weights as learnable parameters and then design an alternating algorithm to jointly optimize the loss function.
We conduct extensive experiments on ILPs involving different symmetries, and the computational results demonstrate that our symmetry-aware approach significantly outperforms three existing methods, achieving $50.3\%$, $66.5\%$, and $45.4\%$ average improvements, respectively. | SymILO: A Symmetry-Aware Learning Framework for Integer Linear Optimization | [
"Qian Chen",
"Tianjian Zhang",
"Linxin Yang",
"Qingyu Han",
"Akang Wang",
"Ruoyu Sun",
"Xiaodong Luo",
"Tsung-Hui Chang"
] | NeurIPS.cc/2024/Conference | 2409.19678 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Rsb32EBmbj | @inproceedings{
qi2024exploring,
title={Exploring Adversarial Robustness of Deep State Space Models},
author={Biqing Qi and Yiang Luo and Junqi Gao and Pengfei Li and Kai Tian and Zhiyuan Ma and Bowen Zhou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Rsb32EBmbj}
} | Deep State Space Models (SSMs) have proven effective in numerous task scenarios but face significant security challenges due to Adversarial Perturbations (APs) in real-world deployments. Adversarial Training (AT) is a mainstream approach to enhancing Adversarial Robustness (AR) and has been validated on various traditional DNN architectures. However, its effectiveness in improving the AR of SSMs remains unclear.
While many enhancements in SSM components, such as integrating Attention mechanisms and expanding to data-dependent SSM parameterizations, have brought significant gains in Standard Training (ST) settings, their potential benefits in AT remain unexplored. To investigate this, we evaluate existing structural variants of SSMs with AT to assess their AR performance. We observe that pure SSM structures struggle to benefit from AT, whereas incorporating Attention yields a markedly better trade-off between robustness and generalization for SSMs in AT compared to other components. Nonetheless, the integration of Attention also leads to Robust Overfitting (RO) issues.
To understand these phenomena, we empirically and theoretically analyze the output error of SSMs under AP. We find that fixed-parameterized SSMs have output error bounds strictly related to their parameters, limiting their AT benefits, while input-dependent SSMs may face the problem of error explosion. Furthermore, we show that the Attention component effectively scales the output error of SSMs during training, enabling them to benefit more from AT, but at the cost of introducing RO due to its high model complexity.
Inspired by this, we propose a simple and effective Adaptive Scaling (AdS) mechanism that brings AT performance close to Attention-integrated SSMs without introducing the issue of RO. | Exploring Adversarial Robustness of Deep State Space Models | [
"Biqing Qi",
"Yiang Luo",
"Junqi Gao",
"Pengfei Li",
"Kai Tian",
"Zhiyuan Ma",
"Bowen Zhou"
] | NeurIPS.cc/2024/Conference | 2406.05532 | [
"https://github.com/biqing-qi/exploring-adversarial-robustness-of-deep-state-space-models"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RsawwSBCs7 | @inproceedings{
shin2024otter,
title={{OTTER}: Effortless Label Distribution Adaptation of Zero-shot Models},
author={Changho Shin and Jitian Zhao and Sonia Cromp and Harit Vishwakarma and Frederic Sala},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RsawwSBCs7}
} | Popular zero-shot models suffer due to artifacts inherited from pretraining. One particularly detrimental issue, caused by unbalanced web-scale pretraining data, is mismatched label distribution. Existing approaches that seek to repair the label distribution are not suitable in zero-shot settings, as they have mismatching requirements, such as needing access to labeled downstream task data or knowledge of the true label balance in the pretraining distribution. We sidestep these challenges and introduce a simple and lightweight approach to adjust pretrained model predictions via optimal transport. Our technique requires only an estimate of the label distribution of a downstream task. Theoretically, we characterize the improvement produced by our procedure under certain mild conditions and provide bounds on the error caused by misspecification. Empirically, we validate our method in a wide array of zero-shot image and text classification tasks, improving accuracy by 4.8% and 15.9% on average, and beating baselines like prior matching---often by significant margins---in 17 out of 21 datasets. | OTTER: Effortless Label Distribution Adaptation of Zero-shot Models | [
"Changho Shin",
"Jitian Zhao",
"Sonia Cromp",
"Harit Vishwakarma",
"Frederic Sala"
] | NeurIPS.cc/2024/Conference | 2404.08461 | [
"https://github.com/sprocketlab/otter"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RrTjcbcHEH | @inproceedings{
s{\'a}r{\'a}ndi2024neural,
title={Neural Localizer Fields for Continuous 3D Human Pose and Shape Estimation},
author={Istv{\'a}n S{\'a}r{\'a}ndi and Gerard Pons-Moll},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RrTjcbcHEH}
} | With the explosive growth of available training data, single-image 3D human modeling is ahead of a transition to a data-centric paradigm.
A key to successfully exploiting data scale is to design flexible models that can be supervised from various heterogeneous data sources produced by different researchers or vendors.
To this end, we propose a simple yet powerful paradigm for seamlessly unifying different human pose and shape-related tasks and datasets.
Our formulation is centered on the ability, both at training and test time, to query any point of the human volume and obtain its estimated location in 3D.
We achieve this by learning a continuous neural field of body point localizer functions, each of which is a differently parameterized 3D heatmap-based convolutional point localizer (detector).
For generating parametric output, we propose an efficient post-processing step for fitting SMPL-family body models to nonparametric joint and vertex predictions.
With this approach, we can naturally exploit differently annotated data sources including mesh, 2D/3D skeleton and dense pose, without having to convert between them, and thereby train large-scale 3D human mesh and skeleton estimation models that outperform the state-of-the-art on several public benchmarks including 3DPW, EMDB, EHF, SSP-3D and AGORA by a considerable margin.
We release our code and models to foster downstream research. | Neural Localizer Fields for Continuous 3D Human Pose and Shape Estimation | [
"István Sárándi",
"Gerard Pons-Moll"
] | NeurIPS.cc/2024/Conference | 2407.07532 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RqvesBxqDo | @inproceedings{
wu2024qvaemole,
title={{QVAE}-Mole: The Quantum {VAE} with Spherical Latent Variable Learning for 3-D Molecule Generation},
author={Huaijin Wu and Xinyu Ye and Junchi Yan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RqvesBxqDo}
} | Molecule generation, ideally in 3-D form, has enjoyed wide applications in materials science, chemistry, life science, etc. We propose the first quantum parametric circuit for 3-D molecule generation, motivated by its potential quantum advantage, especially considering the arrival of the Noisy Intermediate-Scale Quantum (NISQ) era. We choose the Variational AutoEncoder (VAE) scheme for its simplicity and one-shot generation ability, which we believe is more quantum-friendly than the auto-regressive generative models or diffusion models used in classic approaches. Specifically, we present a quantum encoding scheme designed for 3-D molecules with qubit complexity $\mathcal{O}(C\log n)$ ($n$ is the number of atoms) and adopt a von Mises-Fisher (vMF) distributed latent space to match the inherent coherence of the quantum system. We further design conditional encoding into the quantum circuits for property-specified generation. Experimentally, our model generates plausible 3-D molecules and achieves competitive quantitative performance with significantly reduced circuit parameters compared with classic counterparts. The source code will be released upon publication. | QVAE-Mole: The Quantum VAE with Spherical Latent Variable Learning for 3-D Molecule Generation | [
"Huaijin Wu",
"Xinyu Ye",
"Junchi Yan"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RnxJc4vTVi | @inproceedings{
chen2024scar,
title={{SC}aR: Refining Skill Chaining for Long-Horizon Robotic Manipulation via Dual Regularization},
author={Zixuan Chen and Ze Ji and Jing Huo and Yang Gao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RnxJc4vTVi}
} | Long-horizon robotic manipulation tasks typically involve a series of interrelated sub-tasks spanning multiple execution stages. Skill chaining offers a feasible solution for these tasks by pre-training the skills for each sub-task and linking them sequentially. However, imperfections in skill learning or disturbances during execution can lead to the accumulation of errors in skill chaining process, resulting in execution failures. In this paper, we investigate how to achieve stable and smooth skill chaining for long-horizon robotic manipulation tasks. Specifically, we propose a novel skill chaining framework called Skill Chaining via Dual Regularization (SCaR). This framework applies dual regularization to sub-task skill pre-training and fine-tuning, which not only enhances the intra-skill dependencies within each sub-task skill but also reinforces the inter-skill dependencies between sequential sub-task skills, thus ensuring smooth skill chaining and stable long-horizon execution. We evaluate the SCaR framework on two representative long-horizon robotic manipulation simulation benchmarks: IKEA furniture assembly and kitchen organization. Additionally, we conduct a simple real-world validation in tabletop robot pick-and-place tasks. The experimental results show that, with the support of SCaR, the robot achieves a higher success rate in long-horizon tasks compared to relevant baselines and demonstrates greater robustness to perturbations. | SCaR: Refining Skill Chaining for Long-Horizon Robotic Manipulation via Dual Regularization | [
"Zixuan Chen",
"Ze Ji",
"Jing Huo",
"Yang Gao"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RnvgYd9RAh | @inproceedings{
stengel-eskin2024lacie,
title={{LACIE}: Listener-Aware Finetuning for Calibration in Large Language Models},
author={Elias Stengel-Eskin and Peter Hase and Mohit Bansal},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RnvgYd9RAh}
} | When answering questions, large language models (LLMs) can convey not only an answer to the question, but a level of confidence about the answer being correct. This includes explicit markers of confidence (e.g. giving a numeric confidence score) as well as implicit markers, like using an authoritative tone or elaborating with additional knowledge of a subject. For LLMs to be trustworthy sources of knowledge, the confidence they convey should match their actual expertise on a topic; however, this is currently not the case, with most models tending towards overconfidence. To calibrate both implicit and explicit confidence markers, we introduce a pragmatic, listener-aware finetuning method (LACIE) that directly models the listener, considering not only whether an answer is right, but whether it will be accepted by a listener. Specifically, we cast calibration as a preference optimization problem, creating data via a two-agent speaker-listener game, where a speaker model’s outputs are judged by a simulated listener. We then finetune three different LLMs (Mistral-7B, Llama3-8B, Llama3-70B) with LACIE, and show that the models resulting from this multi-agent optimization are better calibrated on TriviaQA with respect to a simulated listener. Crucially, these trends transfer to human listeners, helping them correctly predict model correctness: we conduct a human evaluation where annotators accept or reject an LLM’s answers to trivia questions, finding that training with LACIE results in 47% fewer incorrect answers being accepted while maintaining the same level of acceptance for correct answers. Furthermore, LACIE generalizes to another dataset, resulting in a large increase in truthfulness on TruthfulQA when trained on TriviaQA. Our analysis indicates that LACIE leads to a better separation in confidence between correct and incorrect examples. Qualitatively, we find that a LACIE-trained model hedges more when uncertain and adopts implicit cues to signal certainty when it is correct, such as using an authoritative tone or including details. Finally, finetuning with our listener-aware method leads to an emergent increase in model abstention (e.g. saying “I don’t know”) for answers that are likely to be wrong, trading recall for precision. | LACIE: Listener-Aware Finetuning for Calibration in Large Language Models | [
"Elias Stengel-Eskin",
"Peter Hase",
"Mohit Bansal"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RnQdRY1h5v | @inproceedings{
zancato2024bmojo,
title={B'{MOJO}: Hybrid State Space Realizations of Foundation Models with Eidetic and Fading Memory},
author={Luca Zancato and Arjun Seshadri and Yonatan Dukler and Aditya Golatkar and Yantao Shen and Benjamin Bowman and Matthew Trager and Alessandro Achille and Stefano Soatto},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RnQdRY1h5v}
} | We describe a family of architectures to support transductive inference by allowing memory to grow to a finite but a priori unknown bound while making efficient use of finite resources for inference. Current architectures use such resources to represent data either eidetically over a finite span ('context' in Transformers), or fading over an infinite span (in State Space Models, or SSMs). Recent hybrid architectures have combined eidetic and fading memory, but with limitations that do not allow the designer or the learning process to seamlessly modulate the two, nor to extend the eidetic memory span. We leverage ideas from Stochastic Realization Theory to develop a class of models called B'MOJO to seamlessly combine eidetic and fading memory within an elementary composable module. The overall architecture can be used to implement models that can access short-term eidetic memory 'in-context,' permanent structural memory 'in-weights,' fading memory 'in-state,' and long-term eidetic memory 'in-storage' by natively incorporating retrieval from an asynchronously updated memory. We show that Transformers, existing SSMs such as Mamba, and hybrid architectures such as Jamba are special cases of B'MOJO and describe a basic implementation, to be open sourced, that can be stacked and scaled efficiently in hardware. We test B'MOJO on transductive inference tasks, such as associative recall, where it outperforms existing SSMs and Hybrid models; as a baseline, we test ordinary language modeling where B'MOJO achieves perplexity comparable to similarly-sized Transformers and SSMs up to 1.4B parameters, while being up to 10% faster to train. Finally, we test whether models trained inductively on a priori bounded sequences (up to 8K tokens) can still perform transductive inference on sequences many-fold longer. B'MOJO's ability to modulate eidetic and fading memory results in better inference on longer sequences tested up to 32K tokens, four-fold the length of the longest sequences seen during training. | B'MOJO: Hybrid State Space Realizations of Foundation Models with Eidetic and Fading Memory | [
"Luca Zancato",
"Arjun Seshadri",
"Yonatan Dukler",
"Aditya Golatkar",
"Yantao Shen",
"Benjamin Bowman",
"Matthew Trager",
"Alessandro Achille",
"Stefano Soatto"
] | NeurIPS.cc/2024/Conference | 2407.06324 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
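The eidetic/fading split described in the B'MOJO abstract can be illustrated with a toy module: a decayed recurrent state stands in for fading memory, while a small buffer keeps a few inputs verbatim. The decay rule, the surprise-based retention criterion, and all sizes below are illustrative assumptions, not the paper's realization:

```python
import numpy as np

class HybridMemory:
    """Toy module combining a fading state with a small eidetic buffer."""

    def __init__(self, dim, buffer_size=4, decay=0.9):
        self.state = np.zeros(dim)  # fading memory: decayed summary of the past
        self.buffer = []            # eidetic memory: a few inputs kept verbatim
        self.buffer_size = buffer_size
        self.decay = decay

    def step(self, x):
        # Retain verbatim the inputs the fading state summarizes worst.
        surprise = float(np.linalg.norm(x - self.state))
        self.state = self.decay * self.state + (1 - self.decay) * x
        self.buffer.append((surprise, x))
        self.buffer.sort(key=lambda t: -t[0])
        del self.buffer[self.buffer_size:]

    def read(self):
        # A downstream layer would attend over the state plus the buffer.
        return self.state, [x for _, x in self.buffer]

rng = np.random.default_rng(0)
mem = HybridMemory(dim=3)
for _ in range(10):
    mem.step(rng.normal(size=3))
state, eidetic = mem.read()
print(state.round(2), len(eidetic))  # fading summary + 4 verbatim tokens
```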
null | https://openreview.net/forum?id=RlZgnEZsOH | @inproceedings{
zeng2024huref,
title={HuRef: {HU}man-{RE}adable Fingerprint for Large Language Models},
author={Boyi Zeng and Lizheng Wang and Yuncong Hu and Yi Xu and Chenghu Zhou and Xinbing Wang and Yu Yu and Zhouhan Lin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RlZgnEZsOH}
} | Protecting the copyright of large language models (LLMs) has become crucial due to their resource-intensive training and accompanying carefully designed licenses. However, identifying the original base model of an LLM is challenging due to potential parameter alterations. In this study, we introduce HuRef, a human-readable fingerprint for LLMs that uniquely identifies the base model without interfering with training or exposing model parameters to the public. We first observe that the vector direction of LLM parameters remains stable after the model has converged during pretraining, with negligible perturbations through subsequent training steps, including continued pretraining, supervised fine-tuning, and RLHF, which makes it a sufficient condition for identifying the base model. The necessity is validated by continuing to train an LLM with an extra term that drives the parameters' direction away, which damages the model. However, this direction is vulnerable to simple attacks like dimension permutation or matrix rotation, which significantly change it without affecting performance. To address this, leveraging the Transformer structure, we systematically analyze potential attacks and define three invariant terms that identify an LLM's base model. Due to the potential risk of information leakage, we cannot publish the invariant terms directly. Instead, we map them to a Gaussian vector using an encoder, then convert it into a natural image using StyleGAN2, and finally publish the image. In our black-box setting, all fingerprinting steps are conducted internally by the LLM owners. To ensure the published fingerprints are honestly generated, we introduce a Zero-Knowledge Proof (ZKP). Experimental results across various LLMs demonstrate the effectiveness of our method. The code is available at https://github.com/LUMIA-Group/HuRef. | HuRef: HUman-REadable Fingerprint for Large Language Models | [
"Boyi Zeng",
"Lizheng Wang",
"Yuncong Hu",
"Yi Xu",
"Chenghu Zhou",
"Xinbing Wang",
"Yu Yu",
"Zhouhan Lin"
] | NeurIPS.cc/2024/Conference | 2312.04828 | [
"https://github.com/lumia-group/huref"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
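The kind of attack-invariant term the HuRef abstract alludes to can be checked numerically. The sketch below is a toy illustration of the principle only (the paper's three invariant terms are not reproduced here): a product of two stand-in projection matrices is unchanged when both are rotated by the same orthogonal matrix, i.e., the rotation attack leaves it fixed:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W_q = rng.normal(size=(d, d))  # stand-in query projection weights
W_k = rng.normal(size=(d, d))  # stand-in key projection weights

# A random orthogonal matrix (the rotation attack) via QR decomposition.
R, _ = np.linalg.qr(rng.normal(size=(d, d)))

invariant_before = W_q @ W_k.T
invariant_after = (W_q @ R) @ (W_k @ R).T  # rotate both projections the same way

print(np.allclose(invariant_before, invariant_after))  # True: the product is unchanged
```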
null | https://openreview.net/forum?id=RkOT8rAmRR | @inproceedings{
le2024optimalstate,
title={Optimal-state Dynamics Estimation for Physics-based Human Motion Capture from Videos},
author={Cuong Le and Manon Kok and Viktor Johansson and Bastian Wandt},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RkOT8rAmRR}
} | Human motion capture from monocular videos has made significant progress in recent years. However, modern approaches often produce temporal artifacts, e.g., in the form of jittery motion, and struggle to achieve smooth and physically plausible motions. Explicitly integrating physics, in the form of internal forces and exterior torques, helps alleviate these artifacts. Current state-of-the-art approaches make use of an automatic PD controller to predict torques and reaction forces in order to re-simulate the input kinematics, i.e., the joint angles of a predefined skeleton. However, due to imperfect physical models, these methods often require simplifying assumptions and extensive preprocessing of the input kinematics to achieve good performance. To this end, we propose a novel method to selectively incorporate the physics models with the kinematics observations in an online setting, inspired by a neural Kalman-filtering approach. We develop a control loop as a meta-PD controller to predict internal joint torques and external reaction forces, followed by a physics-based motion simulation. A recurrent neural network is introduced to realize a Kalman filter that attentively balances the kinematics input and simulated motion, resulting in an optimal-state dynamics prediction. We show that this filtering step is crucial to provide online supervision that helps balance the shortcomings of the respective input motions, and is thus important not only for capturing accurate global motion trajectories but also for producing physically plausible human poses. The proposed approach excels in the physics-based human pose estimation task and demonstrates the physical plausibility of the predictive dynamics, compared to the state of the art. The code is available on https://github.com/cuongle1206/OSDCap. | Optimal-state Dynamics Estimation for Physics-based Human Motion Capture from Videos | [
"Cuong Le",
"Manon Kok",
"Viktor Johansson",
"Bastian Wandt"
] | NeurIPS.cc/2024/Conference | 2410.07795 | [
"https://github.com/cuongle1206/osdcap"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RfsfRn9OFd | @inproceedings{
liu2024eegvideo,
title={{EEG}2Video: Towards Decoding Dynamic Visual Perception from {EEG} Signals},
author={Xuanhao Liu and Yan-Kai Liu and Yansen Wang and Kan Ren and Hanwen Shi and Zilong Wang and Dongsheng Li and Bao-liang Lu and Wei-Long Zheng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RfsfRn9OFd}
} | Our visual experience in daily life is dominated by dynamic change. Decoding such dynamic information from brain activity can enhance the understanding of the brain’s visual processing system. However, previous studies predominantly focus on reconstructing static visual stimuli. In this paper, we explore decoding dynamic visual perception from electroencephalography (EEG), a neuroimaging technique able to record brain activity with high temporal resolution (1000 Hz) to capture rapid changes in the brain. Our contributions are threefold: Firstly, we develop a large dataset recording signals from 20 subjects while they were watching 1400 dynamic video clips of 40 concepts. This dataset fills the gap in the lack of EEG-video pairs. Secondly, we annotate each video clip to investigate the potential for decoding some specific meta information (e.g., color, dynamic, human or not) from EEG. Thirdly, we propose a novel baseline EEG2Video for video reconstruction from EEG signals that better aligns dynamic movements with high temporal resolution brain signals via a Seq2Seq architecture. EEG2Video achieves a 2-way accuracy of 79.8% in semantic classification tasks and 0.256 in structural similarity index (SSIM). Overall, our work takes an important step towards decoding dynamic visual perception from EEG signals. Our dataset and code will be released soon. | EEG2Video: Towards Decoding Dynamic Visual Perception from EEG Signals | [
"Xuanhao Liu",
"Yan-Kai Liu",
"Yansen Wang",
"Kan Ren",
"Hanwen Shi",
"Zilong Wang",
"Dongsheng Li",
"Bao-liang Lu",
"Wei-Long Zheng"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RfSvAom7sS | @inproceedings{
zhou2024sample,
title={Sample Efficient Bayesian Learning of Causal Graphs from Interventions},
author={Zihan Zhou and Muhammad Qasim Elahi and Murat Kocaoglu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RfSvAom7sS}
} | Causal discovery is a fundamental problem with applications spanning various areas in science and engineering. It is well understood that, using observational data alone, one can only orient the causal graph up to its Markov equivalence class, necessitating interventional data to learn the complete causal graph. Most works in the literature design causal discovery policies with perfect interventions, i.e., they have access to infinite interventional samples. This study considers a Bayesian approach for learning causal graphs with limited interventional samples, mirroring real-world scenarios where such samples are usually costly to obtain. By leveraging the recent result of Wienöbst et al. [2023] on uniform DAG sampling in polynomial time, we can efficiently enumerate all the cut configurations and their corresponding interventional distributions of a target set, and further track their posteriors. Given any number of interventional samples, our proposed algorithm randomly intervenes on a set of target vertices that cut all the edges in the graph and returns a causal graph according to the posterior of each target set. When the number of interventional samples is large enough, we show theoretically that our proposed algorithm will return the true causal graph with high probability. We compare our algorithm against various baseline methods on simulated datasets, demonstrating its superior accuracy measured by the structural Hamming distance between the learned DAG and the ground truth. Additionally, we present a case study showing how this algorithm could be modified to answer more general causal questions without learning the whole graph. As an example, we illustrate that our method can be used to estimate the causal effect of a variable that cannot be intervened on. | Sample Efficient Bayesian Learning of Causal Graphs from Interventions | [
"Zihan Zhou",
"Muhammad Qasim Elahi",
"Murat Kocaoglu"
] | NeurIPS.cc/2024/Conference | 2410.20089 | [
"https://github.com/CausalML-Lab/Bayesian_SampleEfficient_Discovery"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
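The posterior tracking described in the abstract above reduces, in the simplest case, to Bayesian updating over candidate graphs given interventional samples. The toy below, with two hypothetical graphs (X→Y vs. Y→X) and hand-picked interventional distributions, is a sketch of that update, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothesized P(Y = 1) under the intervention do(X = 1) for each candidate:
p_y = {"X->Y": 0.9,   # X causes Y, so do(X=1) shifts Y
       "Y->X": 0.5}   # Y causes X, so do(X=1) leaves Y at its marginal

samples = rng.random(20) < 0.9  # draws of Y from the true graph (X->Y)

log_post = {g: 0.0 for g in p_y}  # uniform prior over the two graphs
for y in samples:
    for g, p in p_y.items():
        log_post[g] += np.log(p if y else 1.0 - p)

z = np.logaddexp(*log_post.values())  # normalize in log space
print({g: float(np.exp(lp - z)) for g, lp in log_post.items()})  # posterior
```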
null | https://openreview.net/forum?id=RcPHbofiCN | @inproceedings{
lin2024mixture,
title={Mixture of In-Context Experts Enhance {LLM}s' Long Context Awareness},
author={Hongzhan Lin and Ang Lv and Yuhan Chen and Chen Zhu and Yang Song and Hengshu Zhu and Rui Yan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RcPHbofiCN}
} | Many studies have revealed that large language models (LLMs) exhibit uneven awareness of different contextual positions. Their limited context awareness can lead to overlooking critical information and subsequent task failures. While several approaches have been proposed to enhance LLMs' context awareness, achieving both effectiveness and efficiency remains challenging. In this paper, for LLMs utilizing RoPE as position embeddings, we introduce a novel method called "Mixture of In-Context Experts" (MoICE) to address this challenge. MoICE comprises two key components: a router integrated into each attention head within LLMs and a lightweight router-only training optimization strategy: (1) MoICE views each RoPE angle as an 'in-context' expert, demonstrated to be capable of directing the attention of a head to specific contextual positions. Consequently, each attention head flexibly processes tokens using multiple RoPE angles dynamically selected by the router to attend to the needed positions. This approach mitigates the risk of overlooking essential contextual information. (2) The router-only training strategy entails freezing LLM parameters and exclusively updating routers for only a few steps. When applied to open-source LLMs including Llama and Mistral, MoICE surpasses prior methods across multiple tasks on long context understanding and generation, all while maintaining commendable inference efficiency. | Mixture of In-Context Experts Enhance LLMs' Long Context Awareness | [
"Hongzhan Lin",
"Ang Lv",
"Yuhan Chen",
"Chen Zhu",
"Yang Song",
"Hengshu Zhu",
"Rui Yan"
] | NeurIPS.cc/2024/Conference | 2406.19598 | [
"https://github.com/p1nksnow/moice"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
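A minimal sketch of the MoICE idea described above, under stated assumptions: a single attention head computes outputs under several RoPE base angles ("experts") and a linear router mixes them per query token. Shapes, base values, and the router parameterization are illustrative stand-ins; only the router carries trainable parameters, mirroring the router-only strategy:

```python
import torch

def rope(x, base):
    # x: (seq, dim) with even dim; standard rotary position embedding.
    seq, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)  # (half,)
    angles = torch.arange(seq, dtype=torch.float32)[:, None] * freqs   # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

def moice_head(q, k, v, bases, router):
    weights = torch.softmax(q @ router, dim=-1)   # (seq, n_experts) per-token mix
    outs = []
    for base in bases:                            # one "expert" per RoPE base
        qb, kb = rope(q, base), rope(k, base)
        attn = torch.softmax(qb @ kb.T / q.shape[-1] ** 0.5, dim=-1)
        outs.append(attn @ v)
    outs = torch.stack(outs, dim=-1)              # (seq, dim, n_experts)
    return (outs * weights[:, None, :]).sum(-1)   # router-weighted mixture

seq, dim, bases = 6, 8, [1e2, 1e4, 1e6]
q, k, v = (torch.randn(seq, dim) for _ in range(3))
router = torch.randn(dim, len(bases), requires_grad=True)  # the only trained part
print(moice_head(q, k, v, bases, router).shape)  # torch.Size([6, 8])
```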
null | https://openreview.net/forum?id=RcPAJAnpnm | @inproceedings{
lee2024incremental,
title={Incremental Learning of Retrievable Skills For Efficient Continual Task Adaptation},
author={Daehee Lee and Minjong Yoo and Woo Kyung Kim and Wonje Choi and Honguk Woo},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RcPAJAnpnm}
} | Continual Imitation Learning (CiL) involves extracting and accumulating task knowledge from demonstrations across multiple stages and tasks to achieve a multi-task policy. With recent advancements in foundation models, there has been a growing interest in adapter-based CiL approaches, where adapters are established parameter-efficiently for newly demonstrated tasks. While these approaches isolate parameters for specific tasks and tend to mitigate catastrophic forgetting, they limit knowledge sharing among different demonstrations. We introduce IsCiL, an adapter-based CiL framework that addresses this limitation of knowledge sharing by incrementally learning shareable skills from different demonstrations, thus enabling sample-efficient task adaptation using these skills, particularly in non-stationary CiL environments. In IsCiL, demonstrations are mapped into the state embedding space, where proper skills can be retrieved upon input states through prototype-based memory. These retrievable skills are incrementally learned on their corresponding adapters. Our CiL experiments with complex tasks in the Franka-Kitchen and Meta-World demonstrate the robust performance of IsCiL in both task adaptation and sample efficiency. We also show a simple extension of IsCiL for task unlearning scenarios. | Incremental Learning of Retrievable Skills For Efficient Continual Task Adaptation | [
"Daehee Lee",
"Minjong Yoo",
"Woo Kyung Kim",
"Wonje Choi",
"Honguk Woo"
] | NeurIPS.cc/2024/Conference | 2410.22658 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RbU10yvkk6 | @inproceedings{
zhu2024scaling,
title={Scaling the Codebook Size of {VQ}-{GAN} to 100,000 with a Utilization Rate of 99\%},
author={Lei Zhu and Fangyun Wei and Yanye Lu and Dong Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RbU10yvkk6}
} | In the realm of image quantization exemplified by VQGAN, the process encodes images into discrete tokens drawn from a codebook with a predefined size. Recent advancements, particularly with LLAMA 3, reveal that enlarging the codebook significantly enhances model performance. However, VQGAN and its derivatives, such as VQGAN-FC (Factorized Codes) and VQGAN-EMA, continue to grapple with challenges related to expanding the codebook size and enhancing codebook utilization. For instance, VQGAN-FC is restricted to learning a codebook with a maximum size of 16,384, maintaining a typically low utilization rate of less than 12% on ImageNet. In this work, we propose a novel image quantization model named VQGAN-LC (Large Codebook), which extends the codebook size to 100,000, achieving a utilization rate exceeding 99%. Unlike previous methods that optimize each codebook entry, our approach begins with a codebook initialized with 100,000 features extracted by a pre-trained vision encoder. Optimization then focuses on training a projector that aligns the entire codebook with the feature distributions of the encoder in VQGAN-LC. We demonstrate the superior performance of our model over its counterparts across a variety of tasks, including image reconstruction, image classification, auto-regressive image generation using GPT, and image creation with diffusion- and flow-based generative models. | Scaling the Codebook Size of VQ-GAN to 100,000 with a Utilization Rate of 99% | [
"Lei Zhu",
"Fangyun Wei",
"Yanye Lu",
"Dong Chen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RbS7RWxw3r | @inproceedings{
ma2024iteratively,
title={Iteratively Refined Behavior Regularization for Offline Reinforcement Learning},
author={Yi Ma and Jianye HAO and Xiaohan Hu and YAN ZHENG and Chenjun Xiao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RbS7RWxw3r}
} | One of the fundamental challenges for offline reinforcement learning (RL) is ensuring robustness to the data distribution. Whether the data originates from a near-optimal policy or not, we anticipate that an algorithm should demonstrate its ability to learn an effective control policy that seamlessly aligns with the inherent distribution of offline data. Unfortunately, behavior regularization, a simple yet effective offline RL algorithm, tends to struggle in this regard. In this paper, we propose a new algorithm that substantially enhances behavior regularization based on conservative policy iteration. Our key observation is that by iteratively refining the reference policy used for behavior regularization, conservative policy updates guarantee gradual improvement, while also implicitly avoiding querying out-of-sample actions to prevent catastrophic learning failures. We prove that in the tabular setting this algorithm is capable of learning the optimal policy covered by the offline dataset, commonly referred to as the in-sample optimal policy. We then explore several implementation details of the algorithm when function approximations are applied. The resulting algorithm is easy to implement, requiring only a few lines of code modification to existing methods. Experimental results on the D4RL benchmark indicate that our method outperforms previous state-of-the-art baselines in most tasks, clearly demonstrating its superiority over plain behavior regularization. | Iteratively Refined Behavior Regularization for Offline Reinforcement Learning | [
"Yi Ma",
"Jianye HAO",
"Xiaohan Hu",
"YAN ZHENG",
"Chenjun Xiao"
] | NeurIPS.cc/2024/Conference | 2306.05726 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
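In the tabular setting, the refinement loop described above has a compact form: each conservative update solves argmax_π E_π[Q] − α·KL(π‖π_ref), whose closed-form solution is π_ref·exp(Q/α) up to normalization, and the reference policy is then replaced by the new policy. The Q-values below are fixed stand-ins for illustration:

```python
import numpy as np

Q = np.array([1.0, 0.5, -2.0])      # stand-in in-sample action values
pi_ref = np.array([0.6, 0.3, 0.1])  # start from the behavior policy
alpha = 1.0                          # regularization strength

for step in range(5):
    pi = pi_ref * np.exp(Q / alpha)  # closed-form KL-regularized update
    pi /= pi.sum()
    pi_ref = pi                      # refine the reference policy
    print(step, np.round(pi, 3))     # mass shifts toward the best covered action
```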
null | https://openreview.net/forum?id=RaNct2xkyI | @inproceedings{
yang2024featurelevel,
title={Feature-Level Adversarial Attacks and Ranking Disruption for Visible-Infrared Person Re-identification},
author={Xi Yang and Huanling Liu and De Cheng and Nannan Wang and Xinbo Gao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RaNct2xkyI}
} | Visible-infrared person re-identification (VIReID) is widely used in fields such as video surveillance and intelligent transportation, imposing higher demands on model security. In practice, adversarial attacks on VIReID aim to disrupt output ranking and quantify the security risks of models. Although numerous studies have emerged on adversarial attacks and defenses in fields such as face recognition, person re-identification, and pedestrian detection, there is currently a lack of research on the security of VIReID systems. To this end, we propose to explore the vulnerabilities of VIReID systems and prevent potential serious losses due to insecurity. Compared to research on single-modality ReID, adversarial feature alignment and modality differences need to be particularly emphasized. Thus, we advocate for feature-level adversarial attacks to disrupt the output rankings of VIReID systems. To obtain adversarial features, we introduce *Universal Adversarial Perturbations* (UAP) to simulate common disturbances in real-world environments. Additionally, we employ a *Frequency-Spatial Attention Module* (FSAM), integrating frequency information extraction and spatial focusing mechanisms, and further emphasize important regional features from different domains on the shared features. This ensures that adversarial features maintain consistency within the feature space. Finally, we employ an *Auxiliary Quadruple Adversarial Loss* to amplify the differences between modalities, thereby improving the distinction and recognition of features between visible and infrared images, which causes the system to output incorrect rankings. Extensive experiments on two VIReID benchmarks (i.e., SYSU-MM01, RegDB) and different systems validate the effectiveness of our method. | Feature-Level Adversarial Attacks and Ranking Disruption for Visible-Infrared Person Re-identification | [
"Xi Yang",
"Huanling Liu",
"De Cheng",
"Nannan Wang",
"Xinbo Gao"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RZZo23pQFL | @inproceedings{
ma2024ssaseg,
title={{SSA}-Seg: Semantic and Spatial Adaptive Pixel-level Classifier for Semantic Segmentation},
author={Xiaowen Ma and Zhen-Liang Ni and Xinghao Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RZZo23pQFL}
} | Vanilla pixel-level classifiers for semantic segmentation follow a common paradigm: the inner product of fixed prototypes obtained from the training set and pixel features in the test image. This approach, however, encounters significant limitations, i.e., feature deviation in the semantic domain and information loss in the spatial domain. The former struggles with large intra-class variance among pixel features from different images, while the latter fails to utilize the structured information of semantic objects effectively. This leads to blurred mask boundaries as well as a deficiency of fine-grained recognition capability. In this paper, we propose a novel Semantic and Spatial Adaptive Classifier (SSA-Seg) to address the above challenges. Specifically, we employ the coarse masks obtained from the fixed prototypes as a guide to adjust the fixed prototypes towards the center of the semantic and spatial domains in the test image. The adapted prototypes in semantic and spatial domains are then simultaneously considered to accomplish classification decisions. In addition, we propose an online multi-domain distillation learning strategy to improve the adaptation process. Experimental results on three publicly available benchmarks show that the proposed SSA-Seg significantly improves the segmentation performance of the baseline models with only a minimal increase in computational cost. | SSA-Seg: Semantic and Spatial Adaptive Pixel-level Classifier for Semantic Segmentation | [
"Xiaowen Ma",
"Zhen-Liang Ni",
"Xinghao Chen"
] | NeurIPS.cc/2024/Conference | 2405.06525 | [
"https://github.com/xwmaxwma/ssa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
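A compact sketch of the prototype-adaptation step described in the SSA-Seg abstract, under stated assumptions: the coarse soft masks from fixed prototypes are used to pool test-image features per class, and each prototype is nudged toward that pooled center before re-scoring. The blend factor `tau` and all shapes are illustrative stand-ins for the learned adaptation:

```python
import torch

def adapt_prototypes(feats, protos, tau=0.5):
    # feats: (N, D) pixel features of one test image; protos: (C, D) fixed prototypes.
    coarse = torch.softmax(feats @ protos.T, dim=1)               # (N, C) soft coarse mask
    centers = coarse.T @ feats / (coarse.sum(0)[:, None] + 1e-8)  # per-class image centers
    adapted = (1 - tau) * protos + tau * centers                  # move toward the image
    return feats @ adapted.T                                      # refined per-pixel scores

feats = torch.randn(100, 16)   # stand-in features for 100 pixels
protos = torch.randn(5, 16)    # stand-in prototypes for 5 classes
print(adapt_prototypes(feats, protos).shape)  # torch.Size([100, 5])
```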
null | https://openreview.net/forum?id=RYQ0KuZvkL | @inproceedings{
narang2024sample,
title={Sample Complexity Reduction via Policy Difference Estimation in Tabular Reinforcement Learning},
author={Adhyyan Narang and Andrew Wagenmaker and Lillian J. Ratliff and Kevin Jamieson},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RYQ0KuZvkL}
} | In this paper, we study the non-asymptotic sample complexity for the pure exploration problem in contextual bandits and tabular reinforcement learning (RL): identifying an $\epsilon$-optimal policy from a set of policies $\Pi$ with high probability. Existing work in bandits has shown that it is possible to identify the best policy by estimating only the *difference* between the behaviors of individual policies–which can have substantially lower variance than estimating the behavior of each policy directly—yet the best-known complexities in RL fail to take advantage of this, and instead estimate the behavior of each policy directly. Does it suffice to estimate only the differences in the behaviors of policies in RL? We answer this question positively for contextual bandits, but in the negative for tabular RL, showing a separation between contextual bandits and RL. However, inspired by this, we show that it *almost* suffices to estimate only the differences in RL: if we can estimate the behavior of a *single* reference policy, it suffices to only estimate how any other policy deviates from this reference policy. We develop an algorithm which instantiates this principle and obtains, to the best of our knowledge, the tightest known bound on the sample complexity of tabular RL. | Sample Complexity Reduction via Policy Difference Estimation in Tabular Reinforcement Learning | [
"Adhyyan Narang",
"Andrew Wagenmaker",
"Lillian J. Ratliff",
"Kevin Jamieson"
] | NeurIPS.cc/2024/Conference | 2406.06856 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=RY3rDQV0tQ | @inproceedings{
oguz2024optical,
title={Optical Diffusion Models for Image Generation},
author={Ilker Oguz and Niyazi Ulas Dinc and Mustafa Yildirim and Junjie Ke and Innfarn Yoo and QIFEI WANG and Feng Yang and Christophe Moser and Demetri Psaltis},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RY3rDQV0tQ}
} | Diffusion models generate new samples by progressively decreasing the noise from the initially provided random distribution. This inference procedure generally utilizes a trained neural network numerous times to obtain the final output, creating significant latency and energy consumption on digital electronic hardware such as GPUs. In this study, we demonstrate that the propagation of a light beam through a transparent medium can be programmed to implement a denoising diffusion model on image samples. This framework projects noisy image patterns through passive diffractive optical layers, which collectively transmit only the predicted noise term in the image. The transparent optical layers, trained with an online approach that backpropagates the error to an analytical model of the system, are passive and remain the same across the different denoising steps. Hence, this method enables high-speed image generation with minimal power consumption, benefiting from the bandwidth and energy efficiency of optical information processing. | Optical Diffusion Models for Image Generation | [
"Ilker Oguz",
"Niyazi Ulas Dinc",
"Mustafa Yildirim",
"Junjie Ke",
"Innfarn Yoo",
"QIFEI WANG",
"Feng Yang",
"Christophe Moser",
"Demetri Psaltis"
] | NeurIPS.cc/2024/Conference | 2407.10897 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RXLO4Zv3wB | @inproceedings{
wu2024ddr,
title={{DDR}: Exploiting Deep Degradation Response as Flexible Image Descriptor},
author={Juncheng Wu and Zhangkai Ni and Hanli Wang and Wenhan Yang and Yuyin Zhou and Shiqi Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RXLO4Zv3wB}
} | Image deep features extracted by pre-trained networks are known to contain rich and informative representations. In this paper, we present Deep Degradation Response (DDR), a method to quantify changes in image deep features under varying degradation conditions. Specifically, our approach facilitates flexible and adaptive degradation, enabling the controlled synthesis of image degradation through text-driven prompts. Extensive evaluations demonstrate the versatility of DDR as an image descriptor, with strong correlations observed with key image attributes such as complexity, colorfulness, sharpness, and overall quality. Moreover, we demonstrate the efficacy of DDR across a spectrum of applications. It excels as a blind image quality assessment metric, outperforming existing methodologies across multiple datasets. Additionally, DDR serves as an effective unsupervised learning objective in image restoration tasks, yielding notable advancements in image deblurring and single-image super-resolution. Our code is available at: https://github.com/eezkni/DDR. | DDR: Exploiting Deep Degradation Response as Flexible Image Descriptor | [
"Juncheng Wu",
"Zhangkai Ni",
"Hanli Wang",
"Wenhan Yang",
"Yuyin Zhou",
"Shiqi Wang"
] | NeurIPS.cc/2024/Conference | 2406.08377 | [
"https://github.com/eezkni/ddr"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RVZfra6sZo | @inproceedings{
dai2024ddn,
title={{DDN}: Dual-domain Dynamic Normalization for Non-stationary Time Series Forecasting},
author={Tao Dai and Beiliang Wu and Peiyuan Liu and Naiqi Li and Xue Yuerong and Shu-Tao Xia and Zexuan Zhu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RVZfra6sZo}
} | Deep neural networks (DNNs) have recently achieved remarkable advancements in time series forecasting (TSF) due to their powerful ability of sequence dependence modeling. To date, existing DNN-based TSF methods still suffer from unreliable predictions on real-world data due to its non-stationary characteristics, i.e., the data distribution varies quickly over time. To mitigate this issue, several normalization methods (e.g., SAN) have recently been specifically designed to normalize within a fixed period/window in the time domain. However, these methods still struggle to capture distribution variations, due to the complex time patterns of time series in the time domain. Based on the fact that the wavelet transform can decompose a time series into a linear combination of different frequencies, which exhibit distribution variations with time-varying periods, we propose a novel Dual-domain Dynamic Normalization (DDN) to dynamically capture distribution variations in both the time and frequency domains. Specifically, our DDN tries to eliminate the non-stationarity of time series via both frequency and time domain normalization in a sliding-window manner. Besides, our DDN can serve as a plug-and-play module, and thus can be easily incorporated into other forecasting models. Extensive experiments on public benchmark datasets under different forecasting models demonstrate the superiority of our DDN over other normalization methods. Code will be made available following the review process. | DDN: Dual-domain Dynamic Normalization for Non-stationary Time Series Forecasting | [
"Tao Dai",
"Beiliang Wu",
"Peiyuan Liu",
"Naiqi Li",
"Xue Yuerong",
"Shu-Tao Xia",
"Zexuan Zhu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
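A rough sketch of the dual-domain idea in the DDN abstract, under stated assumptions: the series is normalized per sliding window in the time domain, and each window's spectrum is additionally whitened per frequency bin. The window size, FFT-based whitening, and statistics handling are illustrative stand-ins, not the paper's exact DDN:

```python
import numpy as np

def dual_domain_normalize(x, win=16):
    out = np.empty_like(x, dtype=float)
    stats = []
    for s in range(0, len(x), win):
        w = x[s:s + win].astype(float)
        mu, sd = w.mean(), w.std() + 1e-8           # time-domain statistics
        w = (w - mu) / sd
        spec = np.fft.rfft(w)
        mag = np.abs(spec) + 1e-8                   # frequency-domain statistics
        w = np.fft.irfft(spec / mag, n=len(w))      # whiten per frequency bin
        out[s:s + win] = w
        stats.append((mu, sd, mag))                 # kept for de-normalization
    return out, stats

x = np.sin(np.linspace(0, 20, 64)) + np.linspace(0, 3, 64)  # non-stationary toy series
y, stats = dual_domain_normalize(x)
print(y.shape, float(y.mean()), float(y.std()))
```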
null | https://openreview.net/forum?id=RSs4o7CSqe | @inproceedings{
cao2024conditional,
title={Conditional Controllable Image Fusion},
author={Bing Cao and Xingxin Xu and Pengfei Zhu and Qilong Wang and Qinghua Hu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RSs4o7CSqe}
} | Image fusion aims to integrate complementary information from multiple input images acquired through various sources to synthesize a new fused image. Existing methods usually employ distinct constraint designs tailored to specific scenes, forming fixed fusion paradigms. However, this data-driven fusion approach is challenging to deploy in varying scenarios, especially in rapidly changing environments. To address this issue, we propose a conditional controllable fusion (CCF) framework for general image fusion tasks without specific training. Because of the dynamic differences across samples, our CCF employs specific fusion constraints for each individual sample in practice. Given the powerful generative capabilities of the denoising diffusion model, we first inject the specific constraints into the pre-trained DDPM as adaptive fusion conditions. The appropriate conditions are dynamically selected to ensure the fusion process remains responsive to the specific requirements in each reverse diffusion stage. Thus, CCF enables conditionally calibrating the fused images step by step. Extensive experiments validate our effectiveness in general fusion tasks across diverse scenarios against the competing methods without additional training. The code is publicly available. | Conditional Controllable Image Fusion | [
"Bing Cao",
"Xingxin Xu",
"Pengfei Zhu",
"Qilong Wang",
"Qinghua Hu"
] | NeurIPS.cc/2024/Conference | 2411.01573 | [
"https://github.com/jehovahxu/CCF"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RSiGFzQapl | @inproceedings{
han2024bridging,
title={Bridging the Divide: Reconsidering Softmax and Linear Attention},
author={Dongchen Han and Yifan Pu and Zhuofan Xia and Yizeng Han and Xuran Pan and Xiu Li and Jiwen Lu and Shiji Song and Gao Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RSiGFzQapl}
} | Widely adopted in modern Vision Transformer designs, Softmax attention can effectively capture long-range visual information; however, it incurs excessive computational cost when dealing with high-resolution inputs. In contrast, linear attention naturally enjoys linear complexity and has great potential to scale up to higher-resolution images. Nonetheless, the unsatisfactory performance of linear attention greatly limits its practical application in various scenarios. In this paper, we take a step forward to close the gap between linear and Softmax attention with novel theoretical analyses, which demystify the core factors behind the performance deviations. Specifically, we present two key perspectives to understand and alleviate the limitations of linear attention: the injective property and the local modeling ability. Firstly, we prove that linear attention is not injective, making it prone to assigning identical attention weights to different query vectors and thus causing severe semantic confusion, since different queries correspond to the same outputs. Secondly, we confirm that effective local modeling is essential for the success of Softmax attention, in which linear attention falls short. These two fundamental differences significantly contribute to the disparities between the two attention paradigms, which is demonstrated by our substantial empirical validation in the paper. In addition, more experimental results indicate that linear attention, as long as it is endowed with these two properties, can outperform Softmax attention across various tasks while maintaining lower computational complexity. Code is available at https://github.com/LeapLabTHU/InLine. | Bridging the Divide: Reconsidering Softmax and Linear Attention | [
"Dongchen Han",
"Yifan Pu",
"Zhuofan Xia",
"Yizeng Han",
"Xuran Pan",
"Xiu Li",
"Jiwen Lu",
"Shiji Song",
"Gao Huang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
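The non-injectivity claim in the abstract above admits a small numerical illustration: with a positively homogeneous feature map (here ReLU, an illustrative choice rather than the paper's exact setup), linear attention assigns *identical* weights to a query and its scaled copy, while Softmax attention does not:

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(size=(4, 8))  # four stand-in key vectors
q = rng.normal(size=8)       # one stand-in query

def linear_attn_weights(q, K):
    phi = lambda x: np.maximum(x, 0.0)  # ReLU feature map (positively homogeneous)
    scores = phi(K) @ phi(q)
    return scores / scores.sum()

def softmax_attn_weights(q, K):
    scores = K @ q
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Scaling the query changes nothing for linear attention (non-injective)...
print(np.allclose(linear_attn_weights(q, K), linear_attn_weights(2 * q, K)))    # True
# ...but sharpens the Softmax distribution (different queries, different weights).
print(np.allclose(softmax_attn_weights(q, K), softmax_attn_weights(2 * q, K)))  # False
```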
null | https://openreview.net/forum?id=RRRyQMn6dv | @inproceedings{
yao2024cosw,
title={Co{SW}: Conditional Sample Weighting for Smoke Segmentation with Label Noise},
author={Lujian Yao and Haitao Zhao and Zhongze Wang and Kaijie Zhao and Jingchao Peng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RRRyQMn6dv}
} | Smoke segmentation is of great importance in precisely identifying the smoke location, enabling timely fire rescue and gas leak detection. However, due to the visual diversity and blurry edges of the non-grid smoke, noisy labels are almost inevitable in large-scale pixel-level smoke datasets. Noisy labels significantly impact the robustness of the model and may lead to serious accidents. Nevertheless, currently, there are no specific methods for addressing noisy labels in smoke segmentation. Smoke differs from regular objects as its transparency varies, causing inconsistent features in the noisy labels. In this paper, we propose a conditional sample weighting (CoSW). CoSW utilizes a multi-prototype framework, where prototypes serve as prior information to apply different weighting criteria to the different feature clusters. A novel regularized within-prototype entropy (RWE) is introduced to achieve CoSW and stable prototype update. The experiments show that our approach achieves SOTA performance on both real-world and synthetic noisy smoke segmentation datasets. | CoSW: Conditional Sample Weighting for Smoke Segmentation with Label Noise | [
"Lujian Yao",
"Haitao Zhao",
"Zhongze Wang",
"Kaijie Zhao",
"Jingchao Peng"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RQQGbBqvbL | @inproceedings{
han2024collaborative,
title={Collaborative Refining for Learning from Inaccurate Labels},
author={BIN HAN and Yi-Xuan Sun and Ya-Lin Zhang and Libang Zhang and Haoran Hu and Longfei Li and JUN ZHOU and Guo Ye and HUIMEI HE},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RQQGbBqvbL}
} | This paper considers the problem of learning from multiple sets of inaccurate labels, which can be easily obtained from low-cost annotators, such as rule-based annotators. Previous works typically concentrate on aggregating information from all the annotators, overlooking the significance of data refinement. This paper presents a collaborative refining approach for learning from inaccurate labels. To refine the data, we introduce the annotator agreement as an instrument, which refers to whether multiple annotators agree or disagree on the labels for a given sample. For samples where some annotators disagree, a comparative strategy is proposed to filter noise. Through theoretical analysis, the connections among multiple sets of labels, the respective models trained on them, and the true labels are uncovered to identify relatively reliable labels. For samples where all annotators agree, an aggregating strategy is designed to mitigate potential noise. Guided by theoretical bounds on loss values, a sample selection criterion is introduced and modified to be more robust against potentially problematic values. Through these two methods, all the samples are refined during training, and these refined samples are used to train a lightweight model simultaneously. Extensive experiments are conducted on benchmark and real-world datasets to demonstrate the superiority of our methods. | Collaborative Refining for Learning from Inaccurate Labels | [
"BIN HAN",
"Yi-Xuan Sun",
"Ya-Lin Zhang",
"Libang Zhang",
"Haoran Hu",
"Longfei Li",
"JUN ZHOU",
"Guo Ye",
"HUIMEI HE"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RQCmMSSzvI | @inproceedings{
hoppe2024nonasymptotic,
title={Non-Asymptotic Uncertainty Quantification in High-Dimensional Learning},
author={Frederik Hoppe and Claudio Mayrink Verdun and Hannah Laus and Felix Krahmer and Holger Rauhut},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RQCmMSSzvI}
} | Uncertainty quantification (UQ) is a crucial but challenging task in many high-dimensional learning problems, used to increase the confidence in a given predictor. We develop a new data-driven approach for UQ in regression that applies both to classical optimization approaches such as the LASSO and to neural networks. One of the most notable UQ techniques is the debiased LASSO, which modifies the LASSO to allow for the construction of asymptotic confidence intervals by decomposing the estimation error into a Gaussian and an asymptotically vanishing bias component. However, in real-world problems with finite-dimensional data, the bias term is often too significant to disregard, resulting in overly narrow confidence intervals. Our work rigorously addresses this issue and derives a data-driven adjustment that corrects the confidence intervals for a large class of predictors by estimating the means and variances of the bias terms from training data, exploiting high-dimensional concentration phenomena. This gives rise to non-asymptotic confidence intervals, which can help avoid overestimating certainty in critical applications such as MRI diagnosis. Importantly, our analysis extends beyond sparse regression to data-driven predictors like neural networks, enhancing the reliability of model-based deep learning. Our findings bridge the gap between established theory and the practical applicability of such methods. | Non-Asymptotic Uncertainty Quantification in High-Dimensional Learning | [
"Frederik Hoppe",
"Claudio Mayrink Verdun",
"Hannah Laus",
"Felix Krahmer",
"Holger Rauhut"
] | NeurIPS.cc/2024/Conference | 2407.13666 | [
"https://github.com/frederikhoppe/UQ_high_dim_learning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
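Schematically, the bias-corrected interval described above re-centers the naive asymptotic interval by an estimated bias mean and inflates its width by the estimated bias variance. The numbers and the simple Gaussian-plus-bias decomposition below are stand-ins, not the paper's estimators:

```python
import numpy as np

def corrected_interval(estimate, sigma, bias_mean, bias_var, z=1.96):
    # Naive asymptotic interval: estimate +/- z * sigma (ignores the bias).
    # Corrected interval: re-center by the estimated bias mean and widen by
    # the estimated bias variance.
    half_width = z * np.sqrt(sigma ** 2 + bias_var)
    center = estimate - bias_mean
    return center - half_width, center + half_width

# Stand-in values as might be estimated from training data:
print(corrected_interval(estimate=1.30, sigma=0.10, bias_mean=0.05, bias_var=0.02))
```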
null | https://openreview.net/forum?id=RPM7STrnVz | @inproceedings{
tian2024videotetris,
title={VideoTetris: Towards Compositional Text-to-Video Generation},
author={Ye Tian and Ling Yang and Haotian Yang and Yuan Gao and Yufan Deng and Xintao Wang and Zhaochen Yu and Xin Tao and Pengfei Wan and Di ZHANG and Bin CUI},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RPM7STrnVz}
} | Diffusion models have demonstrated great success in text-to-video (T2V) generation. However, existing methods may face challenges when handling complex (long) video generation scenarios that involve multiple objects or dynamic changes in object numbers. To address these limitations, we propose VideoTetris, a novel framework that enables compositional T2V generation. Specifically, we propose spatio-temporal compositional diffusion to precisely follow complex textual semantics by manipulating and composing the attention maps of denoising networks spatially and temporally. Moreover, we propose a new dynamic-aware data processing pipeline and a consistency regularization method to enhance the consistency of auto-regressive video generation. Extensive experiments demonstrate that our VideoTetris achieves impressive qualitative and quantitative results in compositional T2V generation. Code is available at: https://github.com/YangLing0818/VideoTetris | VideoTetris: Towards Compositional Text-to-Video Generation | [
"Ye Tian",
"Ling Yang",
"Haotian Yang",
"Yuan Gao",
"Yufan Deng",
"Xintao Wang",
"Zhaochen Yu",
"Xin Tao",
"Pengfei Wan",
"Di ZHANG",
"Bin CUI"
] | NeurIPS.cc/2024/Conference | 2406.04277 | [
"https://github.com/yangling0818/videotetris"
] | https://huggingface.co/papers/2406.04277 | 5 | 23 | 1 | 12 | [
"tyfeld/VideoTetris-long"
] | [] | [] | [
"tyfeld/VideoTetris-long"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=RPChapuXlC | @inproceedings{
huang2024lisa,
title={Lisa: Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning Attack},
author={Tiansheng Huang and Sihao Hu and Fatih Ilhan and Selim Furkan Tekin and Ling Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RPChapuXlC}
} | Recent studies show that Large Language Models (LLMs) with safety alignment can be jail-broken by fine-tuning on a dataset mixed with harmful data. For the first time in the literature, we show that the jail-break effect can be mitigated by separating two states in the fine-tuning stage to respectively optimize over the alignment and user datasets. Unfortunately, our subsequent study shows that this simple Bi-State Optimization (BSO) solution experiences convergence instability when the number of steps invested in its alignment state is too small, leading to downgraded alignment performance. By statistical analysis, we show that the *excess drift* towards the switching iterates of the two states could be a probable reason for the instability. To remedy this issue, we propose **L**azy(**i**) **s**afety **a**lignment (**Lisa**), which introduces a proximal term to constrain the drift of each state. Theoretically, the benefit of the proximal term is supported by a convergence analysis, wherein we show that a sufficiently large proximal factor is necessary to guarantee Lisa's convergence. Empirically, our results on four downstream fine-tuning tasks show that Lisa with a proximal term can significantly increase alignment performance while maintaining the LLM's accuracy on the user tasks. Code is available at https://github.com/git-disl/Lisa. | Lisa: Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning Attack | [
"Tiansheng Huang",
"Sihao Hu",
"Fatih Ilhan",
"Selim Furkan Tekin",
"Ling Liu"
] | NeurIPS.cc/2024/Conference | 2405.18641 | [
"https://github.com/git-disl/lisa"
] | https://huggingface.co/papers/2405.18641 | 1 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
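The proximal bi-state idea in the Lisa abstract can be sketched with toy quadratic losses standing in for the alignment and user objectives: training alternates between the two states, and each state's loss is augmented with a penalty pulling the weights toward the iterate at the last switch, constraining the excess drift:

```python
import torch

theta = torch.zeros(2, requires_grad=True)
opt = torch.optim.SGD([theta], lr=0.1)
align_target = torch.tensor([1.0, 0.0])  # stand-in optimum of the alignment loss
user_target = torch.tensor([0.0, 1.0])   # stand-in optimum of the user-task loss
rho = 1.0                                # proximal factor

for round_ in range(50):
    for target in (align_target, user_target):  # alternate the two states
        anchor = theta.detach().clone()         # iterate at the state switch
        for _ in range(5):
            opt.zero_grad()
            state_loss = ((theta - target) ** 2).sum()  # toy state objective
            prox = rho * ((theta - anchor) ** 2).sum()  # constrain the drift
            (state_loss + prox).backward()
            opt.step()

print(theta.detach())  # settles near a compromise instead of drifting each switch
```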
null | https://openreview.net/forum?id=RNbrIQ0se8 | @inproceedings{
shang2024adamshyper,
title={Ada-{MSH}yper: Adaptive Multi-Scale Hypergraph Transformer for Time Series Forecasting},
author={Zongjiang Shang and Ling Chen and Binqing Wu and Dongliang Cui},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RNbrIQ0se8}
} | Although transformer-based methods have achieved great success in multi-scale temporal pattern interaction modeling, two key challenges limit their further development: (1) Individual time points contain less semantic information, and leveraging attention to model pair-wise interactions may cause an information utilization bottleneck. (2) Multiple inherent temporal variations (e.g., rising, falling, and fluctuating) are entangled in temporal patterns. To this end, we propose Adaptive Multi-Scale Hypergraph Transformer (Ada-MSHyper) for time series forecasting. Specifically, an adaptive hypergraph learning module is designed to provide foundations for modeling group-wise interactions, then a multi-scale interaction module is introduced to promote more comprehensive pattern interactions at different scales. In addition, a node and hyperedge constraint mechanism is introduced to cluster nodes with similar semantic information and differentiate the temporal variations within each scale. Extensive experiments on 11 real-world datasets demonstrate that Ada-MSHyper achieves state-of-the-art performance, reducing prediction errors by an average of 4.56%, 10.38%, and 4.97% in MSE for long-range, short-range, and ultra-long-range time series forecasting, respectively. Code is available at https://github.com/shangzongjiang/Ada-MSHyper. | Ada-MSHyper: Adaptive Multi-Scale Hypergraph Transformer for Time Series Forecasting | [
"Zongjiang Shang",
"Ling Chen",
"Binqing Wu",
"Dongliang Cui"
] | NeurIPS.cc/2024/Conference | 2410.23992 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RMmgu49lwn | @inproceedings{
wang2024image,
title={Image Understanding Makes for A Good Tokenizer for Image Generation},
author={Luting Wang and Yang Zhao and Zijian Zhang and Jiashi Feng and Si Liu and Bingyi Kang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RMmgu49lwn}
} | Modern image generation (IG) models have been shown to capture rich semantics valuable for image understanding (IU) tasks. However, the potential of IU models to improve IG performance remains uncharted. We address this issue using a token-based IG framework, which relies on effective tokenizers to project images into token sequences. Currently, **pixel reconstruction** (e.g., VQGAN) dominates the training objective for image tokenizers. In contrast, our approach adopts the **feature reconstruction** objective, where tokenizers are trained by distilling knowledge from pretrained IU encoders. Comprehensive comparisons indicate that tokenizers with strong IU capabilities achieve superior IG performance across a variety of metrics, datasets, tasks, and proposal networks. Notably, VQ-KD CLIP achieves $4.10$ FID on ImageNet-1k (IN-1k). Visualization suggests that the superiority of VQ-KD can be partly attributed to the rich semantics within the VQ-KD codebook. We further introduce a straightforward pipeline to directly transform IU encoders into tokenizers, demonstrating exceptional effectiveness for IG tasks. These discoveries may energize further exploration into image tokenizer research and inspire the community to reassess the relationship between IU and IG. The code is released at https://github.com/magic-research/vector_quantization. | Image Understanding Makes for A Good Tokenizer for Image Generation | [
"Luting Wang",
"Yang Zhao",
"Zijian Zhang",
"Jiashi Feng",
"Si Liu",
"Bingyi Kang"
] | NeurIPS.cc/2024/Conference | 2411.04406 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RMfiqfWAWg | @inproceedings{
fan2024on,
title={On Giant's Shoulders: Effortless Weak to Strong by Dynamic Logits Fusion},
author={Chenghao Fan and Zhenyi Lu and Wei Wei and Jie Tian and Xiaoye Qu and Dangyang Chen and Yu Cheng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RMfiqfWAWg}
} | Efficient fine-tuning of large language models for task-specific applications is imperative, yet the vast number of parameters in these models makes their training increasingly challenging. Despite numerous proposals for effective methods, a substantial memory overhead remains for gradient computations during updates. *Can we fine-tune a series of task-specific small models and transfer their knowledge directly to a much larger model without additional training?* In this paper, we explore weak-to-strong specialization using logit arithmetic, facilitating a direct answer to this question. Existing weak-to-strong methods often employ a static knowledge transfer ratio and a single small model for transferring complex knowledge, which leads to suboptimal performance. To surmount these limitations, we propose a dynamic logit fusion approach that works with a series of task-specific small models, each specialized in a different task. This method adaptively allocates weights among these models at each decoding step, learning the weights through Kullback-Leibler divergence constrained optimization problems. We conduct extensive experiments across various benchmarks in both single-task and multi-task settings, achieving leading results. By transferring expertise from the 7B model to the 13B model, our method closes the performance gap by 96.4% in single-task scenarios and by 86.3% in multi-task scenarios compared to full fine-tuning of the 13B model. Notably, we even achieve superior performance on unseen tasks. Moreover, we further demonstrate that our method can effortlessly integrate in-context learning for single tasks and task arithmetic for multi-task scenarios. | On Giant's Shoulders: Effortless Weak to Strong by Dynamic Logits Fusion | [
"Chenghao Fan",
"Zhenyi Lu",
"Wei Wei",
"Jie Tian",
"Xiaoye Qu",
"Dangyang Chen",
"Yu Cheng"
] | NeurIPS.cc/2024/Conference | 2406.15480 | [
""
] | https://huggingface.co/papers/2406.15480 | 4 | 2 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
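One decoding step of the logit-arithmetic transfer described above can be sketched as follows: the large base model's logits are shifted by a weighted sum of (expert − small base) logit deltas from task-specific small models. The fixed weights here stand in for the per-step weights that the paper learns via KL-constrained optimization:

```python
import numpy as np

def fuse_logits(base_large, base_small, experts_small, weights):
    # Each (expert - small base) delta carries one task's knowledge; the
    # weighted sum shifts the large model's logits at this decoding step.
    delta = sum(w * (e - base_small) for w, e in zip(weights, experts_small))
    return base_large + delta

vocab = 5
rng = np.random.default_rng(0)
base_large = rng.normal(size=vocab)                   # 13B-style base model logits
base_small = rng.normal(size=vocab)                   # 7B-style base model logits
experts = [rng.normal(size=vocab) for _ in range(2)]  # fine-tuned 7B experts
fused = fuse_logits(base_large, base_small, experts, weights=[0.7, 0.3])
print(np.exp(fused) / np.exp(fused).sum())            # fused next-token distribution
```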
null | https://openreview.net/forum?id=RMdnTnffou | @inproceedings{
panousis2024coarsetofine,
title={Coarse-to-Fine Concept Bottleneck Models},
author={Konstantinos P. Panousis and Dino Ienco and Diego Marcos},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RMdnTnffou}
} | Deep learning algorithms have recently gained significant attention due to their impressive performance. However, their high complexity and un-interpretable mode of operation hinder their confident deployment in real-world safety-critical tasks. This work targets ante hoc interpretability, and specifically Concept Bottleneck Models (CBMs). Our goal is to design a framework that admits a highly interpretable decision making process with respect to human understandable concepts, on two levels of granularity. To this end, we propose a novel two-level concept discovery formulation leveraging: (i) recent advances in vision-language models, and (ii) an innovative formulation for coarse-to-fine concept selection via data-driven and sparsity-inducing Bayesian arguments. Within this framework, concept information does not solely rely on the similarity between the whole image and general unstructured concepts; instead, we introduce the notion of concept hierarchy to uncover and exploit more granular concept information residing in patch-specific regions of the image scene. As we experimentally show, the proposed construction not only outperforms recent CBM approaches, but also yields a principled framework towards interpretability. | Coarse-to-Fine Concept Bottleneck Models | [
"Konstantinos P. Panousis",
"Dino Ienco",
"Diego Marcos"
] | NeurIPS.cc/2024/Conference | 2310.02116 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RL4FXrGcTw | @inproceedings{
kr{\"a}mer2024gradients,
title={Gradients of Functions of Large Matrices},
author={Nicholas Kr{\"a}mer and Pablo Moreno-Mu{\~n}oz and Hrittik Roy and S{\o}ren Hauberg},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RL4FXrGcTw}
} | Tuning scientific and probabilistic machine learning models - for example, partial differential equations, Gaussian processes, or Bayesian neural networks - often relies on evaluating functions of matrices whose size grows with the data set or the number of parameters. While the state-of-the-art for _evaluating_ these quantities is almost always based on Lanczos and Arnoldi iterations, the present work is the first to explain how to _differentiate_ these workhorses of numerical linear algebra efficiently. To get there, we derive previously unknown adjoint systems for Lanczos and Arnoldi iterations, implement them in JAX, and show that the resulting code can compete with Diffrax when it comes to differentiating PDEs, with GPyTorch for selecting Gaussian process models, and beats standard factorisation methods for calibrating Bayesian neural networks. All this is achieved without any problem-specific code optimisation. Find the code at [link redacted] and install the library with *pip install [redacted]*. | Gradients of Functions of Large Matrices | [
"Nicholas Krämer",
"Pablo Moreno-Muñoz",
"Hrittik Roy",
"Søren Hauberg"
] | NeurIPS.cc/2024/Conference | 2405.17277 | [
"https://github.com/pnkraemer/experiments-lanczos-adjoints"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=RJG8ar4wHA | @inproceedings{
yang2024improving,
title={Improving Generalization of Dynamic Graph Learning via Environment Prompt},
author={Kuo Yang and Zhengyang Zhou and Qihe Huang and Limin Li and Yuxuan Liang and Yang Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RJG8ar4wHA}
} | The out-of-distribution (OOD) generalization issue is a well-known challenge in deep learning tasks. In dynamic graphs, the change of temporal environments is regarded as the main cause of data distribution shift. While numerous OOD studies focusing on environment factors have achieved remarkable performance, they still fail to systematically solve the two issues of environment inference and utilization. In this work, we propose a novel dynamic graph learning model named EpoD based on prompt learning and a structural causal model to comprehensively enhance both environment inference and utilization. Inspired by the superior performance of prompt learning in understanding underlying semantic and causal associations, we first design a self-prompted learning mechanism to infer unseen environment factors. We then rethink the role of the environment variable within the spatio-temporal structural causal model, and introduce a novel causal pathway where dynamic subgraphs serve as mediating variables. The extracted dynamic subgraph can effectively capture the data distribution shift by incorporating the inferred environment variables into the node-wise dependencies. Theoretical discussions and intuitive analysis support the generalizability and interpretability of EpoD. Extensive experiments on seven real-world datasets across domains showcase the superiority of EpoD against baselines, and toy-example experiments further verify the interpretability and rationality of our EpoD. | Improving Generalization of Dynamic Graph Learning via Environment Prompt | [
"Kuo Yang",
"Zhengyang Zhou",
"Qihe Huang",
"Limin Li",
"Yuxuan Liang",
"Yang Wang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RJEC9fZ9Ma | @inproceedings{
yan2024neural,
title={Neural Collapse To Multiple Centers For Imbalanced Data},
author={Hongren Yan and Yuhua Qian and Furong Peng and Jiachen Luo and zheqing zhu and Feijiang Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RJEC9fZ9Ma}
} | Neural Collapse (NC) is a recently discovered phenomenon in which the output features and the classifier weights of a neural network converge to optimal geometric structures at the Terminal Phase of Training (TPT) under various losses. However, the relationship between these optimal structures at TPT and the classification performance remains elusive, especially in imbalanced learning. Even though it has been noticed that fixing the classifier to an optimal structure (the classical simplex-ETF construction is sketched after this record) can mitigate the minority collapse problem, the performance is still not comparable to classical imbalanced learning methods with a learnable classifier. In this work, we find that the optimal structure can be designed to represent a better classification rule, and thus achieve better performance. In particular, we justify that, to achieve better classification, the features from the minor classes should align with more directions. This justification then yields a decision rule called the Generalized Classification Rule (GCR), and we also term these directions the centers of the classes. We then study NC under an MSE-type loss via the Unconstrained Features Model (UFM) framework, where (1) the features from a class tend to collapse to the mean of the corresponding centers of that class (named Neural Collapse to Multiple Centers (NCMC)) at the global optimum, and (2) the original classifier approximates a surrogate to GCR when NCMC occurs. Based on this analysis, we develop a strategy for determining the number of centers and propose a Cosine Loss function for the fixed classifier that induces NCMC. Our experiments show that the Cosine Loss can induce NCMC and yields long-tail classification performance comparable to classical imbalanced learning methods. | Neural Collapse To Multiple Centers For Imbalanced Data | [
"Hongren Yan",
"Yuhua Qian",
"Furong Peng",
"Jiachen Luo",
"zheqing zhu",
"Feijiang Li"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
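For background on the fixed optimal classifier mentioned in the abstract above, here is a minimal sketch (my own illustrative code, not the paper's) of the classical simplex equiangular tight frame (ETF): unit-norm class vectors with pairwise cosine similarity -1/(C-1), the geometry that balanced neural collapse converges to and that the paper generalises to multiple centers per class.

```python
import numpy as np

def simplex_etf(num_classes, dim, seed=0):
    # Rows form a simplex ETF: unit norm, pairwise cosine -1/(C-1).
    assert dim >= num_classes - 1
    rng = np.random.default_rng(seed)
    # Orthonormal columns spanning a random num_classes-dimensional subspace.
    U = np.linalg.qr(rng.standard_normal((dim, num_classes)))[0]
    C = num_classes
    W = np.sqrt(C / (C - 1)) * U @ (np.eye(C) - np.ones((C, C)) / C)
    return W.T  # shape (num_classes, dim), one fixed class vector per row

W = simplex_etf(10, 64)
print(np.round(W @ W.T, 3))  # ~1 on the diagonal, ~-1/9 off the diagonal
```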
null | https://openreview.net/forum?id=RIfgKCknTu | @inproceedings{
tack2024online,
title={Online Adaptation of Language Models with a Memory of Amortized Contexts},
author={Jihoon Tack and Jaehyung Kim and Eric Mitchell and Jinwoo Shin and Yee Whye Teh and Jonathan Richard Schwarz},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RIfgKCknTu}
} | Due to the rapid generation and dissemination of information, large language models (LLMs) quickly run out of date despite enormous development costs. To address the crucial need to keep models updated, online learning has emerged as a critical tool when utilizing LLMs for real-world applications. However, given the ever-expanding corpus of unseen documents and the large parameter space of modern LLMs, efficient adaptation is essential. To address these challenges, we propose MAC (Memory of Amortized Contexts), an efficient and effective online adaptation framework for LLMs with strong knowledge retention. We propose a feature extraction and memory-augmentation approach to compress and extract information from new documents into compact modulations stored in a memory bank. When answering questions, our model attends to and extracts relevant knowledge from this memory bank. To learn informative modulations in an efficient manner, we utilize amortization-based meta-learning, which substitutes an otherwise required optimization process with a single forward pass of the encoder. Subsequently, we learn to choose from and aggregate selected documents into a single modulation by conditioning on the question, allowing us to adapt a frozen language model during test time without requiring further gradient updates. Our experiments demonstrate the superiority of MAC in multiple aspects, including online adaptation performance, time, and memory efficiency. In addition, we show how MAC can be combined with and improve the performance of popular alternatives such as retrieval-augmented generation (RAG). Code is available at: https://github.com/jihoontack/MAC. | Online Adaptation of Language Models with a Memory of Amortized Contexts | [
"Jihoon Tack",
"Jaehyung Kim",
"Eric Mitchell",
"Jinwoo Shin",
"Yee Whye Teh",
"Jonathan Richard Schwarz"
] | NeurIPS.cc/2024/Conference | 2403.04317 | [
"https://github.com/jihoontack/mac"
] | https://huggingface.co/papers/2403.04317 | 0 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=RHQbxlhzhm | @inproceedings{
liu2024fastsurvival,
title={FastSurvival: Hidden Computational Blessings in Training Cox Proportional Hazards Models},
author={Jiachang Liu and Rui Zhang and Cynthia Rudin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RHQbxlhzhm}
} | Survival analysis is an important research topic with applications in healthcare, business, and manufacturing. One essential tool in this area is the Cox proportional hazards (CPH) model, which is widely used for its interpretability, flexibility, and predictive performance (its training objective is sketched after this record). However, for modern data science challenges such as high dimensionality (both $n$ and $p$) and high feature correlations, current algorithms to train the CPH model have drawbacks, preventing us from using the CPH model at its full potential. The root cause is that the current algorithms, based on the Newton method, have trouble converging due to vanishing second-order derivatives when outside the local region of the minimizer. To circumvent this problem, we propose new optimization methods by constructing and minimizing surrogate functions that exploit hidden mathematical structures of the CPH model. Our new methods are easy to implement and ensure monotonic loss decrease and global convergence. Empirically, we verify the computational efficiency of our methods. As a direct application, we show how our optimization methods can be used to solve the cardinality-constrained CPH problem, producing very sparse high-quality models that were not previously practical to construct. We list several extensions that our breakthrough enables, including optimization opportunities, theoretical questions on CPH's mathematical structure, and other CPH-related applications. | FastSurvival: Hidden Computational Blessings in Training Cox Proportional Hazards Models | [
"Jiachang Liu",
"Rui Zhang",
"Cynthia Rudin"
] | NeurIPS.cc/2024/Conference | 2410.19081 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
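As context for the objective referenced above, a minimal sketch (naming and simplifications mine, not the paper's code) of the negative log partial likelihood of a Cox proportional hazards model, assuming no tied event times; the paper's optimizers minimise surrogates of this kind of objective.

```python
import numpy as np

def cox_neg_log_partial_likelihood(beta, X, time, event):
    # X: (n, p) covariates; time: (n,) follow-up times;
    # event: (n,) 1 if the event was observed, 0 if censored.
    order = np.argsort(-time)            # sort by descending time
    X, event = X[order], event[order]
    eta = X @ beta                       # linear predictors
    # Stable log of the cumulative risk-set sums sum_{t_j >= t_i} exp(eta_j).
    log_risk = np.logaddexp.accumulate(eta)
    return -np.sum(event * (eta - log_risk))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
time, event = rng.exponential(size=100), rng.integers(0, 2, size=100)
print(cox_neg_log_partial_likelihood(np.zeros(5), X, time, event))
```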
null | https://openreview.net/forum?id=RH7tfqhiZY | @inproceedings{
mishra2024youdream,
title={YouDream: Generating Anatomically Controllable Consistent Text-to-3D Animals},
author={Sandeep Mishra and Oindrila Saha and Alan Bovik},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RH7tfqhiZY}
} | 3D generation guided by text-to-image diffusion models enables the creation of visually compelling assets. However, previous methods explore generation based only on images or text. The boundaries of creativity are limited by what can be expressed through words or the images that can be sourced. We present YouDream, a method to generate high-quality, anatomically controllable 3D animals. YouDream is guided using a text-to-image diffusion model controlled by 2D views of a 3D pose prior. Our method is capable of generating novel imaginary animals that previous text-to-3D generative methods are unable to create. Additionally, our method can preserve anatomical consistency in the generated animals, an area where prior approaches often struggle. Moreover, we design a fully automated pipeline for generating commonly observed animals. To circumvent the need for human intervention to create a 3D pose, we propose a multi-agent LLM that adapts poses from a limited library of animal 3D poses to represent the desired animal. A user study conducted on the outcomes of YouDream demonstrates the preference for the animal models generated by our method over others. Visualizations and code are available at https://youdream3d.github.io/. | YouDream: Generating Anatomically Controllable Consistent Text-to-3D Animals | [
"Sandeep Mishra",
"Oindrila Saha",
"Alan Bovik"
] | NeurIPS.cc/2024/Conference | 2406.16273 | [
""
] | https://huggingface.co/papers/2406.16273 | 2 | 40 | 1 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=REVdYKGcfb | @inproceedings{
qin2024what,
title={What Factors Affect Multi-Modal In-Context Learning? An In-Depth Exploration},
author={Libo Qin and Qiguang Chen and Hao Fei and Zhi Chen and Min Li and Wanxiang Che},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=REVdYKGcfb}
} | Recently, Multi-Modal In-Context Learning (MM-ICL) has advanced rapidly and achieved notable success, attaining superior performance across various tasks without requiring additional parameter tuning. However, the underlying rules for the effectiveness of MM-ICL remain under-explored. To fill this gap, this work aims to investigate the research question: "_What factors affect the performance of MM-ICL?_" To this end, we conduct extensive experiments on the three core steps of MM-ICL, including demonstration retrieval, demonstration ordering, and prompt construction, using 6 vision large language models and 20 strategies. Our findings highlight (1) the necessity of a multi-modal retriever for demonstration retrieval, (2) the importance of intra-demonstration ordering over inter-demonstration ordering, and (3) the enhancement of task comprehension through introductory instructions in prompts. We hope this study can serve as a foundational guide for optimizing MM-ICL strategies in future research. | What Factors Affect Multi-Modal In-Context Learning? An In-Depth Exploration | [
"Libo Qin",
"Qiguang Chen",
"Hao Fei",
"Zhi Chen",
"Min Li",
"Wanxiang Che"
] | NeurIPS.cc/2024/Conference | 2410.20482 | [
""
] | https://huggingface.co/papers/2410.20482 | 1 | 1 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=RERls4Opnm | @inproceedings{
brown2024sampleefficient,
title={Sample-efficient Bayesian Optimisation Using Known Invariances},
author={Theodore Brown and Alexandru Cioba and Ilija Bogunovic},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RERls4Opnm}
} | Bayesian optimisation (BO) is a powerful framework for global optimisation of costly functions, using predictions from Gaussian process models (GPs). In this work, we apply BO to functions that exhibit invariance to a known group of transformations. We show that vanilla and constrained BO algorithms are inefficient when optimising such invariant objectives, and provide a method for incorporating group invariances into the kernel of the GP to produce invariance-aware algorithms that achieve significant improvements in sample efficiency (one standard such construction is sketched after this record). We derive a bound on the maximum information gain of these invariant kernels, and provide novel upper and lower bounds on the number of observations required for invariance-aware BO algorithms to achieve $\epsilon$-optimality. We demonstrate our method's improved performance on a range of synthetic invariant and quasi-invariant functions. We also apply our method in the case where only some of the invariance is incorporated into the kernel, and find that these kernels achieve similar gains in sample efficiency at significantly reduced computational cost. Finally, we use invariant BO to design a current drive system for a nuclear fusion reactor, finding a high-performance solution where non-invariant methods failed. | Sample-efficient Bayesian Optimisation Using Known Invariances | [
"Theodore Brown",
"Alexandru Cioba",
"Ilija Bogunovic"
] | NeurIPS.cc/2024/Conference | 2410.16972 | [
"https://github.com/theo-brown/invariantkernels"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
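One standard way to realise the invariant kernels described above is to average a base kernel over the known (finite) group of transformations; the sketch below is my own illustration of that construction, not the authors' implementation.

```python
import numpy as np

def invariant_rbf(x, y, group, lengthscale=1.0):
    # Average an RBF base kernel over all transformations g in a finite
    # group (given as callables); the resulting GP prior then only contains
    # functions that are invariant under the group action.
    vals = [np.exp(-np.sum((x - g(y)) ** 2) / (2.0 * lengthscale**2))
            for g in group]
    return float(np.mean(vals))

# Example: invariance to sign flips of a 1-D input.
group = [lambda y: y, lambda y: -y]
print(invariant_rbf(np.array([0.3]), np.array([-0.3]), group))
print(invariant_rbf(np.array([0.3]), np.array([0.3]), group))  # same value
```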
null | https://openreview.net/forum?id=REIK4SZMJt | @inproceedings{
rooke2024trading,
title={Trading Place for Space: Increasing Location Resolution Reduces Contextual Capacity in Hippocampal Codes},
author={Spencer Rooke and Zhaoze Wang and Ronald W Di Tullio and Vijay Balasubramanian},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=REIK4SZMJt}
} | Many animals learn cognitive maps of their environment - a simultaneous representation of context, experience, and position. Place cells in the hippocampus, named for their explicit encoding of position, are believed to be a neural substrate of these maps, with place cell "remapping" explaining how this system can represent different contexts. Briefly, place cells alter their firing properties, or "remap", in response to changes in experiential or sensory cues. Substantial sensory changes, produced, e.g., by moving between environments, cause large subpopulations of place cells to change their tuning entirely. While many studies have looked at the physiological basis of remapping, we lack explicit calculations of how the contextual capacity of the place cell system changes as a function of place field firing properties. Here, we propose a geometric approach to understanding population level activity of place cells. Using known firing field statistics, we investigate how changes to place cell firing properties affect the distances between representations of different environments within firing rate space. Using this approach, we find that the number of contexts storable by the hippocampus grows exponentially with the number of place cells, and calculate this exponent for environments of different sizes. We identify a fundamental trade-off between high resolution encoding of position and the number of storable contexts. This trade-off is tuned by place cell width, which might explain the change in firing field scale along the dorsal-ventral axis of the hippocampus. We demonstrate that clustering of place cells near likely points of confusion, such as boundaries, increases the contextual capacity of the place system within our framework and conclude by discussing how our geometric approach could be extended to include other cell types and abstract spaces. | Trading Place for Space: Increasing Location Resolution Reduces Contextual Capacity in Hippocampal Codes | [
"Spencer Rooke",
"Zhaoze Wang",
"Ronald W Di Tullio",
"Vijay Balasubramanian"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=RE7wPI4vfT | @inproceedings{
wu2024your,
title={Your Diffusion Model is Secretly a Noise Classifier and Benefits from Contrastive Training},
author={Yunshu Wu and Yingtao Luo and Xianghao Kong and Evangelos E. Papalexakis and Greg Ver Steeg},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RE7wPI4vfT}
} | Diffusion models learn to denoise data and the trained denoiser is then used to generate new samples from the data distribution.
In this paper, we revisit the diffusion sampling process and identify a fundamental cause of sample quality degradation: the denoiser is poorly estimated in regions that are far Outside Of the training Distribution (OOD), and the sampling process inevitably evaluates in these OOD regions.
This can become problematic for all sampling methods, especially when we move to parallel sampling, which requires us to initialize and update the entire sample trajectory of dynamics in parallel, leading to many OOD evaluations.
To address this problem, we introduce a new self-supervised training objective that differentiates the levels of noise added to a sample, leading to improved OOD denoising performance. The approach is based on our observation that diffusion models implicitly define a log-likelihood ratio that distinguishes distributions with different amounts of noise, and this expression depends on denoiser performance outside the standard training distribution.
We show by diverse experiments that the proposed contrastive diffusion training is effective for both sequential and parallel settings, and it improves the performance and speed of parallel samplers significantly. Code for our paper can be found at https://github.com/yunshuwu/ContrastiveDiffusionLoss. | Your Diffusion Model is Secretly a Noise Classifier and Benefits from Contrastive Training | [
"Yunshu Wu",
"Yingtao Luo",
"Xianghao Kong",
"Evangelos E. Papalexakis",
"Greg Ver Steeg"
] | NeurIPS.cc/2024/Conference | 2407.08946 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RE5LSV8QYH | @inproceedings{
richardson2024qualitative,
title={Qualitative Mechanism Independence},
author={Oliver Ethan Richardson and Spencer J Peters and Joseph Halpern},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RE5LSV8QYH}
} | We define what it means for a joint probability distribution to be compatible with a set of independent causal mechanisms at a qualitative level—or, more precisely, with a directed hypergraph $\mathcal A$, which is the qualitative structure of a probabilistic dependency graph (PDG). When $\mathcal A$ represents a qualitative Bayesian network, QIM-compatibility with $\mathcal A$ reduces to satisfying the appropriate conditional independencies. But giving semantics to hypergraphs using QIM-compatibility lets us do much more. For one thing, we can capture functional dependencies. For another, we can capture important aspects of causality using compatibility: we can use compatibility to understand cyclic causal graphs, and to demonstrate structural compatibility, we must essentially produce a causal model. Finally, compatibility has deep connections to information theory. Applying compatibility to cyclic structures helps to clarify a longstanding conceptual issue in information theory. | Qualitative Mechanism Independence | [
"Oliver Ethan Richardson",
"Spencer J Peters",
"Joseph Halpern"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RDsDvSHGkA | @inproceedings{
melnychuk2024quantifying,
title={Quantifying Aleatoric Uncertainty of the Treatment Effect: A Novel Orthogonal Learner},
author={Valentyn Melnychuk and Stefan Feuerriegel and Mihaela van der Schaar},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RDsDvSHGkA}
} | Estimating causal quantities from observational data is crucial for understanding the safety and effectiveness of medical treatments. However, to make reliable inferences, medical practitioners require not only estimating averaged causal quantities, such as the conditional average treatment effect, but also understanding the randomness of the treatment effect as a random variable. This randomness is referred to as aleatoric uncertainty and is necessary for understanding the probability of benefit from treatment or quantiles of the treatment effect. Yet, the aleatoric uncertainty of the treatment effect has received surprisingly little attention in the causal machine learning community. To fill this gap, we aim to quantify the aleatoric uncertainty of the treatment effect at the covariate-conditional level, namely, the conditional distribution of the treatment effect (CDTE). Unlike average causal quantities, the CDTE is not point identifiable without strong additional assumptions. As a remedy, we employ partial identification to obtain sharp bounds on the CDTE and thereby quantify the aleatoric uncertainty of the treatment effect. We then develop a novel, orthogonal learner for the bounds on the CDTE, which we call AU-learner. We further show that our AU-learner has several strengths in that it satisfies Neyman-orthogonality and is doubly robust. Finally, we propose a fully-parametric deep learning instantiation of our AU-learner. | Quantifying Aleatoric Uncertainty of the Treatment Effect: A Novel Orthogonal Learner | [
"Valentyn Melnychuk",
"Stefan Feuerriegel",
"Mihaela van der Schaar"
] | NeurIPS.cc/2024/Conference | 2411.03387 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=RCO9fRP8AJ | @inproceedings{
yang2024imovd,
title={Im{OV}3D: Learning Open Vocabulary Point Clouds 3D Object Detection from Only 2D Images},
author={Timing Yang and Yuanliang Ju and Li Yi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RCO9fRP8AJ}
} | Open-vocabulary 3D object detection (OV-3Det) aims to generalize beyond the limited number of base categories labeled during the training phase. The biggest bottleneck is the scarcity of annotated 3D data, whereas 2D image datasets are abundant and richly annotated. Consequently, it is intuitive to leverage the wealth of annotations in 2D images to alleviate the inherent data scarcity in OV-3Det. In this paper, we push the task setup to its limits by exploring the potential of using solely 2D images to learn OV-3Det. The major challenge for this setup is the modality gap between training images and testing point clouds, which prevents effective integration of 2D knowledge into OV-3Det. To address this challenge, we propose a novel framework, ImOV3D, to leverage pseudo-multimodal representations containing both images and point clouds (PC) to close the modality gap. The key to ImOV3D lies in flexible modality conversion, where 2D images can be lifted into 3D using monocular depth estimation and can also be derived from 3D scenes through rendering. This allows unifying both training images and testing point clouds into a common image-PC representation, encompassing a wealth of 2D semantic information and also incorporating the depth and structural characteristics of 3D spatial data. We carefully conduct such conversion to minimize the domain gap between training and test cases. Extensive experiments on two benchmark datasets, SUNRGBD and ScanNet, show that ImOV3D significantly outperforms existing methods, even in the absence of ground truth 3D training data. With the inclusion of a minimal amount of real 3D data for fine-tuning, the performance also significantly surpasses the previous state of the art. Code and pre-trained models are released at https://github.com/yangtiming/ImOV3D. | ImOV3D: Learning Open Vocabulary Point Clouds 3D Object Detection from Only 2D Images | [
"Timing Yang",
"Yuanliang Ju",
"Li Yi"
] | NeurIPS.cc/2024/Conference | [
"https://github.com/yangtiming/imov3d"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RB1F2h5YEx | @inproceedings{
chung2024parseval,
title={Parseval Regularization for Continual Reinforcement Learning},
author={Wesley Chung and Lynn Cherif and Doina Precup and David Meger},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RB1F2h5YEx}
} | Plasticity loss, trainability loss, and primacy bias have been identified as issues arising when training deep neural networks on sequences of tasks, all referring to the increased difficulty of training on new tasks.
We propose to use Parseval regularization, which maintains orthogonality of weight matrices, to preserve useful optimization properties and improve training in a continual reinforcement learning setting.
We show that it provides significant benefits to RL agents on a suite of gridworld, CARL and MetaWorld tasks.
We conduct comprehensive ablations to identify the source of its benefits and investigate certain metrics associated with network trainability, including weight matrix rank, weight norms, and policy entropy (a minimal sketch of a Parseval-style penalty follows this record). | Parseval Regularization for Continual Reinforcement Learning | [
"Wesley Chung",
"Lynn Cherif",
"Doina Precup",
"David Meger"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
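A minimal sketch (my own, with illustrative constants; the paper's exact formulation may differ) of a Parseval-style orthogonality penalty of the kind referenced above: gradient steps on beta * ||W W^T - I||_F^2 nudge the rows of a weight matrix toward orthonormality and can be interleaved with the usual RL updates.

```python
import numpy as np

def parseval_step(W, beta):
    # One gradient step on beta * ||W W^T - I||_F^2; the gradient of this
    # penalty is 4 * beta * (W W^T - I) W.
    residual = W @ W.T - np.eye(W.shape[0])
    return W - 4.0 * beta * residual @ W

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128)) / np.sqrt(128)
for _ in range(100):
    W = parseval_step(W, beta=0.05)
print(np.linalg.norm(W @ W.T - np.eye(64)))  # near zero: rows ~ orthonormal
```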
null | https://openreview.net/forum?id=RA6rzOJ2zI | @inproceedings{
ullah2024navigating,
title={Navigating Extremes: Dynamic Sparsity in Large Output Spaces},
author={Nasib Ullah and Erik Schultheis and Mike Lasby and Yani Ioannou and Rohit Babbar},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=RA6rzOJ2zI}
} | In recent years, Dynamic Sparse Training (DST) has emerged as an alternative to post-training pruning for generating efficient models. In principle, DST allows for a much more memory-efficient training process, as it maintains sparsity throughout the entire training run. However, current DST implementations fail to capitalize on this: because sparse matrix multiplication is much less efficient than dense matrix multiplication on GPUs, most implementations simulate sparsity by masking weights. In this paper, we leverage recent advances in semi-structured sparse training to apply DST in the domain of classification with large output spaces, where memory efficiency is paramount. With a label space of possibly millions of candidates, the classification layer alone will consume several gigabytes of memory. Switching from a dense to a fixed fan-in sparse layer updated with sparse evolutionary training (SET), however, severely hampers training convergence, especially at the largest label spaces. We find that the gradients fed back from the classifier into the text encoder make it much more difficult to learn good input representations, despite using a dense encoder. By employing an intermediate layer or adding an auxiliary training objective, we recover most of the generalisation performance of the dense model. Overall, we demonstrate the applicability of DST in a challenging domain, characterized by a highly skewed label distribution, that lies outside of DST's typical benchmark datasets, and enable end-to-end training with millions of labels on commodity hardware. | Navigating Extremes: Dynamic Sparsity in Large Output Spaces | [
"Nasib Ullah",
"Erik Schultheis",
"Mike Lasby",
"Yani Ioannou",
"Rohit Babbar"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=R8znYRjxj3 | @inproceedings{
maillard2024bayesoptimal,
title={Bayes-optimal learning of an extensive-width neural network from quadratically many samples},
author={Antoine Maillard and Emanuele Troiani and Simon Martin and Florent Krzakala and Lenka Zdeborova},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=R8znYRjxj3}
} | We consider the problem of learning a target function corresponding to a single hidden layer neural network, with a quadratic activation function after the first layer, and random weights. We consider the asymptotic limit where the input dimension and the network width are proportionally large. Recent work [Cui et al., 2023] established that linear regression provides Bayes-optimal test error to learn such a function when the number of available samples is only linear in the dimension. That work stressed the open challenge of theoretically analyzing the optimal test error in the more interesting regime where the number of samples is quadratic in the dimension. In this paper, we solve this challenge for quadratic activations and derive a closed-form expression for the Bayes-optimal test error. We also provide an algorithm, that we call GAMP-RIE, which combines approximate message passing with rotationally invariant matrix denoising, and that asymptotically achieves the optimal performance. Technically, our result is enabled by establishing a link with recent works on optimal denoising of extensive-rank matrices and on the ellipsoid fitting problem. We further show empirically that, in the absence of noise, randomly-initialized gradient descent seems to sample the space of weights, leading to zero training loss, and averaging over initialization leads to a test error equal to the Bayes-optimal one. | Bayes-optimal learning of an extensive-width neural network from quadratically many samples | [
"Antoine Maillard",
"Emanuele Troiani",
"Simon Martin",
"Florent Krzakala",
"Lenka Zdeborova"
] | NeurIPS.cc/2024/Conference | 2408.03733 | [
"https://github.com/SPOC-group/ExtensiveWidthQuadraticSamples"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=R8mfn3rHd5 | @inproceedings{
zhang2024realcompo,
title={RealCompo: Balancing Realism and Compositionality Improves Text-to-Image Diffusion Models},
author={Xinchen Zhang and Ling Yang and YaQi Cai and Zhaochen Yu and Kai-Ni Wang and xie jiake and Ye Tian and Minkai Xu and Yong Tang and Yujiu Yang and Bin CUI},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=R8mfn3rHd5}
} | Diffusion models have achieved remarkable advancements in text-to-image generation. However, existing models still have many difficulties when faced with multiple-object compositional generation. In this paper, we propose ***RealCompo***, a new *training-free* and *transfer-friendly* text-to-image generation framework, which aims to leverage the respective advantages of text-to-image models and spatial-aware image diffusion models (e.g., layout, keypoints, and segmentation maps) to enhance both realism and compositionality of the generated images. An intuitive and novel *balancer* is proposed to dynamically balance the strengths of the two models in the denoising process, allowing plug-and-play use of any model without extra training. Extensive experiments show that our RealCompo consistently outperforms state-of-the-art text-to-image models and spatial-aware image diffusion models in multiple-object compositional generation while keeping satisfactory realism and compositionality of the generated images. Notably, our RealCompo can be seamlessly extended with a wide range of spatial-aware image diffusion models and stylized diffusion models. Code is available at: https://github.com/YangLing0818/RealCompo | RealCompo: Balancing Realism and Compositionality Improves Text-to-Image Diffusion Models | [
"Xinchen Zhang",
"Ling Yang",
"YaQi Cai",
"Zhaochen Yu",
"Kai-Ni Wang",
"xie jiake",
"Ye Tian",
"Minkai Xu",
"Yong Tang",
"Yujiu Yang",
"Bin CUI"
] | NeurIPS.cc/2024/Conference | 2402.12908 | [
"https://github.com/yangling0818/realcompo"
] | https://huggingface.co/papers/2402.12908 | 5 | 9 | 1 | 10 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=R8SolCx62K | @inproceedings{
he2024exploitation,
title={Exploitation of a Latent Mechanism in Graph Contrastive Learning: Representation Scattering},
author={Dongxiao He and Lianze Shan and Jitao Zhao and Hengrui Zhang and Zhen Wang and Weixiong Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=R8SolCx62K}
} | Graph Contrastive Learning (GCL) has emerged as a powerful approach for generating graph representations without the need for manual annotation. Most advanced GCL methods fall into three main frameworks: node discrimination, group discrimination, and bootstrapping schemes, all of which achieve comparable performance. However, the underlying mechanisms and factors that contribute to their effectiveness are not yet fully understood. In this paper, we revisit these frameworks and reveal a common mechanism—representation scattering—that significantly enhances their performance. Our discovery highlights an essential feature of GCL and unifies these seemingly disparate methods under the concept of representation scattering. To leverage this insight, we introduce Scattering Graph Representation Learning (SGRL), a novel framework that incorporates a new representation scattering mechanism designed to enhance representation diversity through a center-away strategy. Additionally, considering the interconnected nature of graphs, we develop a topology-based constraint mechanism that integrates graph structural properties with representation scattering to prevent excessive scattering. We extensively evaluate SGRL across various downstream tasks on benchmark datasets, demonstrating its efficacy and superiority over existing GCL methods. Our findings underscore the significance of representation scattering in GCL and provide a structured framework for harnessing this mechanism to advance graph representation learning. The code of SGRL is at https://github.com/hedongxiao-tju/SGRL. | Exploitation of a Latent Mechanism in Graph Contrastive Learning: Representation Scattering | [
"Dongxiao He",
"Lianze Shan",
"Jitao Zhao",
"Hengrui Zhang",
"Zhen Wang",
"Weixiong Zhang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=R7w68Z5iqf | @inproceedings{
guo2024parameter,
title={Parameter Efficient Adaptation for Image Restoration with Heterogeneous Mixture-of-Experts},
author={Hang Guo and Tao Dai and Yuanchao Bai and Bin Chen and Xudong Ren and Zexuan Zhu and Shu-Tao Xia},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=R7w68Z5iqf}
} | Designing single-task image restoration models for specific degradations has seen great success in recent years. To achieve generalized image restoration, all-in-one methods have recently been proposed and shown potential for multiple restoration tasks using one single model. Despite the promising results, the existing all-in-one paradigm still suffers from high computational costs as well as limited generalization on unseen degradations. In this work, we introduce an alternative solution to improve the generalization of image restoration models. Drawing inspiration from recent advancements in Parameter Efficient Transfer Learning (PETL), we aim to tune only a small number of parameters to adapt pre-trained restoration models to various tasks. However, current PETL methods fail to generalize across varied restoration tasks due to their homogeneous representation nature. To this end, we propose AdaptIR, a Mixture-of-Experts (MoE) with an orthogonal multi-branch design to capture local spatial, global spatial, and channel representation bases, followed by adaptive base combination to obtain heterogeneous representations for different degradations. Extensive experiments demonstrate that our AdaptIR achieves stable performance on single-degradation tasks, and excels in hybrid-degradation tasks, while training only 0.6% of the parameters for 8 hours. | Parameter Efficient Adaptation for Image Restoration with Heterogeneous Mixture-of-Experts | [
"Hang Guo",
"Tao Dai",
"Yuanchao Bai",
"Bin Chen",
"Xudong Ren",
"Zexuan Zhu",
"Shu-Tao Xia"
] | NeurIPS.cc/2024/Conference | 2312.08881 | [
"https://github.com/csguoh/adaptir"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=R6N9AGyz13 | @inproceedings{
wang2024parallelizing,
title={Parallelizing Model-based Reinforcement Learning Over the Sequence Length},
author={ZiRui Wang and Yue DENG and Junfeng Long and Yin Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=R6N9AGyz13}
} | Recently, Model-based Reinforcement Learning (MBRL) methods have demonstrated stunning sample efficiency in various RL domains.
However, achieving this extraordinary sample efficiency comes with additional training costs in terms of computation, memory, and training time.
To address these challenges, we propose the **Pa**rallelized **Mo**del-based **R**einforcement **L**earning (**PaMoRL**) framework.
PaMoRL introduces two novel techniques: the **P**arallel **W**orld **M**odel (**PWM**) and the **P**arallelized **E**ligibility **T**race **E**stimation (**PETE**) to parallelize both model learning and policy learning stages of current MBRL methods over the sequence length.
Our PaMoRL framework is hardware-efficient and stable, and it can be applied to various tasks with discrete or continuous action spaces using a single set of hyperparameters.
The empirical results demonstrate that the PWM and PETE within PaMoRL significantly increase training speed without sacrificing inference efficiency.
In terms of sample efficiency, PaMoRL maintains an MBRL-level sample efficiency that outperforms other no-look-ahead MBRL methods and model-free RL methods, and it even exceeds the performance of planning-based MBRL methods and methods with larger networks in certain tasks (the recurrence that enables such parallelisation is sketched after this record). | Parallelizing Model-based Reinforcement Learning Over the Sequence Length | [
"ZiRui Wang",
"Yue DENG",
"Junfeng Long",
"Yin Zhang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
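The parallelisation over sequence length referenced above rests on a simple structural fact, sketched here with my own naming (this is not the paper's PETE algorithm): λ-return / eligibility-trace style targets obey a first-order linear recurrence, which can be evaluated by an associative scan in logarithmic depth instead of the sequential loop shown below.

```python
import numpy as np

def lambda_returns(r, V, gamma=0.99, lam=0.95):
    # Backward recurrence G_t = a_t + b * G_{t+1} with
    #   a_t = r_t + gamma * (1 - lam) * V[t + 1]  and  b = gamma * lam.
    # Any such linear recurrence is an associative-scan candidate, which
    # is what makes parallelisation over the sequence length possible.
    T = len(r)
    a = r + gamma * (1.0 - lam) * V[1:]
    G, acc = np.empty(T), V[-1]          # bootstrap from the final value
    for t in reversed(range(T)):         # sequential reference version
        acc = a[t] + gamma * lam * acc
        G[t] = acc
    return G

rng = np.random.default_rng(0)
print(lambda_returns(rng.random(8), rng.random(9)))  # 8 rewards, 9 values
```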
null | https://openreview.net/forum?id=R6FOuWv5MD | @inproceedings{
handina2024understanding,
title={Understanding Model Selection for Learning in Strategic Environments},
author={Tinashe Handina and Eric Mazumdar},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=R6FOuWv5MD}
} | The deployment of ever-larger machine learning models reflects a growing consensus that the more expressive the model class one optimizes over—and the more data one has access to—the more one can improve performance. As models get deployed in a variety of real-world scenarios, they inevitably face strategic environments. In this work, we consider the natural question of how the interplay of models and strategic interactions affects the relationship between performance at equilibrium and the expressivity of model classes. We find that strategic interactions can break the conventional view—meaning that performance does not necessarily monotonically improve as model classes get larger or more expressive (even with infinite data). We show the implications of this result in several contexts including strategic regression, strategic classification, and multi-agent reinforcement learning. In particular, we show that each of these settings admits a Braess' paradox-like phenomenon in which optimizing over less expressive model classes allows one to achieve strictly better equilibrium outcomes. Motivated by these examples, we then propose a new paradigm for model selection in games wherein an agent seeks to choose amongst different model classes to use as their action set in a game. | Understanding Model Selection for Learning in Strategic Environments | [
"Tinashe Handina",
"Eric Mazumdar"
] | NeurIPS.cc/2024/Conference | 2402.07588 | [
""
] | https://huggingface.co/papers/2402.07588 | 1 | 1 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=R4IBZrSF5d | @inproceedings{
cui2024virtual,
title={Virtual Scanning: Unsupervised Non-line-of-sight Imaging from Irregularly Undersampled Transients},
author={Xingyu Cui and Huanjing Yue and Song Li and Xiangjun Yin and Yusen Hou and Yun Meng and Kai Zou and Xiaolong Hu and Jingyu Yang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=R4IBZrSF5d}
} | Non-line-of-sight (NLOS) imaging allows for seeing hidden scenes around corners through active sensing.
Most previous algorithms for NLOS reconstruction require dense transients acquired through regular scans over a large relay surface, which limits their applicability in realistic scenarios with irregular relay surfaces.
In this paper, we propose an unsupervised learning-based framework for NLOS imaging from irregularly undersampled transients (IUT).
Our method learns implicit priors from noisy irregularly undersampled transients without requiring paired data, which is difficult and expensive to acquire and align.
To overcome the ambiguity of the measurement consistency constraint in inferring the albedo volume, we design a virtual scanning process that enables the network to learn within both range and null spaces for high-quality reconstruction.
We devise a physics-guided SURE-based denoiser to enhance robustness to ubiquitous noise in low-photon imaging conditions.
Extensive experiments on both simulated and real-world data validate the performance and generalization of our method.
Compared with the state-of-the-art (SOTA) method, our method achieves higher fidelity, greater robustness, and inference times that are faster by orders of magnitude.
The code and model are available at https://github.com/XingyuCuii/Virtual-Scanning-NLOS. | Virtual Scanning: Unsupervised Non-line-of-sight Imaging from Irregularly Undersampled Transients | [
"Xingyu Cui",
"Huanjing Yue",
"Song Li",
"Xiangjun Yin",
"Yusen Hou",
"Yun Meng",
"Kai Zou",
"Xiaolong Hu",
"Jingyu Yang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=R46HGlIjcG | @inproceedings{
wang2024localizing,
title={Localizing Memorization in {SSL} Vision Encoders},
author={Wenhao Wang and Adam Dziedzic and Michael Backes and Franziska Boenisch},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=R46HGlIjcG}
} | Recent work on studying memorization in self-supervised learning (SSL) suggests that even though SSL encoders are trained on millions of images, they still memorize individual data points. While effort has been put into characterizing the memorized data and linking encoder memorization to downstream utility, little is known about where the memorization happens inside SSL encoders. To close this gap, we propose two metrics for localizing memorization in SSL encoders on a per-layer (LayerMem) and per-unit basis (UnitMem). Our localization methods are independent of the downstream task, do not require any label information, and can be performed in a forward pass. By localizing memorization in various encoder architectures (convolutional and transformer-based) trained on diverse datasets with contrastive and non-contrastive SSL frameworks, we find that (1) while SSL memorization increases with layer depth, highly memorizing units are distributed across the entire encoder, (2) a significant fraction of units in SSL encoders experiences surprisingly high memorization of individual data points, which is in contrast to models trained under supervision, (3) atypical (or outlier) data points cause much higher layer and unit memorization than standard data points, and (4) in vision transformers, most memorization happens in the fully-connected layers. Finally, we show that localizing memorization in SSL has the potential to improve fine-tuning and to inform pruning strategies. | Localizing Memorization in SSL Vision Encoders | [
"Wenhao Wang",
"Adam Dziedzic",
"Michael Backes",
"Franziska Boenisch"
] | NeurIPS.cc/2024/Conference | 2409.19069 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=R3ruv1gF8R | @inproceedings{
li2024the,
title={The Reliability of {OKR}idge Method in Solving Sparse Ridge Regression Problems},
author={Xiyuan Li and Youjun Wang and Weiwei Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=R3ruv1gF8R}
} | Sparse ridge regression problems play a significant role across various domains. To solve sparse ridge regression, Liu et al. (2023) recently proposed an advanced algorithm, Scalable Optimal $K$-Sparse Ridge Regression (OKRidge), which is both faster and more accurate than existing approaches. However, the absence of theoretical analysis on the error of OKRidge impedes its large-scale application. In this paper, we reframe the estimation error of OKRidge as a Primary Optimization ($\textbf{PO}$) problem and employ the Convex Gaussian min-max theorem (CGMT) to simplify the $\textbf{PO}$ problem into an Auxiliary Optimization ($\textbf{AO}$) problem. Subsequently, we provide a theoretical error analysis for OKRidge based on the $\textbf{AO}$ problem. This error analysis improves the theoretical reliability of OKRidge. We also conduct experiments to verify our theorems, and the results are in excellent agreement with our theoretical findings. | The Reliability of OKRidge Method in Solving Sparse Ridge Regression Problems | [
"Xiyuan Li",
"Youjun Wang",
"Weiwei Liu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=R1Rrb2d5BH | @inproceedings{
lei2024ezhoi,
title={{EZ}-{HOI}: {VLM} Adaptation via Guided Prompt Learning for Zero-Shot {HOI} Detection},
author={Qinqian Lei and Bo Wang and Robby T. Tan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=R1Rrb2d5BH}
} | Detecting Human-Object Interactions (HOI) in zero-shot settings, where models must handle unseen classes, poses significant challenges. Existing methods that rely on aligning visual encoders with large Vision-Language Models (VLMs) to tap into the extensive knowledge of VLMs require large, computationally expensive models and encounter training difficulties. Adapting VLMs with prompt learning offers an alternative to direct alignment. However, fine-tuning on task-specific datasets often leads to overfitting to seen classes and suboptimal performance on unseen classes, due to the absence of unseen class labels. To address these challenges, we introduce a novel prompt learning-based framework for Efficient Zero-Shot HOI detection (EZ-HOI). First, we introduce Large Language Model (LLM) and VLM guidance for learnable prompts, integrating detailed HOI descriptions and visual semantics to adapt VLMs to HOI tasks. However, because training datasets contain seen-class labels alone, fine-tuning VLMs on such datasets tends to optimize learnable prompts for seen classes instead of unseen ones. Therefore, we design prompt learning for unseen classes using information from related seen classes, with LLMs utilized to highlight the differences between unseen and related seen classes. Quantitative evaluations on benchmark datasets demonstrate that our EZ-HOI achieves state-of-the-art performance across various zero-shot settings with only 10.35\% to 33.95\% of the trainable parameters compared to existing methods. Code is available at https://github.com/ChelsieLei/EZ-HOI. | EZ-HOI: VLM Adaptation via Guided Prompt Learning for Zero-Shot HOI Detection | [
"Qinqian Lei",
"Bo Wang",
"Robby T. Tan"
] | NeurIPS.cc/2024/Conference | 2410.23904 | [
"https://github.com/chelsielei/ez-hoi"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=R0bnWrpIeN | @inproceedings{
kopf2024cosy,
title={CoSy: Evaluating Textual Explanations of Neurons},
author={Laura Kopf and Philine Lou Bommer and Anna Hedstr{\"o}m and Sebastian Lapuschkin and Marina MC H{\"o}hne and Kirill Bykov},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=R0bnWrpIeN}
} | A crucial aspect of understanding the complex nature of Deep Neural Networks (DNNs) is the ability to explain learned concepts within their latent representations. While methods exist to connect neurons to human-understandable textual descriptions, evaluating the quality of these explanations is challenging due to the lack of a unified quantitative approach. We introduce CoSy (Concept Synthesis), a novel, architecture-agnostic framework for evaluating textual explanations of latent neurons. Given textual explanations, our proposed framework uses a generative model conditioned on textual input to create data points representing the explanations. By comparing the neuron's response to these generated data points and control data points, we can estimate the quality of the explanation. We validate our framework through sanity checks and benchmark various neuron description methods for Computer Vision tasks, revealing significant differences in quality. | CoSy: Evaluating Textual Explanations of Neurons | [
"Laura Kopf",
"Philine Lou Bommer",
"Anna Hedström",
"Sebastian Lapuschkin",
"Marina MC Höhne",
"Kirill Bykov"
] | NeurIPS.cc/2024/Conference | 2405.20331 | [
"https://github.com/lkopf/cosy"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=QzvWyggrYB | @inproceedings{
kapoor2024large,
title={Large Language Models Must Be Taught to Know What They Don{\textquoteright}t Know},
author={Sanyam Kapoor and Nate Gruver and Manley Roberts and Katherine M. Collins and Arka Pal and Umang Bhatt and Adrian Weller and Samuel Dooley and Micah Goldblum and Andrew Gordon Wilson},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QzvWyggrYB}
} | When using large language models (LLMs) in high-stakes applications, we need to know when we can trust their predictions. Some works argue that prompting high-performance LLMs is sufficient to produce calibrated uncertainties, while others introduce sampling methods that can be prohibitively expensive. In this work, we first argue that prompting on its own is insufficient to achieve good calibration and then show that fine-tuning on a small dataset of correct and incorrect answers can create an uncertainty estimate with good generalization and small computational overhead. We show that a thousand graded examples are sufficient to outperform baseline methods and that training through the features of a model is necessary for good performance and tractable for large open-source models when using LoRA. We also investigate the mechanisms that enable reliable LLM uncertainty estimation, finding that many models can be used as general-purpose uncertainty estimators, applicable not just to their own uncertainties but also to the uncertainty of other models. Lastly, through a user study, we show that uncertainty estimates inform human use of LLMs in human-AI collaborative settings. | Large Language Models Must Be Taught to Know What They Don’t Know | [
"Sanyam Kapoor",
"Nate Gruver",
"Manley Roberts",
"Katherine M. Collins",
"Arka Pal",
"Umang Bhatt",
"Adrian Weller",
"Samuel Dooley",
"Micah Goldblum",
"Andrew Gordon Wilson"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Qz7BfmWizk | @inproceedings{
zuo2024the,
title={The motion planning neural circuit in goal-directed navigation as Lie group operator search},
author={Junfeng Zuo and Ying Nian Wu and Si Wu and Wenhao Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Qz7BfmWizk}
} | Information processing in the brain and in embodied agents forms a sensory-action loop for interacting with the world. An important step in the loop is motion planning, which selects motor actions based on the current world state and the task at hand. In goal-directed navigation, the brain chooses and generates motor actions to bring the current state into the goal state. Both the neural circuit mechanism of motor action selection and its underlying theory remain unclear. The present study formulates motion planning as a Lie group operator search problem, and uses the 1D rotation group as an example to provide insight into general operator search in neural circuits. We find that the abstract group operator search can be implemented by a two-layer feedforward circuit utilizing the circuit motifs of connection phase shift, nonlinear activation, and pooling, similar to Drosophila's goal-directed navigation neural circuits. Moreover, the computational complexity of the feedforward circuit can be even lower than that of common signal processing algorithms under certain conditions. We also provide geometric interpretations of the circuit computation in the group representation space. The feedforward motion planning circuit is further combined with sensory and motor circuit modules into a full circuit of the sensory-action loop implementing goal-directed navigation. Our work links, for the first time, abstract operator search with biological neural circuits. | The motion planning neural circuit in goal-directed navigation as Lie group operator search | [
"Junfeng Zuo",
"Ying Nian Wu",
"Si Wu",
"Wenhao Zhang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=QyxE3W9Yni | @inproceedings{
wu2024faster,
title={Faster Differentially Private Top-$k$ Selection: A Joint Exponential Mechanism with Pruning},
author={Hao WU and Hanwen Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QyxE3W9Yni}
} | We study the differentially private top-$k$ selection problem, aiming to identify a sequence of $k$ items with approximately the highest scores from $d$ items. Recent work by Gillenwater et al. (2022) employs a direct sampling approach from the vast collection of $O(d^k)$ possible length-$k$ sequences, showing superior empirical accuracy compared to previous pure or approximate differentially private methods. Their algorithm has a time and space complexity of $\tilde{O}(dk)$.
In this paper, we present an improved algorithm that achieves time and space complexity of $\tilde{O}(d + k^2)$.
Experimental results show that our algorithm runs orders of magnitude faster than their approach, while achieving similar empirical accuracy (the classical peeling baseline is sketched after this record). | Faster Differentially Private Top-k Selection: A Joint Exponential Mechanism with Pruning | [
"Hao WU",
"Hanwen Zhang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
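For contrast with the joint mechanism above, a minimal sketch (mine, not the paper's algorithm) of the classical peeling baseline: adding Gumbel noise to every score and keeping the k largest is equivalent to running the exponential mechanism k times, each with budget eps/k, so the whole call is eps-differentially private by basic composition.

```python
import numpy as np

def dp_top_k_peeling(scores, k, eps, sensitivity=1.0, rng=None):
    # Gumbel noise at scale 2 * sensitivity / eps' realises the exponential
    # mechanism with budget eps'; using eps' = eps / k for each of the k
    # selections gives the stated overall guarantee.
    if rng is None:
        rng = np.random.default_rng()
    scale = 2.0 * k * sensitivity / eps
    noisy = scores + rng.gumbel(scale=scale, size=len(scores))
    return np.argsort(-noisy)[:k]

print(dp_top_k_peeling(np.arange(100, dtype=float), k=5, eps=1.0))
```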
null | https://openreview.net/forum?id=QyR1dNDxRP | @inproceedings{
harel2024provable,
title={Provable Tempered Overfitting of Minimal Nets and Typical Nets},
author={Itamar Harel and William M. Hoza and Gal Vardi and Itay Evron and Nathan Srebro and Daniel Soudry},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QyR1dNDxRP}
} | We study the overfitting behavior of fully connected deep Neural Networks (NNs) with binary weights fitted to perfectly classify a noisy training set. We consider interpolation using both the smallest NN (having the minimal number of weights) and a random interpolating NN. For both learning rules, we prove overfitting is tempered. Our analysis rests on a new bound on the size of a threshold circuit consistent with a partial function. To the best of our knowledge, ours are the first theoretical results on benign or tempered overfitting that: (1) apply to deep NNs, and (2) do not require a very high or very low input dimension. | Provable Tempered Overfitting of Minimal Nets and Typical Nets | [
"Itamar Harel",
"William M. Hoza",
"Gal Vardi",
"Itay Evron",
"Nathan Srebro",
"Daniel Soudry"
] | NeurIPS.cc/2024/Conference | 2410.19092 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=QvqLdeSLWA | @inproceedings{
meng2024suppress,
title={Suppress Content Shift: Better Diffusion Features via Off-the-Shelf Generation Techniques},
author={Benyuan Meng and Qianqian Xu and Zitai Wang and Zhiyong Yang and Xiaochun Cao and Qingming Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QvqLdeSLWA}
} | Diffusion models are powerful generative models, and their capabilities can also be applied to discrimination. The inner activations of a pre-trained diffusion model can serve as features for discriminative tasks, namely, diffusion features. We discover that diffusion features have been hindered by a hidden yet universal phenomenon that we call content shift. To be specific, there are content differences between the features and the input image, such as the exact shape of a certain object. We trace the cause of content shift to an inherent characteristic of diffusion models, which suggests that this phenomenon exists broadly across diffusion features. Further empirical study also indicates that its negative impact is not negligible even when content shift is not visually perceivable. Hence, we propose to suppress content shift to enhance the overall quality of diffusion features. Specifically, content shift is related to the information drift during the process of recovering an image from the noisy input, pointing out the possibility of turning off-the-shelf generation techniques into tools for content shift suppression. We further propose a practical guideline named GATE to efficiently evaluate the potential benefit of a technique, and provide an implementation of our methodology. Despite its simplicity, the proposed approach has achieved superior results on various tasks and datasets, validating its potential as a generic booster for diffusion features. Our code is available at https://github.com/Darkbblue/diffusion-content-shift. | Suppress Content Shift: Better Diffusion Features via Off-the-Shelf Generation Techniques | [
"Benyuan Meng",
"Qianqian Xu",
"Zitai Wang",
"Zhiyong Yang",
"Xiaochun Cao",
"Qingming Huang"
] | NeurIPS.cc/2024/Conference | 2410.06719 | [
"https://github.com/Darkbblue/diffusion-content-shift"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
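The "inner activations as features" idea above is commonly implemented with forward hooks. The sketch below is a generic illustration written by us, not the paper's GATE pipeline: `unet`, its call signature, and the layer name `"mid_block"` are all assumptions about the backbone.

```python
# Hypothetical sketch: cache one intermediate U-Net activation as a feature.
import torch

def get_diffusion_feature(unet, x_noisy, t, layer_name="mid_block"):
    cache = {}
    module = dict(unet.named_modules())[layer_name]   # assumed module name
    handle = module.register_forward_hook(
        lambda m, inp, out: cache.setdefault("feat", out)
    )
    with torch.no_grad():
        unet(x_noisy, t)        # one denoising pass; signature is an assumption
    handle.remove()
    return cache["feat"]         # used downstream as a discriminative feature
```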
null | https://openreview.net/forum?id=Qtf6Xz4VvE | @inproceedings{
bachtis2024cascade,
title={Cascade of phase transitions in the training of energy-based models},
author={Dimitrios Bachtis and Giulio Biroli and Aur{\'e}lien Decelle and Beatriz Seoane},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Qtf6Xz4VvE}
} | In this paper, we investigate the feature encoding process in a prototypical energy-based generative model, the Restricted Boltzmann Machine (RBM). We start with an analytical investigation using simplified architectures and data structures, and end with numerical analysis of real trainings on real datasets. Our study tracks the evolution of the model’s weight matrix through its singular value decomposition, revealing a series of thermodynamic phase transitions that shape the principal learning modes of the empirical probability distribution. We first describe this process analytically in several controlled setups that allow us to fully monitor the training dynamics until convergence. We then validate these findings by training the Bernoulli-Bernoulli RBM on real data sets. By studying the phase behavior over data sets of increasing dimension, we show that these phase transitions are genuine in the thermodynamic sense. Moreover, we propose a mean-field finite-size scaling hypothesis, confirming that the initial phase transition, reminiscent of the paramagnetic-to-ferromagnetic phase transition in mean-field ferromagnetism models, is governed by mean-field critical exponents. | Cascade of phase transitions in the training of energy-based models | [
"Dimitrios Bachtis",
"Giulio Biroli",
"Aurélien Decelle",
"Beatriz Seoane"
] | NeurIPS.cc/2024/Conference | 2405.14689 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
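The singular-value tracking described above is easy to reproduce on a toy scale. The sketch below is ours (random stand-in data, no biases): a tiny Bernoulli-Bernoulli RBM trained with CD-1 while printing the leading singular values of W, whose successive take-offs are the learning modes the abstract analyzes.

```python
# Illustrative sketch: CD-1 training of a tiny RBM with SVD monitoring.
import numpy as np

rng = np.random.default_rng(0)
nv, nh, lr = 20, 10, 0.05
W = 0.01 * rng.standard_normal((nv, nh))
data = (rng.random((500, nv)) < 0.5).astype(float)   # stand-in dataset

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(201):
    v0 = data[rng.integers(0, len(data), size=64)]
    h0 = sigmoid(v0 @ W)
    h_sample = (h0 > rng.random(h0.shape)).astype(float)
    v1 = sigmoid(h_sample @ W.T)                      # one Gibbs half-step back
    h1 = sigmoid(v1 @ W)
    W += lr * (v0.T @ h0 - v1.T @ h1) / 64            # CD-1 update (no biases)
    if step % 50 == 0:
        print(step, np.linalg.svd(W, compute_uv=False)[:3])
```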
null | https://openreview.net/forum?id=QtYg4g3Deu | @inproceedings{
wu2024graphmetro,
title={Graph{METRO}: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts},
author={Shirley Wu and Kaidi Cao and Bruno Ribeiro and James Zou and Jure Leskovec},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QtYg4g3Deu}
} | Graph data are inherently complex and heterogeneous, leading to a high natural diversity of distributional shifts. However, it remains unclear how to build machine learning architectures that generalize to the complex distributional shifts naturally occurring in the real world. Here, we develop GraphMETRO, a Graph Neural Network architecture that models natural diversity and captures complex distributional shifts. GraphMETRO employs a Mixture-of-Experts (MoE) architecture with a gating model and multiple expert models, where each expert model targets a specific distributional shift to produce a referential representation w.r.t. a reference model, and the gating model identifies shift components. Additionally, we design a novel objective that aligns the representations from different expert models to ensure reliable optimization. GraphMETRO achieves state-of-the-art results on four datasets from the GOOD benchmark, which comprises complex and natural real-world distribution shifts, improving by 67% and 4.2% on the WebKB and Twitch datasets, respectively. Code and data are available at https://github.com/Wuyxin/GraphMETRO. | GraphMETRO: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts | [
"Shirley Wu",
"Kaidi Cao",
"Bruno Ribeiro",
"James Zou",
"Jure Leskovec"
] | NeurIPS.cc/2024/Conference | 2312.04693 | [
"https://github.com/wuyxin/graphmetro"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
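The gating-plus-aligned-experts structure described above can be sketched in a few lines. This is our simplification (linear encoders, a squared-error alignment term), not the paper's GNN architecture or exact objective.

```python
# Illustrative sketch: gated mixture of experts aligned to a reference encoder.
import torch
import torch.nn as nn

class MixtureOfAlignedExperts(nn.Module):
    def __init__(self, d_in, d_out, n_experts):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(d_in, d_out) for _ in range(n_experts))
        self.reference = nn.Linear(d_in, d_out)
        self.gate = nn.Linear(d_in, n_experts)

    def forward(self, x):
        w = torch.softmax(self.gate(x), dim=-1)              # shift-component scores
        outs = torch.stack([e(x) for e in self.experts], 1)  # (B, E, d_out)
        z = (w.unsqueeze(-1) * outs).sum(1)                  # gated mixture
        align = (outs - self.reference(x).unsqueeze(1)).pow(2).mean()
        return z, align   # representation + alignment loss term
```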
null | https://openreview.net/forum?id=QrE9QPq4ya | @inproceedings{
ni2024phyrecon,
title={PhyRecon: Physically Plausible Neural Scene Reconstruction},
author={Junfeng Ni and Yixin Chen and Bohan Jing and Nan Jiang and Bin Wang and Bo Dai and Puhao Li and Yixin Zhu and Song-Chun Zhu and Siyuan Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QrE9QPq4ya}
} | We address the issue of physical implausibility in multi-view neural reconstruction. While implicit representations have gained popularity in multi-view 3D reconstruction, previous work struggles to yield physically plausible results, limiting their utility in domains requiring rigorous physical accuracy. This lack of plausibility stems from the absence of physics modeling in existing methods and their inability to recover intricate geometrical structures. In this paper, we introduce PHYRECON, the first approach to leverage both differentiable rendering and differentiable physics simulation to learn implicit surface representations. PHYRECON features a novel differentiable particle-based physical simulator built on neural implicit representations. Central to this design is an efficient transformation between SDF-based implicit representations and explicit surface points via our proposed Surface Points Marching Cubes (SP-MC), enabling differentiable learning with both rendering and physical losses. Additionally, PHYRECON models both rendering and physical uncertainty to identify and compensate for inconsistent and inaccurate monocular geometric priors. The physical uncertainty further facilitates physics-guided pixel sampling to enhance the learning of slender structures. By integrating these techniques, our model supports differentiable joint modeling of appearance, geometry, and physics. Extensive experiments demonstrate that PHYRECON significantly improves the reconstruction quality. Our results also exhibit superior physical stability in physical simulators, with at least a 40% improvement across all datasets, paving the way for future physics-based applications. | PhyRecon: Physically Plausible Neural Scene Reconstruction | [
"Junfeng Ni",
"Yixin Chen",
"Bohan Jing",
"Nan Jiang",
"Bin Wang",
"Bo Dai",
"Puhao Li",
"Yixin Zhu",
"Song-Chun Zhu",
"Siyuan Huang"
] | NeurIPS.cc/2024/Conference | 2404.16666 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=QpKWFLtZKi | @inproceedings{
wang2024rethinking,
title={Rethinking Exploration in Reinforcement Learning with Effective Metric-Based Exploration Bonus},
author={Yiming Wang and Kaiyan Zhao and Furui Liu and Leong Hou U},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QpKWFLtZKi}
} | Enhancing exploration in reinforcement learning (RL) through the incorporation of intrinsic rewards, specifically by leveraging *state discrepancy* measures within various metric spaces as exploration bonuses, has emerged as a prevalent strategy to encourage agents to visit novel states. The critical factor lies in how to quantify the difference between adjacent states as *novelty* for promoting effective exploration.
Nonetheless, existing methods that evaluate state discrepancy in the latent space under the $L_1$ or $L_2$ norm often depend on count-based episodic terms as scaling factors for exploration bonuses, significantly limiting their scalability. Additionally, methods that utilize the bisimulation metric for evaluating state discrepancies face a theory-practice gap due to improper approximations in metric learning, particularly struggling with *hard exploration* tasks. To overcome these challenges, we introduce the **E**ffective **M**etric-based **E**xploration-bonus (EME). EME critically examines and addresses the inherent limitations and approximation inaccuracies of current metric-based state discrepancy methods for exploration, proposing a robust metric for state discrepancy evaluation backed by comprehensive theoretical analysis. Furthermore, we propose a diversity-enhanced scaling factor, integrated into the exploration bonus, that is dynamically adjusted by the variance of predictions from an ensemble of reward models, thereby enhancing exploration effectiveness in particularly challenging scenarios.
Extensive experiments are conducted on hard exploration tasks within Atari games, Minigrid, Robosuite, and Habitat, which illustrate our method's scalability to various scenarios. The project website can be found at https://sites.google.com/view/effective-metric-exploration. | Rethinking Exploration in Reinforcement Learning with Effective Metric-Based Exploration Bonus | [
"Yiming Wang",
"Kaiyan Zhao",
"Furui Liu",
"Leong Hou U"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
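The bonus structure described above (a state-discrepancy term scaled by ensemble disagreement) can be illustrated as below. This is our sketch of the general shape only: the L2 norm and the variance scaling stand in for EME's own metric and scaling factor, and each reward model is assumed to map features to a (B, 1) prediction.

```python
# Illustrative sketch: metric-based bonus scaled by reward-ensemble variance.
import torch

def exploration_bonus(phi_s, phi_s_next, reward_ensemble):
    novelty = torch.norm(phi_s_next - phi_s, dim=-1)          # state discrepancy
    preds = torch.stack([r(phi_s_next) for r in reward_ensemble], 0)  # (E, B, 1)
    scale = preds.var(dim=0).squeeze(-1)                      # diversity factor
    return scale * novelty                                    # intrinsic reward
```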
null | https://openreview.net/forum?id=QoWf3lo6m7 | @inproceedings{
zhu2024towards,
title={Towards a Theoretical Understanding of the 'Reversal Curse' via Training Dynamics},
author={Hanlin Zhu and Baihe Huang and Shaolun Zhang and Michael Jordan and Jiantao Jiao and Yuandong Tian and Stuart Russell},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QoWf3lo6m7}
} | Auto-regressive large language models (LLMs) show impressive capabilities in solving many complex reasoning tasks while struggling with some simple logical reasoning tasks such as inverse search: when trained on ''$A \to B$'' (e.g., *Tom is the parent of John*), an LLM fails to directly conclude ''$B \gets A$'' (e.g., *John is the child of Tom*) during inference, even though the two sentences are semantically identical; this is known as the ''reversal curse''. In this paper, we theoretically analyze the reversal curse via the training dynamics of (stochastic) gradient descent for two auto-regressive models: (1) a bilinear model that can be viewed as a simplification of a one-layer transformer; (2) one-layer transformers under certain assumptions. Our analysis reveals that for both models, the reversal curse is a consequence of the *asymmetry* of the (effective) model weights, i.e., an increase of the weights from a token $A$ to a token $B$ during training does not necessarily cause an increase of the weights from $B$ to $A$; this asymmetry arises from the training dynamics under certain choices of loss function and from the optimization space of model parameters. Moreover, our analysis can be naturally applied to other logical reasoning tasks such as chain-of-thought (COT), providing a new perspective that differs from previous work focused on expressivity. Finally, we conduct experiments to validate our theory on multi-layer transformers under different settings. Our code is available at [https://github.com/marlo-z/reversal_curse_analysis/](https://github.com/marlo-z/reversal_curse_analysis/). | Towards a Theoretical Understanding of the 'Reversal Curse' via Training Dynamics | [
"Hanlin Zhu",
"Baihe Huang",
"Shaolun Zhang",
"Michael Jordan",
"Jiantao Jiao",
"Yuandong Tian",
"Stuart Russell"
] | NeurIPS.cc/2024/Conference | 2405.04669 | [
"https://github.com/marlo-z/reversal_curse_analysis"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
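The weight-asymmetry mechanism above can be made concrete with a toy demo (ours, not the paper's code): fitting next-token pairs $A \to B$ with a bilinear score table raises the forward weight but leaves the reverse weight exactly untouched.

```python
# Toy demo: gradient descent on "A -> B" never trains the "B -> A" weight.
import torch

V = 5                        # vocabulary size; token 0 = A, token 1 = B
W = torch.zeros(V, V, requires_grad=True)
opt = torch.optim.SGD([W], lr=0.5)
target = torch.tensor([1])   # next token after A is B
for _ in range(200):
    logits = W[0]            # scores for the token following A
    loss = torch.nn.functional.cross_entropy(logits.unsqueeze(0), target)
    opt.zero_grad()
    loss.backward()          # gradient touches row 0 of W only
    opt.step()
print(W[0, 1].item(), W[1, 0].item())   # large positive vs. exactly zero
```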
null | https://openreview.net/forum?id=Qk3IBHyv6z | @inproceedings{
tang2024multiagent,
title={Multi-Agent Imitation Learning: Value is Easy, Regret is Hard},
author={Jingwu Tang and Gokul Swamy and Fei Fang and Steven Wu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Qk3IBHyv6z}
} | We study a multi-agent imitation learning (MAIL) problem where we take the perspective of a learner attempting to *coordinate* a group of agents based on demonstrations of an expert doing so. Most prior work in MAIL essentially reduces the problem to matching the behavior of the expert *within* the support of the demonstrations. While doing so is sufficient to drive the *value gap* between the learner and the expert to zero under the assumption that agents are non-strategic, it does not guarantee robustness to deviations by strategic agents. Intuitively, this is because strategic deviations can depend on a counterfactual quantity: the coordinator's recommendations outside of the state distribution those recommendations induce. In response, we initiate the study of an alternative objective for MAIL in Markov Games we term the *regret gap* that explicitly accounts for potential deviations by agents in the group. We then perform an in-depth exploration of the relationship between the value and regret gaps. First, we show that while the value gap can be efficiently minimized via a direct extension of single-agent IL algorithms, even *value equivalence* can lead to an arbitrarily large regret gap. This implies that achieving regret equivalence is harder than achieving value equivalence in MAIL. We then provide a pair of efficient reductions to no-regret online convex optimization that are capable of minimizing the regret gap *(a)* under a coverage assumption on the expert (MALICE) or *(b)* with access to a queryable expert (BLADES). | Multi-Agent Imitation Learning: Value is Easy, Regret is Hard | [
"Jingwu Tang",
"Gokul Swamy",
"Fei Fang",
"Steven Wu"
] | NeurIPS.cc/2024/Conference | 2406.04219 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=QiCJomIW3l | @inproceedings{
li2024toward,
title={Toward Dynamic Non-Line-of-Sight Imaging with Mamba Enforced Temporal Consistency},
author={Yue Li and Yi Sun and Shida Sun and Juntian Ye and Yueyi Zhang and Feihu Xu and Zhiwei Xiong},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QiCJomIW3l}
} | Dynamic reconstruction in confocal non-line-of-sight imaging encounters great challenges since dense raster scanning limits the practical frame rate. A few pioneering works reconstruct high-resolution volumes from under-scanned transient measurements but overlook temporal consistency among transient frames. To fully exploit multi-frame information, we propose the first spatial-temporal Mamba (ST-Mamba) based method tailored for dynamic reconstruction of transient videos. Our method capitalizes on neighbouring transient frames to aggregate the target 3D hidden volume. Specifically, the interleaved features extracted from the input transient frames are fed to the proposed ST-Mamba blocks, which leverage the time-resolving causality in transient measurement. Cross ST-Mamba blocks are then devised to integrate the adjacent transient features. The target high-resolution transient frame is subsequently recovered by the transient spreading module. After transient fusion and recovery, a physics-based network is employed to reconstruct the hidden volume. To tackle the substantial noise inherent in transient videos, we propose a wave-based loss function to impose constraints within the phasor field. Besides, we introduce a new dataset comprising synthetic videos for training and real-world videos for evaluation. Extensive experiments showcase the superior performance of our method on both synthetic data and real-world data captured by different imaging setups. The code and data are available at https://github.com/Depth2World/Dynamic_NLOS. | Toward Dynamic Non-Line-of-Sight Imaging with Mamba Enforced Temporal Consistency | [
"Yue Li",
"Yi Sun",
"Shida Sun",
"Juntian Ye",
"Yueyi Zhang",
"Feihu Xu",
"Zhiwei Xiong"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=QhUXU2ilIG | @inproceedings{
liu2024physicsconstrained,
title={Physics-Constrained Comprehensive Optical Neural Networks},
author={Yanbing Liu and Jianwei Qin and Yan Liu and Xi Yue and Xun Liu and Guoqing Wang and Tianyu Li and Fangwei Ye and Wei Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QhUXU2ilIG}
} | With the advantages of low latency, low power consumption, and high parallelism, optical neural networks (ONNs) offer a promising solution for time-sensitive and resource-limited artificial intelligence applications. However, the performance of an ONN model is often diminished by the gap between the ideal simulated system and the actual physical system. To bridge the gap, this work conducts extensive experiments to investigate systematic errors in the optical physical system within the context of image classification tasks. Through our investigation, two quantifiable errors, light source instability and exposure time mismatches, are found to significantly impact the prediction performance of ONNs. To address these systematic errors, a physics-constrained ONN learning framework is constructed, including a well-designed loss function to mitigate the effect of light fluctuations, a CCD adjustment strategy to alleviate the effects of exposure time mismatches, and a 'physics-prior-based' error compensation network to manage other systematic errors, ensuring consistent light intensity across experimental results and simulations. In our experiments, the proposed method achieved a test classification accuracy of 96.5% on the MNIST dataset, a substantial improvement over the 61.6% achieved with the original ONN. For the more challenging QuickDraw16 and Fashion MNIST datasets, experimental accuracy improved from 63.0% to 85.7% and from 56.2% to 77.5%, respectively. Moreover, the comparison results further demonstrate the effectiveness of the proposed physics-constrained ONN learning framework over state-of-the-art ONN approaches. This lays the groundwork for more robust and precise optical computing applications. | Physics-Constrained Comprehensive Optical Neural Networks | [
"Yanbing Liu",
"Jianwei Qin",
"Yan Liu",
"Xi Yue",
"Xun Liu",
"Guoqing Wang",
"Tianyu Li",
"Fangwei Ye",
"Wei Li"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
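One plausible reading of the light-fluctuation-robust loss mentioned above is a comparison that is invariant to a global intensity scale. The sketch below is our assumption about that idea, not the paper's actual loss function.

```python
# Hypothetical sketch: intensity-normalized loss, insensitive to global
# light-source fluctuations between simulation and measurement.
import torch

def intensity_invariant_loss(sim, meas, eps=1e-8):
    sim = sim / (sim.mean(dim=(-2, -1), keepdim=True) + eps)
    meas = meas / (meas.mean(dim=(-2, -1), keepdim=True) + eps)
    return torch.nn.functional.mse_loss(sim, meas)
```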
null | https://openreview.net/forum?id=QhRemVrZbG | @inproceedings{
peng2024live,
title={{LIVE}: Learnable In-Context Vector for Visual Question Answering},
author={Yingzhe Peng and chenduo hao and Xinting Hu and Jiawei Peng and Xin Geng and Xu Yang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QhRemVrZbG}
} | As language models continue to scale, Large Language Models (LLMs) have exhibited emerging capabilities in In-Context Learning (ICL), enabling them to solve language tasks by prefixing a few in-context demonstrations (ICDs) as context. Inspired by these advancements, researchers have extended these techniques to develop Large Multimodal Models (LMMs) with ICL capabilities. However, applying ICL usually faces two major challenges: 1) using more ICDs will largely increase the inference time and 2) the performance is sensitive to the selection of ICDs. These challenges are further exacerbated in LMMs due to the integration of multiple data types and the combinational complexity of multimodal ICDs. Recently, to address these challenges, some NLP studies introduce non-learnable In-Context Vectors (ICVs) which extract useful task information from ICDs into a single vector and then insert it into the LLM to help solve the corresponding task. However, although useful in simple NLP tasks, these non-learnable methods fail to handle complex multimodal tasks like Visual Question Answering (VQA). In this study, we propose \underline{\textbf{L}}earnable \underline{\textbf{I}}n-Context \underline{\textbf{Ve}}ctor (LIVE) to distill essential task information from demonstrations, improving ICL performance in LMMs. Experiments show that LIVE can significantly reduce computational costs while enhancing accuracy in VQA tasks compared to traditional ICL and other non-learnable ICV methods. | LIVE: Learnable In-Context Vector for Visual Question Answering | [
"Yingzhe Peng",
"chenduo hao",
"Xinting Hu",
"Jiawei Peng",
"Xin Geng",
"Xu Yang"
] | NeurIPS.cc/2024/Conference | 2406.13185 | [
"https://github.com/forjadeforest/live-learnable-in-context-vector"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
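The in-context-vector mechanism described above (a vector inserted into the model to replace explicit demonstrations) can be sketched generically with hooks. This is our illustration, with assumed tensor-valued layer outputs and an assumed additive insertion with strength `alpha`; the paper's LIVE parameterization and training objective are not reproduced here.

```python
# Hypothetical sketch: a trainable per-layer vector added to the hidden
# states of a frozen model via forward hooks.
import torch

def attach_icv(layers, d_model, alpha=0.1):
    icvs = torch.nn.ParameterList(
        torch.nn.Parameter(torch.zeros(d_model)) for _ in layers
    )
    for layer, v in zip(layers, icvs):
        # returning a value from a forward hook replaces the layer output
        layer.register_forward_hook(lambda m, inp, out, v=v: out + alpha * v)
    return icvs   # optimized on a few demonstrations, then reused at inference
```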
null | https://openreview.net/forum?id=QgaGs7peYe | @inproceedings{
chung2024predicting,
title={Predicting Future Actions of Reinforcement Learning Agents},
author={Stephen Chung and Scott Niekum and David Krueger},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QgaGs7peYe}
} | As reinforcement learning agents become increasingly deployed in real-world scenarios, predicting future agent actions and events during deployment is important for facilitating better human-agent interaction and preventing catastrophic outcomes. This paper experimentally evaluates and compares the effectiveness of future action and event prediction for three types of RL agents: explicitly planning, implicitly planning, and non-planning. We employ two approaches: the inner state approach, which involves predicting based on the inner computations of the agents (e.g., plans or neuron activations), and a simulation-based approach, which involves unrolling the agent in a learned world model. Our results show that the plans of explicitly planning agents are significantly more informative for prediction than the neuron activations of the other types. Furthermore, using internal plans proves more robust to model quality compared to simulation-based approaches when predicting actions, while the results for event prediction are more mixed. These findings highlight the benefits of leveraging inner states and simulations to predict future agent actions and events, thereby improving interaction and safety in real-world deployments. | Predicting Future Actions of Reinforcement Learning Agents | [
"Stephen Chung",
"Scott Niekum",
"David Krueger"
] | NeurIPS.cc/2024/Conference | 2410.22459 | [
"https://github.com/stephen-chung-mh/predict_action"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
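The simulation-based approach described above reduces to unrolling the fixed policy inside a learned dynamics model. A minimal sketch (ours; `policy` and `world_model` are assumed callables, not the paper's models):

```python
# Minimal sketch: predict an agent's next n actions by rollout in a world model.
def predict_future_actions(policy, world_model, obs, n_steps):
    actions = []
    for _ in range(n_steps):
        action = policy(obs)              # agent's action at the predicted state
        actions.append(action)
        obs = world_model(obs, action)    # learned dynamics, not the real env
    return actions
```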
null | https://openreview.net/forum?id=QgMC8ftbNd | @inproceedings{
altabaa2024on,
title={On the Role of Information Structure in Reinforcement Learning for Partially-Observable Sequential Teams and Games},
author={Awni Altabaa and Zhuoran Yang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QgMC8ftbNd}
} | In sequential decision-making problems, the *information structure* describes the causal dependencies between system variables, encompassing the dynamics of the environment and the agents' actions. Classical models of reinforcement learning (e.g., MDPs, POMDPs) assume a restricted and highly regular information structure, while more general models like predictive state representations do not explicitly model the information structure. By contrast, real-world sequential decision-making problems typically involve a complex and time-varying interdependence of system variables, requiring a rich and flexible representation of information structure. In this paper, we formalize a novel reinforcement learning model which explicitly represents the information structure.
We then use this model to carry out an information-structural analysis of the statistical complexity of general sequential decision-making problems, obtaining a characterization via a graph-theoretic quantity of the DAG representation of the information structure. We prove an upper bound on the sample complexity of learning a general sequential decision-making problem in terms of its information structure by exhibiting an algorithm achieving the upper bound. This recovers known tractability results and gives a novel perspective on reinforcement learning in general sequential decision-making problems, providing a systematic way of identifying new tractable classes of problems. | On the Role of Information Structure in Reinforcement Learning for Partially-Observable Sequential Teams and Games | [
"Awni Altabaa",
"Zhuoran Yang"
] | NeurIPS.cc/2024/Conference | 2403.00993 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
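The DAG representation of an information structure mentioned above can be written down directly; the toy below (ours) encodes a two-step partially observed loop with networkx. The paper's graph-theoretic complexity measure is not reproduced here.

```python
# Illustrative sketch: an information structure as a DAG over system variables.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("s0", "o0"), ("o0", "a0"),      # observation and action at t = 0
    ("s0", "s1"), ("a0", "s1"),      # dynamics depend on state and action
    ("s1", "o1"), ("o1", "a1"),      # observation and action at t = 1
])
assert nx.is_directed_acyclic_graph(G)
print(list(nx.topological_sort(G)))
```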
null | https://openreview.net/forum?id=QeWibaTmnn | @inproceedings{
wei2024grasp,
title={Grasp as You Say: Language-guided Dexterous Grasp Generation},
author={Yi-Lin Wei and Jian-Jian Jiang and Chengyi Xing and Xiantuo Tan and Xiao-Ming Wu and Hao Li and Mark Cutkosky and Wei-Shi Zheng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QeWibaTmnn}
} | This paper explores a novel task, "Dexterous Grasp as You Say" (DexGYS), enabling robots to perform dexterous grasping based on human commands expressed in natural language. However, the development of this field is hindered by the lack of datasets with natural human guidance; thus, we propose a language-guided dexterous grasp dataset, named DexGYSNet, offering high-quality dexterous grasp annotations along with flexible and fine-grained human language guidance. Our dataset construction is cost-efficient, thanks to a carefully designed hand-object interaction retargeting strategy and an LLM-assisted language guidance annotation system. Equipped with this dataset, we introduce the DexGYSGrasp framework for generating dexterous grasps based on human language instructions, capable of producing grasps that are intent-aligned, high-quality, and diverse. To achieve this capability, our framework decomposes the complex learning process into two manageable progressive objectives and introduces two components to realize them. The first component learns the grasp distribution, focusing on intention alignment and generation diversity, and the second component refines the grasp quality while maintaining intention consistency. Extensive experiments are conducted on DexGYSNet and in real-world environments for validation. | Grasp as You Say: Language-guided Dexterous Grasp Generation | [
"Yi-Lin Wei",
"Jian-Jian Jiang",
"Chengyi Xing",
"Xiantuo Tan",
"Xiao-Ming Wu",
"Hao Li",
"Mark Cutkosky",
"Wei-Shi Zheng"
] | NeurIPS.cc/2024/Conference | 2405.19291 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Qe2BKeCEBC | @inproceedings{
xu2024hybrid,
title={Hybrid Mamba for Few-Shot Segmentation},
author={Qianxiong Xu and Xuanyi Liu and Lanyun Zhu and Guosheng Lin and Cheng Long and Ziyue Li and Rui Zhao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Qe2BKeCEBC}
} | Many few-shot segmentation (FSS) methods use cross attention to fuse support foreground (FG) into query features, despite its quadratic complexity. Mamba, a recent advance, can also capture intra-sequence dependencies well, yet with only linear complexity. Hence, we aim to devise a cross (attention-like) Mamba to capture inter-sequence dependencies for FSS. A simple idea is to scan support features to selectively compress them into the hidden state, which is then used as the initial hidden state for sequentially scanning query features. Nevertheless, this suffers from (1) the support forgetting issue: query features are also gradually compressed when scanning over them, so the support information in the hidden state keeps shrinking, and many query pixels cannot fuse sufficient support features; and (2) the intra-class gap issue: query FG is essentially more similar to itself than to support FG, i.e., query pixels may prefer to fuse their own features from the hidden state rather than the support ones, yet the success of FSS relies on the effective use of support information. To tackle these issues, we design a hybrid Mamba network (HMNet), including (1) a support recapped Mamba that periodically recaps the support features when scanning the query, so the hidden state always contains rich support information, and (2) a query intercepted Mamba that forbids mutual interactions among query pixels and encourages them to fuse more support features from the hidden state. Consequently, the support information is better utilized, leading to better performance. Extensive experiments have been conducted on two public benchmarks, showing the superiority of HMNet. The code is available at https://github.com/Sam1224/HMNet. | Hybrid Mamba for Few-Shot Segmentation | [
"Qianxiong Xu",
"Xuanyi Liu",
"Lanyun Zhu",
"Guosheng Lin",
"Cheng Long",
"Ziyue Li",
"Rui Zhao"
] | NeurIPS.cc/2024/Conference | 2409.19613 | [
"https://github.com/sam1224/hmnet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
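The "support recapped" scan above can be caricatured as a linear recurrence whose hidden state is periodically re-injected with a compressed support summary so that support information never vanishes. The toy below is ours (a scalar-decay state update standing in for a real selective SSM), not the HMNet block.

```python
# Toy sketch: periodic support recap in a simplified sequential scan.
import torch

def recapped_scan(query_tokens, support_summary, decay=0.9, period=4):
    h = support_summary.clone()
    outs = []
    for t, x in enumerate(query_tokens):
        if t > 0 and t % period == 0:
            h = h + support_summary      # recap support into the hidden state
        h = decay * h + x                # simplified state-space update
        outs.append(h.clone())
    return torch.stack(outs)

tokens = [torch.randn(8) for _ in range(12)]
print(recapped_scan(tokens, torch.randn(8)).shape)   # torch.Size([12, 8])
```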
null | https://openreview.net/forum?id=QbsPz0SnyV | @inproceedings{
yang2024facilitating,
title={Facilitating Multimodal Classification via Dynamically Learning Modality Gap},
author={Yang Yang and Fengqiang Wan and Qing-Yuan Jiang and Yi Xu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QbsPz0SnyV}
} | Multimodal learning falls into the trap of an optimization dilemma due to the modality imbalance phenomenon, leading to unsatisfactory performance in real applications. A core reason for modality imbalance is that the models for each modality converge at different rates. Many attempts naturally focus on adjusting learning procedures adaptively. Essentially, models converge at different rates because the difficulty of fitting category labels is inconsistent across modalities during learning. From the perspective of label fitting, we find that appropriate positive intervention in label fitting can correct this difference in learning ability. By exploiting the ability of contrastive learning to intervene in category label fitting, we propose a novel multimodal learning approach that dynamically integrates unsupervised contrastive learning and supervised multimodal learning to address the modality imbalance problem. We find that a simple yet heuristic integration strategy can significantly alleviate the modality imbalance phenomenon. Moreover, we design a learning-based integration strategy to integrate the two losses dynamically, further improving the performance. Experiments on widely used datasets demonstrate the superiority of our method compared with state-of-the-art (SOTA) multimodal learning approaches. The code is available at https://github.com/njustkmg/NeurIPS24-LFM. | Facilitating Multimodal Classification via Dynamically Learning Modality Gap | [
"Yang Yang",
"Fengqiang Wan",
"Qing-Yuan Jiang",
"Yi Xu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
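The learned dynamic integration of the two losses described above can be sketched with a single learnable mixing coefficient. This is our minimal illustration of the idea, not the paper's integration network.

```python
# Illustrative sketch: learnable convex mixing of supervised and contrastive losses.
import torch

log_alpha = torch.nn.Parameter(torch.zeros(()))   # trained jointly with the model

def combined_loss(ce_loss, contrastive_loss):
    a = torch.sigmoid(log_alpha)                  # mixing weight in (0, 1)
    return a * ce_loss + (1.0 - a) * contrastive_loss
```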
null | https://openreview.net/forum?id=QbqLcwMXfF | @inproceedings{
zhang2024selective,
title={Selective Attention: Enhancing Transformer through Principled Context Control},
author={Xuechen Zhang and Xiangyu Chang and Mingchen Li and Amit Roy-Chowdhury and Jiasi Chen and Samet Oymak},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QbqLcwMXfF}
} | The attention mechanism within the transformer architecture enables the model to weigh and combine tokens based on their relevance to the query. While self-attention has enjoyed major success, it notably treats all queries $q$ in the same way by applying the mapping $V^\top\text{softmax}(Kq)$, where $V,K$ are the value and key embeddings respectively. In this work, we argue that this uniform treatment hinders the ability to control contextual sparsity and relevance. As a solution, we introduce the Selective Self-Attention (SSA) layer that augments the softmax nonlinearity with a principled temperature scaling strategy. By controlling temperature, SSA adapts the contextual sparsity of the attention map to the query embedding and its position in the context window. Through theory and experiments, we demonstrate that this alleviates attention dilution, aids the optimization process, and enhances the model's ability to control softmax spikiness of individual queries. We also incorporate temperature scaling for value embeddings and show that it boosts the model's ability to suppress irrelevant/noisy tokens. Notably, SSA is a lightweight method which introduces less than 0.5\% new parameters through a weight-sharing strategy and can be fine-tuned on existing LLMs. Extensive empirical evaluations demonstrate that SSA-equipped models achieve a noticeable and consistent accuracy improvement on language modeling benchmarks. | Selective Attention: Enhancing Transformer through Principled Context Control | [
"Xuechen Zhang",
"Xiangyu Chang",
"Mingchen Li",
"Amit Roy-Chowdhury",
"Jiasi Chen",
"Samet Oymak"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
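The temperature-scaling idea above can be sketched directly: each query gets its own softmax temperature from its embedding and position. The code below is our illustration (with `tau_net` assumed to be, e.g., `torch.nn.Linear(d, 1)`, and an assumed logarithmic position term); the paper's exact parameterization and the value-side scaling are not reproduced.

```python
# Illustrative sketch: selective self-attention with per-query temperature.
import math
import torch

def selective_attention(q, k, v, tau_net):
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)      # (B, T, T)
    pos = torch.arange(q.size(-2), device=q.device).float()
    tau = torch.nn.functional.softplus(tau_net(q)).squeeze(-1) \
          + 0.1 * torch.log1p(pos)                        # query- and position-dependent
    attn = torch.softmax(scores / tau.unsqueeze(-1), dim=-1)
    return attn @ v

tau_net = torch.nn.Linear(16, 1)
q = k = v = torch.randn(2, 5, 16)
print(selective_attention(q, k, v, tau_net).shape)        # torch.Size([2, 5, 16])
```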
null | https://openreview.net/forum?id=QbPHYPZKJI | @inproceedings{
sorrenson2024learning,
title={Learning Distributions on Manifolds with Free-Form Flows},
author={Peter Sorrenson and Felix Draxler and Armand Rousselot and Sander Hummerich and Ullrich Koethe},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=QbPHYPZKJI}
} | We propose Manifold Free-Form Flows (M-FFF), a simple new generative model for data on manifolds. The existing approaches to learning a distribution on arbitrary manifolds are expensive at inference time, since sampling requires solving a differential equation. Our method overcomes this limitation by sampling in a single function evaluation. The key innovation is to optimize a neural network via maximum likelihood on the manifold, possible by adapting the free-form flow framework to Riemannian manifolds. M-FFF is straightforwardly adapted to any manifold with a known projection. It consistently matches or outperforms previous single-step methods specialized to specific manifolds. It is typically two orders of magnitude faster than multi-step methods based on diffusion or flow matching, achieving better likelihoods in several experiments. We provide our code at https://github.com/vislearn/FFF. | Learning Distributions on Manifolds with Free-Form Flows | [
"Peter Sorrenson",
"Felix Draxler",
"Armand Rousselot",
"Sander Hummerich",
"Ullrich Koethe"
] | NeurIPS.cc/2024/Conference | 2312.09852 | [
"https://github.com/vislearn/fff"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
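The single-function-evaluation sampling described above amounts to one latent draw, one decoder pass, and one known projection onto the manifold. The sketch below is ours, using the unit sphere as the manifold and an untrained linear layer as a stand-in for the learned free-form flow.

```python
# Illustrative sketch: one-step sampling on a manifold with a known projection.
import torch

def sample_on_sphere(decoder, n, d_latent):
    z = torch.randn(n, d_latent)                   # one latent draw ...
    x = decoder(z)                                 # ... one network evaluation ...
    return x / x.norm(dim=-1, keepdim=True)        # ... one projection to the sphere

decoder = torch.nn.Linear(16, 3)                   # toy stand-in for the flow
print(sample_on_sphere(decoder, 4, 16).norm(dim=-1))   # all ones: on the manifold
```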