bibtex_url
null
proceedings
stringlengths
42
42
bibtext
stringlengths
197
848
abstract
stringlengths
303
3.45k
title
stringlengths
10
159
authors
sequencelengths
1
34
id
stringclasses
44 values
arxiv_id
stringlengths
0
10
GitHub
sequencelengths
1
1
paper_page
stringclasses
899 values
n_linked_authors
int64
-1
13
upvotes
int64
-1
109
num_comments
int64
-1
13
n_authors
int64
-1
92
Models
sequencelengths
0
100
Datasets
sequencelengths
0
19
Spaces
sequencelengths
0
100
old_Models
sequencelengths
0
100
old_Datasets
sequencelengths
0
19
old_Spaces
sequencelengths
0
100
paper_page_exists_pre_conf
int64
0
1
type
stringclasses
2 values
null
https://openreview.net/forum?id=oTv6Qa12G0
@inproceedings{ luyten2024a, title={A theoretical design of concept sets: improving the predictability of concept bottleneck models}, author={Max Ruiz Luyten and Mihaela van der Schaar}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oTv6Qa12G0} }
Concept-based learning, a promising approach in machine learning, emphasizes the value of high-level representations called concepts. However, despite growing interest in concept-bottleneck models (CBMs), there is a lack of clear understanding regarding the properties of concept sets and their impact on model performance. In this work, we define concepts within the machine learning context, highlighting their core properties: 'expressiveness' and 'model-aware inductive bias', and we make explicit the underlying assumption of CBMs. We establish theoretical results for concept-bottleneck models (CBMs), revealing how these properties guide the design of concept sets that optimize model performance. Specifically, we demonstrate that well-chosen concept sets can improve sample efficiency and out-of-distribution robustness in the appropriate regimes. Based on these insights, we propose a method to effectively identify informative and non-redundant concepts. We validate our approach with experiments on CIFAR-10 and MetaShift, showing that concept-bottleneck models outperform the foundational embedding counterpart, particularly in low-data regimes and under distribution shifts. We also examine failure modes and discuss how they can be tackled.
A theoretical design of concept sets: improving the predictability of concept bottleneck models
[ "Max Ruiz Luyten", "Mihaela van der Schaar" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oTZYhOAMhX
@inproceedings{ liu2024identify, title={Identify Then Recommend: Towards Unsupervised Group Recommendation}, author={Yue Liu and Shihao Zhu and Tianyuan Yang and Jian Ma and Wenliang Zhong}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oTZYhOAMhX} }
Group Recommendation (GR), which aims to recommend items to groups of users, has become a promising and practical direction for recommendation systems. This paper points out two issues of the state-of-the-art GR models. (1) The pre-defined and fixed number of user groups is inadequate for real-time industrial recommendation systems, where the group distribution can shift dynamically. (2) The training schema of existing GR methods is supervised, necessitating expensive user-group and group-item labels, leading to significant annotation costs. To this end, we present a novel unsupervised group recommendation framework named $\underline{\text{I}}$dentify $\underline{\text{T}}$hen $\underline{\text{R}}$ecommend ($\underline{\text{ITR}}$), where it first identifies the user groups in an unsupervised manner even without the pre-defined number of groups, and then two pre-text tasks are designed to conduct self-supervised group recommendation. Concretely, at the group identification stage, we first estimate the adaptive density of each user point, where areas with higher densities are more likely to be recognized as group centers. Then, a heuristic merge-and-split strategy is designed to discover the user groups and decision boundaries. Subsequently, at the self-supervised learning stage, the pull-and-repulsion pre-text task is proposed to optimize the user-group distribution. Besides, the pseudo group recommendation pre-text task is designed to assist the recommendations. Extensive experiments demonstrate the superiority and effectiveness of ITR on both user recommendation (e.g., 22.22\% NDCG@5 $\uparrow$) and group recommendation (e.g., 22.95\% NDCG@5 $\uparrow$). Furthermore, we deploy ITR on the industrial recommender and achieve promising results.
Identify Then Recommend: Towards Unsupervised Group Recommendation
[ "Yue Liu", "Shihao Zhu", "Tianyuan Yang", "Jian Ma", "Wenliang Zhong" ]
NeurIPS.cc/2024/Conference
2410.23757
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oTEttMIymz
@inproceedings{ han2024binocularguided, title={Binocular-Guided 3D Gaussian Splatting with View Consistency for Sparse View Synthesis}, author={Liang Han and Junsheng Zhou and Yu-Shen Liu and Zhizhong Han}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oTEttMIymz} }
Novel view synthesis from sparse inputs is a vital yet challenging task in 3D computer vision. Previous methods explore 3D Gaussian Splatting with neural priors (e.g. depth priors) as an additional supervision, demonstrating promising quality and efficiency compared to the NeRF based methods. However, the neural priors from 2D pretrained models are often noisy and blurry, which struggle to precisely guide the learning of radiance fields. In this paper, We propose a novel method for synthesizing novel views from sparse views with Gaussian Splatting that does not require external prior as supervision. Our key idea lies in exploring the self-supervisions inherent in the binocular stereo consistency between each pair of binocular images constructed with disparity-guided image warping. To this end, we additionally introduce a Gaussian opacity constraint which regularizes the Gaussian locations and avoids Gaussian redundancy for improving the robustness and efficiency of inferring 3D Gaussians from sparse views. Extensive experiments on the LLFF, DTU, and Blender datasets demonstrate that our method significantly outperforms the state-of-the-art methods.
Binocular-Guided 3D Gaussian Splatting with View Consistency for Sparse View Synthesis
[ "Liang Han", "Junsheng Zhou", "Yu-Shen Liu", "Zhizhong Han" ]
NeurIPS.cc/2024/Conference
2410.18822
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oSOVME9kl2
@inproceedings{ li2024implicit, title={Implicit Regularization of Sharpness-Aware Minimization for Scale-Invariant Problems}, author={Bingcong Li and Liang Zhang and Niao He}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oSOVME9kl2} }
Sharpness-aware minimization (SAM) improves generalization of various deep learning tasks. Motivated by popular architectures such as LoRA, we explore the implicit regularization of SAM for scale-invariant problems involving two groups of variables. Instead of focusing on commonly used sharpness, this work introduces a concept termed *balancedness*, defined as the difference between the squared norm of two variables. This allows us to depict richer global behaviors of SAM. In particular, our theoretical and empirical findings reveal that i) SAM promotes balancedness; and ii) the regularization on balancedness is *data-responsive* -- outliers have stronger impact. The latter coincides with empirical observations that SAM outperforms SGD in the presence of outliers. Leveraging the implicit regularization, we develop a resource-efficient SAM variant, balancedness-aware regularization (BAR), tailored for scale-invariant problems such as finetuning language models with LoRA. BAR saves 95% computational overhead of SAM, with enhanced test performance across various tasks on RoBERTa, GPT2, and OPT-1.3B.
Implicit Regularization of Sharpness-Aware Minimization for Scale-Invariant Problems
[ "Bingcong Li", "Liang Zhang", "Niao He" ]
NeurIPS.cc/2024/Conference
2410.14802
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oQ1Zj9iH88
@inproceedings{ chen2024penaltybased, title={Penalty-based Methods for Simple Bilevel Optimization under H\"olderian Error Bounds}, author={Pengyu Chen and Xu Shi and Rujun Jiang and Jiulin Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oQ1Zj9iH88} }
This paper investigates simple bilevel optimization problems where we minimize a convex upper-level objective over the optimal solution set of a convex lower-level objective. Existing methods for such problems either only guarantee asymptotic convergence, have slow sublinear rates, or require strong assumptions. To address these challenges, we propose a penalization framework that delineates the relationship between approximate solutions of the original problem and its reformulated counterparts. This framework accommodates varying assumptions regarding smoothness and convexity, enabling the application of specific methods with different complexity results. Specifically, when both upper- and lower-level objectives are composite convex functions, under an $\alpha$-Hölderian error bound condition and certain mild assumptions, our algorithm attains an $(\epsilon,\epsilon^{\beta})$-optimal solution of the original problem for any $\beta> 0$ within $\mathcal{O}\left(\sqrt{{1}/{\epsilon^{\max\\{\alpha,\beta\\}}}}\right)$ iterations. The result can be improved further if the smooth part of the upper-level objective is strongly convex. We also establish complexity results when the upper- and lower-level objectives are general nonsmooth functions. Numerical experiments demonstrate the effectiveness of our algorithms.
Penalty-based Methods for Simple Bilevel Optimization under Hölderian Error Bounds
[ "Pengyu Chen", "Xu Shi", "Rujun Jiang", "Jiulin Wang" ]
NeurIPS.cc/2024/Conference
2402.02155
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oPvBnPTbQv
@inproceedings{ wang2024referencing, title={Referencing Where to Focus: Improving Visual Grounding with Referential Query}, author={Yabing Wang and Zhuotao Tian and Qingpei Guo and Zheng Qin and Sanping Zhou and Ming Yang and Le Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oPvBnPTbQv} }
Visual Grounding aims to localize the referring object in an image given a natural language expression. Recent advancements in DETR-based visual grounding methods have attracted considerable attention, as they directly predict the coordinates of the target object without relying on additional efforts, such as pre-generated proposal candidates or pre-defined anchor boxes. However, existing research primarily focuses on designing stronger multi-modal decoder, which typically generates learnable queries by random initialization or by using linguistic embeddings. This vanilla query generation approach inevitably increases the learning difficulty for the model, as it does not involve any target-related information at the beginning of decoding. Furthermore, they only use the deepest image feature during the query learning process, overlooking the importance of features from other levels. To address these issues, we propose a novel approach, called RefFormer. It consists of the query adaption module that can be seamlessly integrated into CLIP and generate the referential query to provide the prior context for decoder, along with a task-specific decoder. By incorporating the referential query into the decoder, we can effectively mitigate the learning difficulty of the decoder, and accurately concentrate on the target object. Additionally, our proposed query adaption module can also act as an adapter, preserving the rich knowledge within CLIP without the need to tune the parameters of the backbone network. Extensive experiments demonstrate the effectiveness and efficiency of our proposed method, outperforming state-of-the-art approaches on five visual grounding benchmarks.
Referencing Where to Focus: Improving Visual Grounding with Referential Query
[ "Yabing Wang", "Zhuotao Tian", "Qingpei Guo", "Zheng Qin", "Sanping Zhou", "Ming Yang", "Le Wang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oPFjhl6DpR
@inproceedings{ gu2024enhancing, title={Enhancing Efficiency of Safe Reinforcement Learning via Sample Manipulation}, author={Shangding Gu and Laixi Shi and Yuhao Ding and Alois Knoll and Costas Spanos and Adam Wierman and Ming Jin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oPFjhl6DpR} }
Safe reinforcement learning (RL) is crucial for deploying RL agents in real-world applications, as it aims to maximize long-term rewards while satisfying safety constraints. However, safe RL often suffers from sample inefficiency, requiring extensive interactions with the environment to learn a safe policy. We propose Efficient Safe Policy Optimization (ESPO), a novel approach that enhances the efficiency of safe RL through sample manipulation. ESPO employs an optimization framework with three modes: maximizing rewards, minimizing costs, and balancing the trade-off between the two. By dynamically adjusting the sampling process based on the observed conflict between reward and safety gradients, ESPO theoretically guarantees convergence, optimization stability, and improved sample complexity bounds. Experiments on the Safety-MuJoCo and Omnisafe benchmarks demonstrate that ESPO significantly outperforms existing primal-based and primal-dual-based baselines in terms of reward maximization and constraint satisfaction. Moreover, ESPO achieves substantial gains in sample efficiency, requiring 25--29\% fewer samples than baselines, and reduces training time by 21--38\%.
Enhancing Efficiency of Safe Reinforcement Learning via Sample Manipulation
[ "Shangding Gu", "Laixi Shi", "Yuhao Ding", "Alois Knoll", "Costas Spanos", "Adam Wierman", "Ming Jin" ]
NeurIPS.cc/2024/Conference
2405.20860
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oNMnR0NJ2e
@inproceedings{ qin2024a, title={A Label is Worth A Thousand Images in Dataset Distillation}, author={Tian Qin and Zhiwei Deng and David Alvarez-Melis}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oNMnR0NJ2e} }
Data *quality* is a crucial factor in the performance of machine learning models, a principle that dataset distillation methods exploit by compressing training datasets into much smaller counterparts that maintain similar downstream performance. Understanding how and why data distillation methods work is vital not only for improving these methods but also for revealing fundamental characteristics of "good” training data. However, a major challenge in achieving this goal is the observation that distillation approaches, which rely on sophisticated but mostly disparate methods to generate synthetic data, have little in common with each other. In this work, we highlight a largely overlooked aspect common to most of these methods: the use of soft (probabilistic) labels. Through a series of ablation experiments, we study the role of soft labels in depth. Our results reveal that the main factor explaining the performance of state-of-the-art distillation methods is not the specific techniques used to generate synthetic data but rather the use of soft labels. Furthermore, we demonstrate that not all soft labels are created equal; they must contain *structured information* to be beneficial. We also provide empirical scaling laws that characterize the effectiveness of soft labels as a function of images-per-class in the distilled dataset and establish an empirical Pareto frontier for data-efficient learning. Combined, our findings challenge conventional wisdom in dataset distillation, underscore the importance of soft labels in learning, and suggest new directions for improving distillation methods. Code for all experiments is available at https://github.com/sunnytqin/no-distillation.
A Label is Worth A Thousand Images in Dataset Distillation
[ "Tian Qin", "Zhiwei Deng", "David Alvarez-Melis" ]
NeurIPS.cc/2024/Conference
2406.10485
[ "https://github.com/sunnytqin/no-distillation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oMHpejyGdx
@inproceedings{ wan2024promptagnostic, title={Prompt-Agnostic Adversarial Perturbation for Customized Diffusion Models}, author={Cong Wan and Yuhang He and Xiang Song and Yihong Gong}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oMHpejyGdx} }
Diffusion models have revolutionized customized text-to-image generation, allowing for efficient synthesis of photos from personal data with textual descriptions. However, these advancements bring forth risks including privacy breaches and unauthorized replication of artworks. Previous researches primarily center around using “prompt-specific methods” to generate adversarial examples to protect personal images, yet the effectiveness of existing methods is hindered by constrained adaptability to different prompts. In this paper, we introduce a Prompt-Agnostic Adversarial Perturbation (PAP) method for customized diffusion models. PAP first models the prompt distribution using a Laplace Approximation, and then produces prompt-agnostic perturbations by maximizing a disturbance expectation based on the modeled distribution. This approach effectively tackles the prompt-agnostic attacks, leading to improved defense stability. Extensive experiments in face privacy and artistic style protection, demonstrate the superior generalization of our method in comparison to existing techniques.
Prompt-Agnostic Adversarial Perturbation for Customized Diffusion Models
[ "Cong Wan", "Yuhang He", "Xiang Song", "Yihong Gong" ]
NeurIPS.cc/2024/Conference
2408.10571
[ "https://github.com/vancyland/vancyland.github.io-project-PAP" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oLoqHRbXYE
@inproceedings{ hu2024selftaught, title={Self-Taught Recognizer: Toward Unsupervised Adaptation for Speech Foundation Models}, author={Yuchen Hu and Chen Chen and Chao-Han Huck Yang and Chengwei Qin and Pin-Yu Chen and EngSiong Chng and Chao Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oLoqHRbXYE} }
We propose an unsupervised adaptation framework, Self-TAught Recognizer (STAR), which leverages unlabeled data to enhance the robustness of automatic speech recognition (ASR) systems in diverse target domains, such as noise and accents. STAR is developed for prevalent speech foundation models based on Transformer-related architecture with auto-regressive decoding (e.g., Whisper, Canary). Specifically, we propose a novel indicator that empirically integrates step-wise information during decoding to assess the token-level quality of pseudo labels without ground truth, thereby guiding model updates for effective unsupervised adaptation. Experimental results show that STAR achieves an average of 13.5% relative reduction in word error rate across 14 target domains, and it sometimes even approaches the upper-bound performance of supervised adaptation. Surprisingly, we also observe that STAR prevents the adapted model from the common catastrophic forgetting problem without recalling source-domain data. Furthermore, STAR exhibits high data efficiency that only requires less than one-hour unlabeled data, and seamless generality to alternative large speech models and speech translation tasks. Our code aims to open source to the research communities.
Self-Taught Recognizer: Toward Unsupervised Adaptation for Speech Foundation Models
[ "Yuchen Hu", "Chen Chen", "Chao-Han Huck Yang", "Chengwei Qin", "Pin-Yu Chen", "EngSiong Chng", "Chao Zhang" ]
NeurIPS.cc/2024/Conference
2405.14161
[ "https://github.com/yuchen005/star-adapt" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oLcPadFrY3
@inproceedings{ li2024adapkc, title={Ada{PKC}: PeakConv with Adaptive Peak Receptive Field for Radar Semantic Segmentation}, author={Teng Li and Liwen Zhang and Youcheng Zhang and ZijunHu and Pengcheng Pi and Zongqing Lu and Qingmin Liao and Zhe Ma}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oLcPadFrY3} }
Deep learning-based radar detection technology is receiving increasing attention in areas such as autonomous driving, UAV surveillance, and marine monitoring. Among recent efforts, PeakConv (PKC) provides a solution that can retain the peak response characteristics of radar signals and play the characteristics of deep convolution, thereby improving the effect of radar semantic segmentation (RSS). However, due to the use of a pre-set fixed peak receptive field sampling rule, PKC still has limitations in dealing with problems such as inconsistency of target frequency domain response broadening, non-homogeneous and time-varying characteristic of noise/clutter distribution. Therefore, this paper proposes an idea of adaptive peak receptive field, and upgrades PKC to AdaPKC based on this idea. Beyond that, a novel fine-tuning technology to further boost the performance of AdaPKC-based RSS networks is presented. Through experimental verification using various real-measured radar data (including publicly available low-cost millimeter-wave radar dataset for autonomous driving and self-collected Ku-band surveillance radar dataset), we found that the performance of AdaPKC-based models surpasses other SoTA methods in RSS tasks. The code is available at https://github.com/lihua199710/AdaPKC.
AdaPKC: PeakConv with Adaptive Peak Receptive Field for Radar Semantic Segmentation
[ "Teng Li", "Liwen Zhang", "Youcheng Zhang", "ZijunHu", "Pengcheng Pi", "Zongqing Lu", "Qingmin Liao", "Zhe Ma" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oFgTScAsBr
@inproceedings{ ma2024masked, title={Masked Pre-training Enables Universal Zero-shot Denoiser}, author={Xiaoxiao Ma and Zhixiang Wei and Yi Jin and Pengyang Ling and Tianle Liu and Ben Wang and Junkang Dai and Huaian Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oFgTScAsBr} }
In this work, we observe that model trained on vast general images via masking strategy, has been naturally embedded with their distribution knowledge, thus spontaneously attains the underlying potential for strong image denoising. Based on this observation, we propose a novel zero-shot denoising paradigm, i.e., $\textbf{M}$asked $\textbf{P}$re-train then $\textbf{I}$terative fill ($\textbf{MPI}$). MPI first trains model via masking and then employs pre-trained weight for high-quality zero-shot image denoising on a single noisy image. Concretely, MPI comprises two key procedures: $\textbf{1) Masked Pre-training}$ involves training model to reconstruct massive natural images with random masking for generalizable representations, gathering the potential for valid zero-shot denoising on images with varying noise degradation and even in distinct image types. $\textbf{2) Iterative filling}$ exploits pre-trained knowledge for effective zero-shot denoising. It iteratively optimizes the image by leveraging pre-trained weights, focusing on alternate reconstruction of different image parts, and gradually assembles fully denoised image within limited number of iterations. Comprehensive experiments across various noisy scenarios underscore the notable advances of MPI over previous approaches with a marked reduction in inference time.
Masked Pre-training Enables Universal Zero-shot Denoiser
[ "Xiaoxiao Ma", "Zhixiang Wei", "Yi Jin", "Pengyang Ling", "Tianle Liu", "Ben Wang", "Junkang Dai", "Huaian Chen" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oEVsxVdush
@inproceedings{ sun2024soft, title={Soft Tensor Product Representations for Fully Continuous, Compositional Visual Representations}, author={Bethia Sun and Maurice Pagnucco and Yang Song}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oEVsxVdush} }
Since the inception of the classicalist vs. connectionist debate, it has been argued that the ability to systematically combine symbol-like entities into compositional representations is crucial for human intelligence. In connectionist systems, the field of disentanglement has emerged to address this need by producing representations with explicitly separated factors of variation (FoV). By treating the overall representation as a *string-like concatenation* of the inferred FoVs, however, disentanglement provides a fundamentally *symbolic* treatment of compositional structure, one inherently at odds with the underlying *continuity* of deep learning vector spaces. We hypothesise that this symbolic-continuous mismatch produces broadly suboptimal performance in deep learning models that learn or use such representations. To fully align compositional representations with continuous vector spaces, we extend Smolensky's Tensor Product Representation (TPR) and propose a new type of inherently *continuous* compositional representation, *Soft TPR*, along with a theoretically-principled architecture, *Soft TPR Autoencoder*, designed specifically for learning Soft TPRs. In the visual representation learning domain, our Soft TPR confers broad benefits over symbolic compositional representations: state-of-the-art disentanglement and improved representation learner convergence, along with enhanced sample efficiency and superior low-sample regime performance for downstream models, empirically affirming the value of our inherently *continuous* compositional representation learning framework.
Soft Tensor Product Representations for Fully Continuous, Compositional Visual Representations
[ "Bethia Sun", "Maurice Pagnucco", "Yang Song" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oEKFPSOWpp
@inproceedings{ liu2024neuralsteiner, title={NeuralSteiner: Learning Steiner Tree for Overflow-avoiding Global Routing in Chip Design}, author={Ruizhi Liu and ZhishengZeng and Shizhe Ding and Jingyan Sui and Xingquan Li and Dongbo Bu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oEKFPSOWpp} }
Global routing plays a critical role in modern chip design. The routing paths generated by global routers often form a rectilinear Steiner tree (RST). Recent advances from the machine learning community have shown the power of learning-based route generation; however, the yielded routing paths by the existing approaches often suffer from considerable overflow, thus greatly hindering their application in practice. We propose NeuralSteiner, an accurate approach to overflow-avoiding global routing in chip design. The key idea of NeuralSteiner approach is to learn Steiner trees: we first predict the locations of highly likely Steiner points by adopting a neural network considering full-net spatial and overflow information, then select appropriate points by running a graph-based post-processing algorithm, and finally connect these points with the input pins to yield overflow-avoiding RSTs. NeuralSteiner offers two advantages over previous learning-based models. First, by using the learning scheme, NeuralSteiner ensures the connectivity of generated routes while significantly reducing congestion. Second, NeuralSteiner can effectively scale to large nets and transfer to unseen chip designs without any modifications or fine-tuning. Extensive experiments over public large-scale benchmarks reveal that, compared with the state-of-the-art deep generative methods, NeuralSteiner achieves up to a 99.8\% reduction in overflow while speeding up the generation and maintaining a slight wirelength loss within only 1.8\%.
NeuralSteiner: Learning Steiner Tree for Overflow-avoiding Global Routing in Chip Design
[ "Ruizhi Liu", "ZhishengZeng", "Shizhe Ding", "Jingyan Sui", "Xingquan Li", "Dongbo Bu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oDeqjIM9Sk
@inproceedings{ kobayashi2024weight, title={Weight decay induces low-rank attention layers}, author={Seijin Kobayashi and Yassir Akram and Johannes Von Oswald}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oDeqjIM9Sk} }
The effect of regularizers such as weight decay when training deep neural networks is not well understood. We study the influence of weight decay as well as $L2$-regularization when training neural network models in which parameter matrices interact multiplicatively. This combination is of particular interest as this parametrization is common in attention layers, the workhorse of transformers. Here, key-query, as well as value-projection parameter matrices, are multiplied directly with each other: $W_K^TW_Q$ and $PW_V$. We extend previous results and show on one hand that any local minimum of a $L2$-regularized loss of the form $L(AB^\top) + \lambda (\|A\|^2 + \|B\|^2)$ coincides with a minimum of the nuclear norm-regularized loss $L(AB^\top) + \lambda\|AB^\top\|_*$, and on the other hand that the 2 losses become identical exponentially quickly during training. We thus complement existing works linking $L2$-regularization with low-rank regularization, and in particular, explain why such regularization on the matrix product affects early stages of training. Based on these theoretical insights, we verify empirically that the key-query and value-projection matrix products $W_K^TW_Q, PW_V$ within attention layers, when optimized with weight decay, as usually done in vision tasks and language modelling, indeed induce a significant reduction in the rank of $W_K^TW_Q$ and $PW_V$, even in fully online training. We find that, in accordance with existing work, inducing low rank in attention matrix products can damage language model performance, and observe advantages when decoupling weight decay in attention layers from the rest of the parameters.
Weight decay induces low-rank attention layers
[ "Seijin Kobayashi", "Yassir Akram", "Johannes Von Oswald" ]
NeurIPS.cc/2024/Conference
2410.23819
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oCGkSH7ys2
@inproceedings{ cheng2024selfplaying, title={Self-playing Adversarial Language Game Enhances {LLM} Reasoning}, author={Pengyu Cheng and Tianhao Hu and Han Xu and Zhisong Zhang and Yong Dai and Lei Han and nan du and Xiaolong Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oCGkSH7ys2} }
We explore the potential of self-play training for large language models (LLMs) in a two-player adversarial language game called Adversarial Taboo. In this game, an attacker and a defender communicate around a target word only visible to the attacker. The attacker aims to induce the defender to speak the target word unconsciously, while the defender tries to infer the target word from the attacker's utterances. To win the game, both players must have sufficient knowledge about the target word and high-level reasoning ability to infer and express in this information-reserved conversation. Hence, we are curious about whether LLMs' reasoning ability can be further enhanced by Self-Playing this Adversarial language Game (SPAG). With this goal, we select several open-source LLMs and let each act as the attacker and play with a copy of itself as the defender on an extensive range of target words. Through reinforcement learning on the game outcomes, we observe that the LLMs' performances uniformly improve on a broad range of reasoning benchmarks. Furthermore, iteratively adopting this self-play process can continuously promote LLMs' reasoning abilities. The code is available at https://github.com/Linear95/SPAG.
Self-playing Adversarial Language Game Enhances LLM Reasoning
[ "Pengyu Cheng", "Tianhao Hu", "Han Xu", "Zhisong Zhang", "Yong Dai", "Lei Han", "nan du", "Xiaolong Li" ]
NeurIPS.cc/2024/Conference
2404.10642
[ "https://github.com/linear95/spag" ]
https://huggingface.co/papers/2404.10642
0
0
0
7
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=oBvaZJ1C71
@inproceedings{ todd2024gavel, title={{GAVEL}: Generating Games via Evolution and Language Models}, author={Graham Todd and Alexander George Padula and Matthew Stephenson and Eric Piette and Dennis J. N. J. Soemers and Julian Togelius}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=oBvaZJ1C71} }
Automatically generating novel and interesting games is a complex task. Challenges include representing game rules in a computationally workable form, searching through the large space of potential games under most such representations, and accurately evaluating the originality and quality of previously unseen games. Prior work in automated game generation has largely focused on relatively restricted rule representations and relied on domain-specific heuristics. In this work, we explore the generation of novel games in the comparatively expansive Ludii game description language, which encodes the rules of over 1000 board games in a variety of styles and modes of play. We draw inspiration from recent advances in large language models and evolutionary computation in order to train a model that intelligently mutates and recombines games and mechanics expressed as code. We demonstrate both quantitatively and qualitatively that our approach is capable of generating new and interesting games, including in regions of the potential rules space not covered by existing games in the Ludii dataset.
GAVEL: Generating Games via Evolution and Language Models
[ "Graham Todd", "Alexander George Padula", "Matthew Stephenson", "Eric Piette", "Dennis J. N. J. Soemers", "Julian Togelius" ]
NeurIPS.cc/2024/Conference
2407.09388
[ "" ]
https://huggingface.co/papers/2407.09388
2
14
2
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=o9Lkiv1qpc
@inproceedings{ zhao2024identifying, title={Identifying and Solving Conditional Image Leakage in Image-to-Video Diffusion Model}, author={Min Zhao and Hongzhou Zhu and Chendong Xiang and Kaiwen Zheng and Chongxuan Li and Jun Zhu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=o9Lkiv1qpc} }
Diffusion models have obtained substantial progress in image-to-video generation. However, in this paper, we find that these models tend to generate videos with less motion than expected. We attribute this to the issue called conditional image leakage, where the image-to-video diffusion models (I2V-DMs) tend to over-rely on the conditional image at large time steps. We further address this challenge from both inference and training aspects. First, we propose to start the generation process from an earlier time step to avoid the unreliable large-time steps of I2V-DMs, as well as an initial noise distribution with optimal analytic expressions (Analytic-Init) by minimizing the KL divergence between it and the actual marginal distribution to bridge the training-inference gap. Second, we design a time-dependent noise distribution (TimeNoise) for the conditional image during training, applying higher noise levels at larger time steps to disrupt it and reduce the model's dependency on it. We validate these general strategies on various I2V-DMs on our collected open-domain image benchmark and the UCF101 dataset. Extensive results show that our methods outperform baselines by producing higher motion scores with lower errors while maintaining image alignment and temporal consistency, thereby yielding superior overall performance and enabling more accurate motion control. The project page: \url{https://cond-image-leak.github.io/}.
Identifying and Solving Conditional Image Leakage in Image-to-Video Diffusion Model
[ "Min Zhao", "Hongzhou Zhu", "Chendong Xiang", "Kaiwen Zheng", "Chongxuan Li", "Jun Zhu" ]
NeurIPS.cc/2024/Conference
2406.15735
[ "" ]
https://huggingface.co/papers/2406.15735
0
0
0
6
[]
[]
[ "Xiang-cd/DynamiCrafter-CIL" ]
[]
[]
[ "Xiang-cd/DynamiCrafter-CIL" ]
1
poster
null
https://openreview.net/forum?id=o8m4RM5mBk
@inproceedings{ zou2024attention, title={Attention Temperature Matters in ViT-Based Cross-Domain Few-Shot Learning}, author={Yixiong Zou and Ran Ma and Yuhua Li and Ruixuan Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=o8m4RM5mBk} }
Cross-domain few-shot learning (CDFSL) is proposed to transfer knowledge from large-scale source-domain datasets to downstream target-domain datasets with only a few training samples. However, Vision Transformer (ViT), as a strong backbone network to achieve many top performances, is still under-explored in the CDFSL task in its transferability against large domain gaps. In this paper, we find an interesting phenomenon of ViT in the CDFSL task: by simply multiplying a temperature (even as small as 0) to the attention in ViT blocks, the target-domain performance consistently increases, even though the attention map is downgraded to a uniform map. In this paper, we delve into this phenomenon for an interpretation. Through experiments, we interpret this phenomenon as a remedy for the ineffective target-domain attention caused by the query-key attention mechanism under large domain gaps. Based on it, we further propose a simple but effective method for the CDFSL task to boost ViT's transferability by resisting the learning of query-key parameters and encouraging that of non-query-key ones. Experiments on four CDFSL datasets validate the rationale of our interpretation and method, showing we can consistently outperform state-of-the-art methods. Our codes are available at https://github.com/Zoilsen/Attn_Temp_CDFSL.
Attention Temperature Matters in ViT-Based Cross-Domain Few-Shot Learning
[ "Yixiong Zou", "Ran Ma", "Yuhua Li", "Ruixuan Li" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=o863gX6DxA
@inproceedings{ tang2024code, title={Code Repair with {LLM}s gives an Exploration-Exploitation Tradeoff}, author={Hao Tang and Keya Hu and Jin Peng Zhou and Si Cheng Zhong and Wei-Long Zheng and Xujie Si and Kevin Ellis}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=o863gX6DxA} }
Iteratively improving and repairing source code with large language models (LLMs), known as refinement, has emerged as a popular way of generating programs that would be too complex to construct in one shot. Given a bank of test cases, together with a candidate program, an LLM can improve that program by being prompted with failed test cases. But it remains an open question how to best iteratively refine code, with prior work employing simple greedy or breadth-first strategies. We show here that refinement exposes an explore-exploit tradeoff: exploit by refining the program that passes the most test cases, or explore by refining a lesser considered program. We frame this as an arm-acquiring bandit problem, which we solve with Thompson Sampling. The resulting LLM-based program synthesis algorithm is broadly applicable: Across loop invariant synthesis, visual reasoning puzzles, and competition programming problems, we find that our new method can solve more problems using fewer language model calls.
Code Repair with LLMs gives an Exploration-Exploitation Tradeoff
[ "Hao Tang", "Keya Hu", "Jin Peng Zhou", "Si Cheng Zhong", "Wei-Long Zheng", "Xujie Si", "Kevin Ellis" ]
NeurIPS.cc/2024/Conference
2405.17503
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=o7DOGbZeyP
@inproceedings{ fuller2024lookhere, title={LookHere: Vision Transformers with Directed Attention Generalize and Extrapolate}, author={Anthony Fuller and Daniel Kyrollos and Yousef Yassin and James R Green}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=o7DOGbZeyP} }
High-resolution images offer more information about scenes that can improve model accuracy. However, the dominant model architecture in computer vision, the vision transformer (ViT), cannot effectively leverage larger images without finetuning — ViTs poorly extrapolate to more patches at test time, although transformers offer sequence length flexibility. We attribute this shortcoming to the current patch position encoding methods, which create a distribution shift when extrapolating. We propose a drop-in replacement for the position encoding of plain ViTs that restricts attention heads to fixed fields of view, pointed in different directions, using 2D attention masks. Our novel method, called LookHere, provides translation-equivariance, ensures attention head diversity, and limits the distribution shift that attention heads face when extrapolating. We demonstrate that LookHere improves performance on classification (avg. 1.6%), against adversarial attack (avg. 5.4%), and decreases calibration error (avg. 1.5%) — on ImageNet without extrapolation. With extrapolation, LookHere outperforms the current SoTA position encoding method, 2D-RoPE, by 21.7% on ImageNet when trained at $224^2$ px and tested at $1024^2$ px. Additionally, we release a high-resolution test set to improve the evaluation of high-resolution image classifiers, called ImageNet-HR.
LookHere: Vision Transformers with Directed Attention Generalize and Extrapolate
[ "Anthony Fuller", "Daniel Kyrollos", "Yousef Yassin", "James R Green" ]
NeurIPS.cc/2024/Conference
2405.13985
[ "https://github.com/greencubic/lookhere" ]
https://huggingface.co/papers/2405.13985
0
2
0
4
[]
[ "antofuller/ImageNet-HR" ]
[]
[]
[ "antofuller/ImageNet-HR" ]
[]
1
poster
null
https://openreview.net/forum?id=o6Hk6vld20
@inproceedings{ chamon2024constrained, title={Constrained Sampling with Primal-Dual Langevin Monte Carlo}, author={Luiz F. O. Chamon and Mohammad Reza Karimi Jaghargh and Anna Korba}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=o6Hk6vld20} }
This work considers the problem of sampling from a probability distribution known up to a normalization constant while satisfying a set of statistical constraints specified by the expected values of general nonlinear functions. This problem finds applications in, e.g., Bayesian inference, where it can constrain moments to evaluate counterfactual scenarios or enforce desiderata such as prediction fairness. Methods developed to handle support constraints, such as those based on mirror maps, barriers, and penalties, are not suited for this task. This work therefore relies on gradient descent-ascent dynamics in Wasserstein space to put forward a discrete-time primal-dual Langevin Monte Carlo algorithm (PD-LMC) that simultaneously constrains the target distribution and samples from it. We analyze the convergence of PD-LMC under standard assumptions on the target distribution and constraints, namely (strong) convexity and log-Sobolev inequalities. To do so, we bring classical optimization arguments for saddle-point algorithms to the geometry of Wasserstein space. We illustrate the relevance and effectiveness of PD-LMC in several applications.
Constrained Sampling with Primal-Dual Langevin Monte Carlo
[ "Luiz F. O. Chamon", "Mohammad Reza Karimi Jaghargh", "Anna Korba" ]
NeurIPS.cc/2024/Conference
2411.00568
[ "https://github.com/lfochamon/pdlmc" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=o4coDIby7e
@inproceedings{ macdermott2024measuring, title={Measuring Goal-Directedness}, author={Matt MacDermott and James Fox and Francesco Belardinelli and Tom Everitt}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=o4coDIby7e} }
We define maximum entropy goal-directedness (MEG), a formal measure of goal- directedness in causal models and Markov decision processes, and give algorithms for computing it. Measuring goal-directedness is important, as it is a critical element of many concerns about harm from AI. It is also of philosophical interest, as goal-directedness is a key aspect of agency. MEG is based on an adaptation of the maximum causal entropy framework used in inverse reinforcement learning. It can measure goal-directedness with respect to a known utility function, a hypothesis class of utility functions, or a set of random variables. We prove that MEG satisfies several desiderata and demonstrate our algorithms with small-scale experiments.
Measuring Goal-Directedness
[ "Matt MacDermott", "James Fox", "Francesco Belardinelli", "Tom Everitt" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=o3i1JEfzKw
@inproceedings{ cai2024provable, title={Provable Partially Observable Reinforcement Learning with Privileged Information}, author={Yang Cai and Xiangyu Liu and Argyris Oikonomou and Kaiqing Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=o3i1JEfzKw} }
Partial observability of the underlying states generally presents significant challenges for reinforcement learning (RL). In practice, certain *privileged information* , e.g., the access to states from simulators, has been exploited in training and achieved prominent empirical successes. To better understand the benefits of privileged information, we revisit and examine several simple and practically used paradigms in this setting, with both computation and sample efficiency analyses. Specifically, we first formalize the empirical paradigm of *expert distillation* (also known as *teacher-student* learning), demonstrating its pitfall in finding near-optimal policies. We then identify a condition of the partially observable environment, the deterministic filter condition, under which expert distillation achieves sample and computational complexities that are *both* polynomial. Furthermore, we investigate another successful empirical paradigm of *asymmetric actor-critic*, and focus on the more challenging setting of observable partially observable Markov decision processes. We develop a belief-weighted optimistic asymmetric actor-critic algorithm with polynomial sample and quasi-polynomial computational complexities, where one key component is a new provable oracle for learning belief states that preserve *filter stability* under a misspecified model, which may be of independent interest. Finally, we also investigate the provable efficiency of partially observable multi-agent RL (MARL) with privileged information. We develop algorithms with the feature of centralized-training-with-decentralized-execution, a popular framework in empirical MARL, with polynomial sample and (quasi-)polynomial computational complexity in both paradigms above. Compared with a few recent related theoretical studies, our focus is on understanding practically inspired algorithmic paradigms, without computationally intractable oracles.
Provable Partially Observable Reinforcement Learning with Privileged Information
[ "Yang Cai", "Xiangyu Liu", "Argyris Oikonomou", "Kaiqing Zhang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nyp59a31Ju
@inproceedings{ park2024is, title={Is Value Learning Really the Main Bottleneck in Offline {RL}?}, author={Seohong Park and Kevin Frans and Sergey Levine and Aviral Kumar}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nyp59a31Ju} }
While imitation learning requires access to high-quality data, offline reinforcement learning (RL) should, in principle, perform similarly or better with substantially lower data quality by using a value function. However, current results indicate that offline RL often performs worse than imitation learning, and it is often unclear what holds back the performance of offline RL. Motivated by this observation, we aim to understand the bottlenecks in current offline RL algorithms. While poor performance of offline RL is typically attributed to an imperfect value function, we ask: *is the main bottleneck of offline RL indeed in learning the value function, or something else?* To answer this question, we perform a systematic empirical study of (1) value learning, (2) policy extraction, and (3) policy generalization in offline RL problems, analyzing how these components affect performance. We make two surprising observations. First, we find that the choice of a policy extraction algorithm significantly affects the performance and scalability of offline RL, often more so than the value learning objective. For instance, we show that common value-weighted behavioral cloning objectives (e.g., AWR) do not fully leverage the learned value function, and switching to behavior-constrained policy gradient objectives (e.g., DDPG+BC) often leads to substantial improvements in performance and scalability. Second, we find that a big barrier to improving offline RL performance is often imperfect policy generalization on test-time states out of the support of the training data, rather than policy learning on in-distribution states. We then show that the use of suboptimal but high-coverage data or test-time policy training techniques can address this generalization issue in practice. Specifically, we propose two simple test-time policy improvement methods and show that these methods lead to better performance.
Is Value Learning Really the Main Bottleneck in Offline RL?
[ "Seohong Park", "Kevin Frans", "Sergey Levine", "Aviral Kumar" ]
NeurIPS.cc/2024/Conference
2406.09329
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nxumYwxJPB
@inproceedings{ ennadir2024if, title={If You Want to Be Robust, Be Wary of Initialization}, author={Sofiane ENNADIR and Johannes F. Lutzeyer and Michalis Vazirgiannis and El houcine Bergou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nxumYwxJPB} }
Graph Neural Networks (GNNs) have demonstrated remarkable performance across a spectrum of graph-related tasks, however concerns persist regarding their vulnerability to adversarial perturbations. While prevailing defense strategies focus primarily on pre-processing techniques and adaptive message-passing schemes, this study delves into an under-explored dimension: the impact of weight initialization and associated hyper-parameters, such as training epochs, on a model’s robustness. We introduce a theoretical framework bridging the connection between initialization strategies and a network's resilience to adversarial perturbations. Our analysis reveals a direct relationship between initial weights, number of training epochs and the model’s vulnerability, offering new insights into adversarial robustness beyond conventional defense mechanisms. While our primary focus is on GNNs, we extend our theoretical framework, providing a general upper-bound applicable to Deep Neural Networks. Extensive experiments, spanning diverse models and real-world datasets subjected to various adversarial attacks, validate our findings. We illustrate that selecting appropriate initialization not only ensures performance on clean datasets but also enhances model robustness against adversarial perturbations, with observed gaps of up to 50\% compared to alternative initialization approaches.
If You Want to Be Robust, Be Wary of Initialization
[ "Sofiane ENNADIR", "Johannes F. Lutzeyer", "Michalis Vazirgiannis", "El houcine Bergou" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nxL7eazKBI
@inproceedings{ hu2024model, title={Model {LEGO}: Creating Models Like Disassembling and Assembling Building Blocks}, author={Jiacong Hu and Jing Gao and Jingwen Ye and Yang Gao and Xingen Wang and Zunlei Feng and Mingli Song}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nxL7eazKBI} }
With the rapid development of deep learning, the increasing complexity and scale of parameters make training a new model increasingly resource-intensive. In this paper, we start from the classic convolutional neural network (CNN) and explore a paradigm that does not require training to obtain new models. Similar to the birth of CNN inspired by receptive fields in the biological visual system, we draw inspiration from the information subsystem pathways in the biological visual system and propose Model Disassembling and Assembling (MDA). During model disassembling, we introduce the concept of relative contribution and propose a component locating technique to extract task-aware components from trained CNN classifiers. For model assembling, we present the alignment padding strategy and parameter scaling strategy to construct a new model tailored for a specific task, utilizing the disassembled task-aware components. The entire process is akin to playing with LEGO bricks, enabling arbitrary assembly of new models, and providing a novel perspective for model creation and reuse. Extensive experiments showcase that task-aware components disassembled from CNN classifiers or new models assembled using these components closely match or even surpass the performance of the baseline, demonstrating its promising results for model reuse. Furthermore, MDA exhibits diverse potential applications, with comprehensive experiments exploring model decision route analysis, model compression, knowledge distillation, and more.
Model LEGO: Creating Models Like Disassembling and Assembling Building Blocks
[ "Jiacong Hu", "Jing Gao", "Jingwen Ye", "Yang Gao", "Xingen Wang", "Zunlei Feng", "Mingli Song" ]
NeurIPS.cc/2024/Conference
2203.13453
[ "https://github.com/jiaconghu/model-lego" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nw9JmfL99s
@inproceedings{ lufkin2024nonlinear, title={Nonlinear dynamics of localization in neural receptive fields}, author={Leon Lufkin and Andrew M Saxe and Erin Grant}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nw9JmfL99s} }
Localized receptive fields—neurons that are selective for certain contiguous spatiotemporal features of their input—populate early sensory regions of the mammalian brain. Unsupervised learning algorithms that optimize explicit sparsity or independence criteria replicate features of these localized receptive fields, but fail to explain directly how localization arises through learning without efficient coding, as occurs in early layers of deep neural networks and might occur in early sensory regions of biological systems. We consider an alternative model in which localized receptive fields emerge without explicit top-down efficiency constraints—a feed-forward neural network trained on a data model inspired by the structure of natural images. Previous work identified the importance of non-Gaussian statistics to localization in this setting but left open questions about the mechanisms driving dynamical emergence. We address these questions by deriving the effective learning dynamics for a single nonlinear neuron, making precise how higher-order statistical properties of the input data drive emergent localization, and we demonstrate that the predictions of these effective dynamics extend to the many-neuron setting. Our analysis provides an alternative explanation for the ubiquity of localization as resulting from the nonlinear dynamics of learning in neural circuits
Nonlinear dynamics of localization in neural receptive fields
[ "Leon Lufkin", "Andrew M Saxe", "Erin Grant" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=nw8cXoNvep
@inproceedings{ lee2024d, title={3D Equivariant Pose Regression via Direct Wigner-D Harmonics Prediction}, author={Jongmin Lee and Minsu Cho}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nw8cXoNvep} }
Determining the 3D orientations of an object in an image, known as single-image pose estimation, is a crucial task in 3D vision applications. Existing methods typically learn 3D rotations parametrized in the spatial domain using Euler angles or quaternions, but these representations often introduce discontinuities and singularities. SO(3)-equivariant networks enable the structured capture of pose patterns with data-efficient learning, but the parametrizations in spatial domain are incompatible with their architecture, particularly spherical CNNs, which operate in the frequency domain to enhance computational efficiency. To overcome these issues, we propose a frequency-domain approach that directly predicts Wigner-D coefficients for 3D rotation regression, aligning with the operations of spherical CNNs. Our SO(3)-equivariant pose harmonics predictor overcomes the limitations of spatial parameterizations, ensuring consistent pose estimation under arbitrary rotations. Trained with a frequency-domain regression loss, our method achieves state-of-the-art results on benchmarks such as ModelNet10-SO(3) and PASCAL3D+, with significant improvements in accuracy, robustness, and data efficiency.
3D Equivariant Pose Regression via Direct Wigner-D Harmonics Prediction
[ "Jongmin Lee", "Minsu Cho" ]
NeurIPS.cc/2024/Conference
2411.00543
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nw6ANsC66G
@inproceedings{ weng2024probabilistic, title={Probabilistic Federated Prompt-Tuning with Non-{IID} and Imbalanced Data}, author={Pei-Yau Weng and Minh Hoang and Lam M. Nguyen and My T. Thai and Tsui-Wei Weng and Trong Nghia Hoang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nw6ANsC66G} }
Fine-tuning pre-trained models is a popular approach in machine learning for solving complex tasks with moderate data. However, fine-tuning the entire pre-trained model is ineffective in federated data scenarios where local data distributions are diversely skewed. To address this, we explore integrating federated learning with a more effective prompt-tuning method, optimizing for a small set of input prefixes to reprogram the pre-trained model's behavior. Our approach transforms federated learning into a distributed set modeling task, aggregating diverse sets of prompts to globally fine-tune the pre-trained model. We benchmark various baselines based on direct adaptations of existing federated model aggregation techniques and introduce a new probabilistic prompt aggregation method that substantially outperforms these baselines. Our reported results on a variety of computer vision datasets confirm that the proposed method is most effective to combat extreme data heterogeneity in federated learning.
Probabilistic Federated Prompt-Tuning with Non-IID and Imbalanced Data
[ "Pei-Yau Weng", "Minh Hoang", "Lam M. Nguyen", "My T. Thai", "Tsui-Wei Weng", "Trong Nghia Hoang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nw4TWuEPGx
@inproceedings{ bell2024discovering, title={Discovering plasticity rules that organize and maintain neural circuits}, author={David G Bell and Alison Duffy and Adrienne Fairhall}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nw4TWuEPGx} }
Intrinsic dynamics within the brain can accelerate learning by providing a prior scaffolding for dynamics aligned with task objectives. Such intrinsic dynamics would ideally self-organize and self-sustain in the face of biological noise including synaptic turnover and cell death. An example of such dynamics is the formation of sequences, a ubiquitous motif in neural activity. The sequence-generating circuit in zebra finch HVC provides a reliable timing scaffold for motor output in song and demonstrates a remarkable capacity for unsupervised recovery following perturbation. Inspired by HVC, we seek a local plasticity rule capable of organizing and maintaining sequence-generating dynamics despite continual network perturbations. We adopt a meta-learning approach introduced by Confavreux et al, which parameterizes a learning rule using basis functions constructed from pre- and postsynaptic activity and synapse size, with tunable time constants. Candidate rules are simulated within initially random networks, and their fitness is evaluated according to a loss function that measures the fidelity with which the resulting dynamics encode time. We use this approach to introduce biological noise, forcing meta-learning to find robust solutions. We first show that, in the absence of perturbations, meta-learning identifies a temporally asymmetric generalization of Oja's rule that reliably organizes sparse sequential activity. When synaptic turnover is introduced, the learned rule incorporates a form of homeostasis, better maintaining robust sequential dynamics relative to other previously proposed rules. Additionally, inspired by recent findings demonstrating that the strength of projections from inhibitory interneurons in HVC also dynamically responds to perturbations, we explore the role of inhibitory plasticity in sequence-generating circuits. We find that learned plasticity adjusts both excitation and inhibition in response to manipulations, outperforming rules applied only to excitatory connections. We demonstrate how plasticity acting on both excitatory and inhibitory synapses can better shape excitatory cell dynamics to scaffold timing representations.
Discovering plasticity rules that organize and maintain neural circuits
[ "David G Bell", "Alison Duffy", "Adrienne Fairhall" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nvn80cscVm
@inproceedings{ wei2024differank, title={Diff-eRank: A Novel Rank-Based Metric for Evaluating Large Language Models}, author={Lai Wei and Zhiquan Tan and Chenghai Li and Jindong Wang and Weiran Huang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nvn80cscVm} }
Large Language Models (LLMs) have transformed natural language processing and extended their powerful capabilities to multi-modal domains. As LLMs continue to advance, it is crucial to develop diverse and appropriate metrics for their evaluation. In this paper, we introduce a novel rank-based metric, Diff-eRank, grounded in information theory and geometry principles. Diff-eRank assesses LLMs by analyzing their hidden representations, providing a quantitative measure of how efficiently they eliminate redundant information during training. We demonstrate the applicability of Diff-eRank in both single-modal (e.g., language) and multi-modal settings. For language models, our results show that Diff-eRank increases with model size and correlates well with conventional metrics such as loss and accuracy. In the multi-modal context, we propose an alignment evaluation method based on the eRank, and verify that contemporary multi-modal LLMs exhibit strong alignment performance based on our method. Our code is publicly available at https://github.com/waltonfuture/Diff-eRank.
Diff-eRank: A Novel Rank-Based Metric for Evaluating Large Language Models
[ "Lai Wei", "Zhiquan Tan", "Chenghai Li", "Jindong Wang", "Weiran Huang" ]
NeurIPS.cc/2024/Conference
2401.17139
[ "https://github.com/waltonfuture/Diff-eRank" ]
https://huggingface.co/papers/2401.17139
2
2
3
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=nvYDPF4LJK
@inproceedings{ wu2024visionllm, title={Vision{LLM} v2: An End-to-End Generalist Multimodal Large Language Model for Hundreds of Vision-Language Tasks}, author={Jiannan Wu and Muyan Zhong and Sen Xing and Zeqiang Lai and Zhaoyang Liu and Zhe Chen and Wenhai Wang and Xizhou Zhu and Lewei Lu and Tong Lu and Ping Luo and Yu Qiao and Jifeng Dai}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nvYDPF4LJK} }
We present VisionLLM v2, an end-to-end generalist multimodal large model (MLLM) that unifies visual perception, understanding, and generation within a single framework. Unlike traditional MLLMs limited to text output, VisionLLM v2 significantly broadens its application scope. It excels not only in conventional visual question answering (VQA) but also in open-ended, cross-domain vision tasks such as object localization, pose estimation, and image generation and editing. To this end, we propose a new information transmission mechanism termed ``super link'', as a medium to connect MLLM with task-specific decoders. It not only allows flexible transmission of task information and gradient feedback between the MLLM and multiple downstream decoders but also effectively resolves training conflicts in multi-tasking scenarios. In addition, to support the diverse range of tasks, we carefully collected and combed training data from hundreds of public vision and vision-language tasks. In this way, our model can be joint-trained end-to-end on hundreds of vision language tasks and generalize to these tasks using a set of shared parameters through different user prompts, achieving performance comparable to task-specific models. We believe VisionLLM v2 will offer a new perspective on the generalization of MLLMs.
VisionLLM v2: An End-to-End Generalist Multimodal Large Language Model for Hundreds of Vision-Language Tasks
[ "Jiannan Wu", "Muyan Zhong", "Sen Xing", "Zeqiang Lai", "Zhaoyang Liu", "Zhe Chen", "Wenhai Wang", "Xizhou Zhu", "Lewei Lu", "Tong Lu", "Ping Luo", "Yu Qiao", "Jifeng Dai" ]
NeurIPS.cc/2024/Conference
2406.08394
[ "https://github.com/opengvlab/visionllm" ]
https://huggingface.co/papers/2406.08394
0
0
1
13
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=nv7ox1vd3q
@inproceedings{ kaushik2024precise, title={Precise asymptotics of reweighted least-squares algorithms for linear diagonal networks}, author={Chiraag Kaushik and Justin Romberg and Vidya Muthukumar}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nv7ox1vd3q} }
The classical iteratively reweighted least-squares (IRLS) algorithm aims to recover an unknown signal from linear measurements by performing a sequence of weighted least squares problems, where the weights are recursively updated at each step. Varieties of this algorithm have been shown to achieve favorable empirical performance and theoretical guarantees for sparse recovery and $\ell_p$-norm minimization. Recently, some preliminary connections have also been made between IRLS and certain types of non-convex linear neural network architectures that are observed to exploit low-dimensional structure in high-dimensional linear models. In this work, we provide a unified asymptotic analysis for a family of algorithms that encompasses IRLS, the recently proposed lin-RFM algorithm (which was motivated by feature learning in neural networks), and the alternating minimization algorithm on linear diagonal neural networks. Our analysis operates in a "batched" setting with i.i.d. Gaussian covariates and shows that, with appropriately chosen reweighting policy, the algorithm can achieve favorable performance in only a handful of iterations. We also extend our results to the case of group-sparse recovery and show that leveraging this structure in the reweighting scheme provably improves test error compared to coordinate-wise reweighting.
Precise asymptotics of reweighted least-squares algorithms for linear diagonal networks
[ "Chiraag Kaushik", "Justin Romberg", "Vidya Muthukumar" ]
NeurIPS.cc/2024/Conference
2406.02769
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nv2Qt5cj1a
@inproceedings{ li2024membership, title={Membership Inference Attacks against Large Vision-Language Models}, author={Zhan Li and Yongtao Wu and Yihang Chen and Francesco Tonin and Elias Abad Rocamora and Volkan Cevher}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nv2Qt5cj1a} }
Large vision-language models (VLLMs) exhibit promising capabilities for processing multi-modal tasks across various application scenarios. However, their emergence also raises significant data security concerns, given the potential inclusion of sensitive information, such as private photos and medical records, in their training datasets. Detecting inappropriately used data in VLLMs remains a critical and unresolved issue, mainly due to the lack of standardized datasets and suitable methodologies. In this study, we introduce the first membership inference attack (MIA) benchmark tailored for various VLLMs to facilitate training data detection. Then, we propose a novel MIA pipeline specifically designed for token-level image detection. Lastly, we present a new metric called MaxRényi-K%, which is based on the confidence of the model output and applies to both text and image data. We believe that our work can deepen the understanding and methodology of MIAs in the context of VLLMs. Our code and datasets are available at https://github.com/LIONS-EPFL/VL-MIA.
Membership Inference Attacks against Large Vision-Language Models
[ "Zhan Li", "Yongtao Wu", "Yihang Chen", "Francesco Tonin", "Elias Abad Rocamora", "Volkan Cevher" ]
NeurIPS.cc/2024/Conference
2411.02902
[ "https://github.com/lions-epfl/vl-mia" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nuZv2iTlvn
@inproceedings{ iyer2024noneuclidean, title={Non-Euclidean Mixture Model for Social Network Embedding}, author={Roshni Iyer and YEWEN WANG and Wei Wang and Yizhou Sun}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nuZv2iTlvn} }
It is largely agreed that social network links are formed due to either homophily or social influence. Inspired by this, we aim at understanding the generation of links via providing a novel embedding-based graph formation model. Different from existing graph representation learning, where link generation probabilities are defined as a simple function of the corresponding node embeddings, we model the link generation as a mixture model of the two factors. In addition, we model the homophily factor in spherical space and the influence factor in hyperbolic space to accommodate the fact that (1) homophily results in cycles and (2) influence results in hierarchies in networks. We also design a special projection to align these two spaces. We call this model Non-Euclidean Mixture Model, i.e., NMM. We further integrate NMM with our non-Euclidean graph variational autoencoder (VAE) framework, NMM-GNN. NMM-GNN learns embeddings through a unified framework which uses non-Euclidean GNN encoders, non-Euclidean Gaussian priors, a non-Euclidean decoder, and a novel space unification loss component to unify distinct non-Euclidean geometric spaces. Experiments on public datasets show NMM-GNN significantly outperforms state-of-the-art baselines on social network generation and classification tasks, demonstrating its ability to better explain how the social network is formed.
Non-Euclidean Mixture Model for Social Network Embedding
[ "Roshni Iyer", "YEWEN WANG", "Wei Wang", "Yizhou Sun" ]
NeurIPS.cc/2024/Conference
2411.04876
[ "https://github.com/roshnigiyer/nmm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ntlFREw59A
@inproceedings{ pan2024actanywhere, title={ActAnywhere: Subject-Aware Video Background Generation}, author={Boxiao Pan and Zhan Xu and Chun-Hao Paul Huang and Krishna Kumar Singh and Yang Zhou and Leonidas Guibas and Jimei Yang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ntlFREw59A} }
We study a novel problem to automatically generate video background that tailors to foreground subject motion. It is an important problem for the movie industry and visual effects community, which traditionally requires tedious manual efforts to solve. To this end, we propose ActAnywhere, a video diffusion model that takes as input a sequence of foreground subject segmentation and an image of a novel background and generates a video of the subject interacting in this background. We train our model on a large-scale dataset of 2.4M videos of human-scene interactions. Through extensive evaluation, we show that our model produces videos with realistic foreground-background interaction while strictly following the guidance of the condition image. Our model generalizes to diverse scenarios including non-human subjects, gaming and animation clips, as well as videos with multiple moving subjects. Both quantitative and qualitative comparisons demonstrate that our model significantly outperforms existing methods, which fail to accomplish the studied task. Please visit our project webpage at https://actanywhere.github.io.
ActAnywhere: Subject-Aware Video Background Generation
[ "Boxiao Pan", "Zhan Xu", "Chun-Hao Paul Huang", "Krishna Kumar Singh", "Yang Zhou", "Leonidas Guibas", "Jimei Yang" ]
NeurIPS.cc/2024/Conference
2401.10822
[ "" ]
https://huggingface.co/papers/2401.10822
3
13
1
7
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=ntV5xZfzEk
@inproceedings{ pr{\r{u}}{\v{s}}a2024constrained, title={Constrained Binary Decision Making}, author={Daniel Pr{\r{u}}{\v{s}}a and Vojtech Franc}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ntV5xZfzEk} }
Binary statistical decision making involves choosing between two states based on statistical evidence. The optimal decision strategy is typically formulated through a constrained optimization problem, where both the objective and constraints are expressed as integrals involving two Lebesgue measurable functions, one of which represents the strategy being optimized. In this work, we present a comprehensive formulation of the binary decision making problem and provide a detailed characterization of the optimal solution. Our framework encompasses a wide range of well-known and recently proposed decision making problems as specific cases. We demonstrate how our generic approach can be used to derive the optimal decision strategies for these diverse instances. Our results offer a robust mathematical tool that simplifies the process of solving both existing and novel formulations of binary decision making problems which are in the core of many Machine Learning algorithms.
Constrained Binary Decision Making
[ "Daniel Průša", "Vojtech Franc" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ntF7D8tAlQ
@inproceedings{ tan2024estimating, title={Estimating Generalization Performance Along the Trajectory of Proximal {SGD} in Robust Regression}, author={Kai Tan and Pierre C Bellec}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ntF7D8tAlQ} }
This paper studies the generalization performance of iterates obtained by Gradient Descent (GD), Stochastic Gradient Descent (SGD) and their proximal variants in high-dimensional robust regression problems. The number of features is comparable to the sample size and errors may be heavy-tailed. We introduce estimators that precisely track the generalization error of the iterates along the trajectory of the iterative algorithm. These estimators are provably consistent under suitable conditions. The results are illustrated through several examples, including Huber regression, pseudo-Huber regression, and their penalized variants with non-smooth regularizer. We provide explicit generalization error estimates for iterates generated from GD and SGD, or from proximal SGD in the presence of a non-smooth regularizer. The proposed risk estimates serve as effective proxies for the actual generalization error, allowing us to determine the optimal stopping iteration that minimizes the generalization error. Extensive simulations confirm the effectiveness of the proposed generalization error estimates.
Estimating Generalization Performance Along the Trajectory of Proximal SGD in Robust Regression
[ "Kai Tan", "Pierre C Bellec" ]
NeurIPS.cc/2024/Conference
2410.02629
[ "https://github.com/kaitan365/sgd-generlization-errors" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nrgyOGU7ZP
@inproceedings{ desai2024ss, title={{SS}1: Accelerating Inference with Fast and Expressive Sketch Structured Transform}, author={Aditya Desai and Kimia Saedi and Apoorv Walia and Jihyeong Lee and Keren Zhou and Anshumali Shrivastava}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nrgyOGU7ZP} }
Tensor multiplication with learned weight matrices is the fundamental building block in deep learning models. These matrices can often be sparsified, decomposed, quantized, or subjected to random parameter sharing without losing accuracy, suggesting the possibility of more efficient transforms. Although many variants of weight matrices exist, unstructured ones are incompatible with modern hardware, slowing inference and training. On the other hand, structured variants often limit expressivity or fail to deliver the promised latency benefits. We present Sketch Structured Transform (SS1), an expressive and GPU-friendly operator that accelerates inference. SS1 leverages parameter sharing in a random yet structured manner to reduce computation while retraining the rich expressive nature of parameter sharing. We confirm empirically that SS1 offers better quality-efficiency tradeoffs than competing variants. Interestingly SS1 can be combined with Quantization to achieve gains unattainable by either method alone, a finding we justify via theoretical analysis. The analysis may be of independent interest. Moreover, existing pre-trained models can be projected onto SS1 and finetuned for efficient deployment. Surprisingly, these projected models can perform reasonably well even without finetuning. Our experiments highlight various applications of the SS1: (a) Training GPT2 and DLRM models from scratch for faster inference. (b) Finetuning projected BERT models for 1.31× faster inference while maintaining GLUE scores. (c) Proof of concept with Llama-3-8b, showing 1.11× faster wall clock inference using projected SS1 layers without finetuning. We open source our code :https://github.com/apd10/Sketch-Structured-Linear/
SS1: Accelerating Inference with Fast and Expressive Sketch Structured Transform
[ "Aditya Desai", "Kimia Saedi", "Apoorv Walia", "Jihyeong Lee", "Keren Zhou", "Anshumali Shrivastava" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nqWaya7hiX
@inproceedings{ zhang2024wings, title={Wings: Learning Multimodal {LLM}s without Text-only Forgetting}, author={Yi-Kai Zhang and Shiyin Lu and Yang Li and YanQing Ma and Qing-Guo Chen and Zhao Xu and Weihua Luo and Kaifu Zhang and De-Chuan Zhan and Han-Jia Ye}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nqWaya7hiX} }
Multimodal large language models (MLLMs), initiated with a trained LLM, first align images with text and then fine-tune on multimodal mixed inputs. However, during the continued training, the MLLM catastrophically forgets the text-only instructions that the initial LLM masters. In this paper, we present Wings, a novel MLLM that excels in both text-only and multimodal instructions. By examining attention across layers of MLLM, we find that *text-only forgetting* is related to the attention shifts from pre-image to post-image text. From that, we construct an additional Low-Rank Residual Attention (LoRRA) block that acts as the "modality learner" to expand the learnable space and compensate for the attention shift. The complementary learners, like "wings" on either side, are connected in parallel to each layer's attention block. The LoRRA mirrors the structure of attention but utilizes low-rank connections to ensure efficiency. Initially, image and text inputs are aligned with visual learners operating alongside the main attention, balancing focus on visual elements. Later, textual learners are integrated with token-wise routing, blending the outputs of both modality learners collaboratively. Our experimental results demonstrate that Wings outperforms equally-scaled MLLMs in both text-only and visual question-answering tasks. Wings with *compensation of learners* addresses text-only forgetting during visual modality expansion in general MLLMs.
Wings: Learning Multimodal LLMs without Text-only Forgetting
[ "Yi-Kai Zhang", "Shiyin Lu", "Yang Li", "YanQing Ma", "Qing-Guo Chen", "Zhao Xu", "Weihua Luo", "Kaifu Zhang", "De-Chuan Zhan", "Han-Jia Ye" ]
NeurIPS.cc/2024/Conference
2406.03496
[ "" ]
https://huggingface.co/papers/2406.03496
0
0
0
10
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=npoHt6WV1F
@inproceedings{ sun2024neuralfuse, title={NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes}, author={Hao-Lun Sun and Lei Hsiung and Nandhini Chandramoorthy and Pin-Yu Chen and Tsung-Yi Ho}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=npoHt6WV1F} }
Deep neural networks (DNNs) have become ubiquitous in machine learning, but their energy consumption remains problematically high. An effective strategy for reducing such consumption is supply-voltage reduction, but if done too aggressively, it can lead to accuracy degradation. This is due to random bit-flips in static random access memory (SRAM), where model parameters are stored. To address this challenge, we have developed NeuralFuse, a novel add-on module that handles the energy-accuracy tradeoff in low-voltage regimes by learning input transformations and using them to generate error-resistant data representations, thereby protecting DNN accuracy in both nominal and low-voltage scenarios. As well as being easy to implement, NeuralFuse can be readily applied to DNNs with limited access, such cloud-based APIs that are accessed remotely or non-configurable hardware. Our experimental results demonstrate that, at a 1% bit-error rate, NeuralFuse can reduce SRAM access energy by up to 24% while recovering accuracy by up to 57%. To the best of our knowledge, this is the first approach to addressing low-voltage-induced bit errors that requires no model retraining.
NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes
[ "Hao-Lun Sun", "Lei Hsiung", "Nandhini Chandramoorthy", "Pin-Yu Chen", "Tsung-Yi Ho" ]
NeurIPS.cc/2024/Conference
2306.16869
[ "https://github.com/ibm/neuralfuse" ]
https://huggingface.co/papers/2306.16869
2
5
0
5
[]
[]
[ "TrustSafeAI/NeuralFuse" ]
[]
[]
[ "TrustSafeAI/NeuralFuse" ]
1
poster
null
https://openreview.net/forum?id=npJQ6qS4bg
@inproceedings{ he2024understanding, title={Understanding and Minimising Outlier Features in Transformer Training}, author={Bobby He and Lorenzo Noci and Daniele Paliotta and Imanol Schlag and Thomas Hofmann}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=npJQ6qS4bg} }
Outlier Features (OFs) are neurons whose activation magnitudes significantly exceed the average over a neural network's (NN) width. They are well known to emerge during standard transformer training and have the undesirable effect of hindering quantisation in afflicted models. Despite their practical importance, little is known behind why OFs emerge during training, nor how one can minimise them. Our work focuses on the above questions, first identifying several quantitative metrics, such as the kurtosis over neuron activation norms, to measure OFs. With these metrics, we study how architectural and optimisation choices influence OFs, and provide practical insights to minimise OFs during training. As highlights, we introduce a novel unnormalised transformer block, the Outlier Protected block, and present a previously unknown benefit of non-diagonal preconditioning optimisers, finding both approaches to significantly reduce OFs and improve quantisation without compromising convergence speed, at scales of up to 7B parameters. Notably, our combination of OP block and non-diagonal preconditioner (SOAP) achieves 14.87 weight-and-activation int8 perplexity (from 14.71 in standard precision), compared to 63.4 int8 perplexity (from 16.00) with a default OF-prone combination of Pre-Norm model and Adam, when quantising OPT-125m models post-training.
Understanding and Minimising Outlier Features in Transformer Training
[ "Bobby He", "Lorenzo Noci", "Daniele Paliotta", "Imanol Schlag", "Thomas Hofmann" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nmUkwoOHFO
@inproceedings{ doimo2024the, title={The Representation Landscape of Few-Shot Learning and Fine-Tuning in Large Language Models}, author={Diego Doimo and Alessandro Pietro Serra and Alessio ansuini and Alberto Cazzaniga}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nmUkwoOHFO} }
In-context learning (ICL) and supervised fine-tuning (SFT) are two common strategies for improving the performance of modern large language models (LLMs) on specific tasks. Despite their different natures, these strategies often lead to comparable performance gains. However, little is known about whether they induce similar representations inside LLMs. We approach this problem by analyzing the probability landscape of their hidden representations in the two cases. More specifically, we compare how LLMs solve the same question-answering task, finding that ICL and SFT create very different internal structures, in both cases undergoing a sharp transition in the middle of the network. In the first half of the network, ICL shapes interpretable representations hierarchically organized according to their semantic content. In contrast, the probability landscape obtained with SFT is fuzzier and semantically mixed. In the second half of the model, the fine-tuned representations develop probability modes that better encode the identity of answers, while less-defined peaks characterize the landscape of ICL representations. Our approach reveals the diverse computational strategies developed inside LLMs to solve the same task across different conditions, allowing us to make a step towards designing optimal methods to extract information from language models.
The Representation Landscape of Few-Shot Learning and Fine-Tuning in Large Language Models
[ "Diego Doimo", "Alessandro Pietro Serra", "Alessio ansuini", "Alberto Cazzaniga" ]
NeurIPS.cc/2024/Conference
2409.03662
[ "https://github.com/diegodoimo/geometry_icl_finetuning" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nkzSE5KkCA
@inproceedings{ ruan2024enhancing, title={Enhancing Motion in Text-to-Video Generation with Decomposed Encoding and Conditioning}, author={Penghui Ruan and Pichao WANG and Divya Saxena and Jiannong Cao and Yuhui Shi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nkzSE5KkCA} }
Despite advancements in Text-to-Video (T2V) generation, producing videos with realistic motion remains challenging. Current models often yield static or minimally dynamic outputs, failing to capture complex motions described by text. This issue stems from the internal biases in text encoding which overlooks motions, and inadequate conditioning mechanisms in T2V generation models. To address this, we propose a novel framework called DEcomposed MOtion (DEMO), which enhances motion synthesis in T2V generation by decomposing both text encoding and conditioning into content and motion components. Our method includes a content encoder for static elements and a motion encoder for temporal dynamics, alongside separate content and motion conditioning mechanisms. Crucially, we introduce text-motion and video-motion supervision to improve the model's understanding and generation of motion. Evaluations on benchmarks such as MSR-VTT, UCF-101, WebVid-10M, EvalCrafter, and VBench demonstrate DEMO's superior ability to produce videos with enhanced motion dynamics while maintaining high visual quality. Our approach significantly advances T2V generation by integrating comprehensive motion understanding directly from textual descriptions. Project page: https://PR-Ryan.github.io/DEMO-project/
Enhancing Motion in Text-to-Video Generation with Decomposed Encoding and Conditioning
[ "Penghui Ruan", "Pichao WANG", "Divya Saxena", "Jiannong Cao", "Yuhui Shi" ]
NeurIPS.cc/2024/Conference
2410.24219
[ "https://github.com/pr-ryan/demo" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nkwPiBSw1f
@inproceedings{ yang2024dualpersonalizing, title={Dual-Personalizing Adapter for Federated Foundation Models}, author={yiyuan yang and Guodong Long and Tao Shen and Jing Jiang and Michael Blumenstein}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nkwPiBSw1f} }
Recently, foundation models, particularly large language models (LLMs), have demonstrated an impressive ability to adapt to various tasks by fine-tuning diverse instruction data. Notably, federated foundation models (FedFM) emerge as a privacy preservation method to fine-tune models collaboratively under federated learning (FL) settings by leveraging many distributed datasets with non-IID data. To alleviate communication and computation overhead, parameter-efficient methods are introduced for efficiency, and some research adapted personalization methods to FedFM for better user preferences alignment. However, a critical gap in existing research is the neglect of test-time distribution shifts in real-world applications, and conventional methods for test-time distribution shifts in personalized FL are less effective for FedFM due to their failure to adapt to complex distribution shift scenarios and the requirement to train all parameters. To bridge this gap, we refine the setting in FedFM, termed test-time personalization, which aims to learn personalized federated foundation models on clients while effectively handling test-time distribution shifts simultaneously. To address challenges in this setting, we explore a simple yet effective solution, a Federated Dual-Personalizing Adapter (FedDPA) architecture. By co-working with a foundation model, a global adapter and a local adapter jointly tackle the test-time distribution shifts and client-specific personalization. Additionally, we introduce an instance-wise dynamic weighting mechanism that dynamically integrates the global and local adapters for each test instance during inference, facilitating effective test-time personalization. The effectiveness of the proposed method has been evaluated on benchmark datasets across different NLP tasks.
Dual-Personalizing Adapter for Federated Foundation Models
[ "yiyuan yang", "Guodong Long", "Tao Shen", "Jing Jiang", "Michael Blumenstein" ]
NeurIPS.cc/2024/Conference
2403.19211
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nkHEl4n0JU
@inproceedings{ zeng2024visual, title={Visual Fourier Prompt Tuning}, author={Runjia Zeng and Cheng Han and Qifan Wang and Chunshu Wu and Tong Geng and Lifu Huang and Ying Nian Wu and Dongfang Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nkHEl4n0JU} }
With the scale of vision Transformer-based models continuing to grow, finetuning these large-scale pretrained models for new tasks has become increasingly parameter-intensive. Visual prompt tuning is introduced as a parameter-efficient finetuning (PEFT) method to this trend. Despite its successes, a notable research challenge persists within almost all PEFT approaches: significant performance degradation is observed when there is a substantial disparity between the datasets applied in pretraining and finetuning phases. To address this challenge, we draw inspiration from human visual cognition, and propose the Visual Fourier Prompt Tuning (VFPT) method as a general and effective solution for adapting large-scale transformer-based models. Our approach innovatively incorporates the Fast Fourier Transform into prompt embeddings and harmoniously considers both spatial and frequency domain information. Apart from its inherent simplicity and intuitiveness, VFPT exhibits superior performance across all datasets, offering a general solution to dataset challenges, irrespective of data disparities. Empirical results demonstrate that our approach outperforms current state-of-the-art baselines on two benchmarks, with low parameter usage (e.g., 0.57% of model parameters on VTAB-1k) and notable performance enhancements (e.g., 73.20% of mean accuracy on VTAB-1k). Our code is avaliable at https://github.com/runtsang/VFPT.
Visual Fourier Prompt Tuning
[ "Runjia Zeng", "Cheng Han", "Qifan Wang", "Chunshu Wu", "Tong Geng", "Lifu Huang", "Ying Nian Wu", "Dongfang Liu" ]
NeurIPS.cc/2024/Conference
2411.01327
[ "https://github.com/runtsang/vfpt" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=njwYBFau8E
@inproceedings{ ahmed2024districtnet, title={DistrictNet: Decision-aware learning for geographical districting}, author={Cheikh Ahmed and Alexandre Forel and Axel Parmentier and Thibaut Vidal}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=njwYBFau8E} }
Districting is a complex combinatorial problem that consists in partitioning a geographical area into small districts. In logistics, it is a major strategic decision determining operating costs for several years. Solving districting problems using traditional methods is intractable even for small geographical areas and existing heuristics often provide sub-optimal results. We present a structured learning approach to find high-quality solutions to real-world districting problems in a few minutes. It is based on integrating a combinatorial optimization layer, the capacitated minimum spanning tree problem, into a graph neural network architecture. To train this pipeline in a decision-aware fashion, we show how to construct target solutions embedded in a suitable space and learn from target solutions. Experiments show that our approach outperforms existing methods as it can significantly reduce costs on real-world cities.
DistrictNet: Decision-aware learning for geographical districting
[ "Cheikh Ahmed", "Alexandre Forel", "Axel Parmentier", "Thibaut Vidal" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=njvPjG0BfK
@inproceedings{ cotnareanu2024hardcore, title={HardCore Generation: Generating Hard {UNSAT} Problems for Data Augmentation}, author={Joseph Cotnareanu and Zhanguang Zhang and Hui-Ling Zhen and Yingxue Zhang and Mark Coates}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=njvPjG0BfK} }
Efficiently determining the satisfiability of a boolean equation --- known as the SAT problem for brevity --- is crucial in various industrial problems. Recently, the advent of deep learning methods has introduced significant potential for enhancing SAT solving. However, a major barrier to the advancement of this field has been the scarcity of large, realistic datasets. The majority of current public datasets are either randomly generated or extremely limited, containing only a few examples from unrelated problem families. These datasets are inadequate for meaningful training of deep learning methods. In light of this, researchers have started exploring generative techniques to create data that more accurately reflect SAT problems encountered in practical situations. These methods have so far suffered from either the inability to produce challenging SAT problems or time-scalability obstacles. In this paper we address both by identifying and manipulating the key contributors to a problem's ``hardness'', known as cores. Although some previous work has addressed cores, the time costs are unacceptably high due to the expense of traditional heuristic core detection techniques. We introduce a fast core detection procedure that uses a graph neural network. Our empirical results demonstrate that we can efficiently generate problems that remain hard to solve and retain key attributes of the original example problems. We show via experiment that the generated synthetic SAT problems can be used in a data augmentation setting to provide improved prediction of solver runtimes.
HardCore Generation: Generating Hard UNSAT Problems for Data Augmentation
[ "Joseph Cotnareanu", "Zhanguang Zhang", "Hui-Ling Zhen", "Yingxue Zhang", "Mark Coates" ]
NeurIPS.cc/2024/Conference
2409.18778
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=niG3Yyb6oA
@inproceedings{ liu2024a, title={A Layer-Wise Natural Gradient Optimizer for Training Deep Neural Networks}, author={Xiaolei Liu and Shaoshuai Li and Kaixin Gao and Binfeng Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=niG3Yyb6oA} }
Second-order optimization algorithms, such as the Newton method and the natural gradient descent (NGD) method exhibit excellent convergence properties for training deep neural networks, but the high computational cost limits its practical application. In this paper, we focus on the NGD method and propose a novel layer-wise natural gradient descent (LNGD) method to further reduce computational costs and accelerate the training process. Specifically, based on the block diagonal approximation of the Fisher information matrix, we first propose the layer-wise sample method to compute each block matrix without performing a complete back-propagation. Then, each block matrix is approximated as a Kronecker product of two smaller matrices, one of which is a diagonal matrix, while keeping the traces equal before and after approximation. By these two steps, we provide a new approximation for the Fisher information matrix, which can effectively reduce the computational cost while preserving the main information of each block matrix. Moreover, we propose a new adaptive layer-wise learning rate to further accelerate training. Based on these new approaches, we propose the LNGD optimizer. The global convergence analysis of LNGD is established under some assumptions. Experiments on image classification and machine translation tasks show that our method is quite competitive compared to the state-of-the-art methods.
A Layer-Wise Natural Gradient Optimizer for Training Deep Neural Networks
[ "Xiaolei Liu", "Shaoshuai Li", "Kaixin Gao", "Binfeng Wang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ni3Ud2BV3G
@inproceedings{ chen2024on, title={On the Impacts of the Random Initialization in the Neural Tangent Kernel Theory}, author={Guhan Chen and Yicheng Li and Qian Lin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ni3Ud2BV3G} }
This paper aims to discuss the impact of random initialization of neural networks in the neural tangent kernel (NTK) theory, which is ignored by most recent works in the NTK theory. It is well known that as the network's width tends to infinity, the neural network with random initialization converges to a Gaussian process \(f^{\mathrm{GP}}\), which takes values in \(L^{2}(\mathcal{X})\), where \(\mathcal{X}\) is the domain of the data. In contrast, to adopt the traditional theory of kernel regression, most recent works introduced a special mirrored architecture and a mirrored (random) initialization to ensure the network's output is identically zero at initialization. Therefore, it remains a question whether the conventional setting and mirrored initialization would make wide neural networks exhibit different generalization capabilities. In this paper, we first show that the training dynamics of the gradient flow of neural networks with random initialization converge uniformly to that of the corresponding NTK regression with random initialization \(f^{\mathrm{GP}}\). We then show that \(\mathbf{P}(f^{\mathrm{GP}} \in [\mathcal{H}^{\mathrm{NT}}]^{s}) = 1\) for any \(s < \frac{3}{d+1}\) and \(\mathbf{P}(f^{\mathrm{GP}} \in [\mathcal{H}^{\mathrm{NT}}]^{s}) = 0\) for any \(s \geq \frac{3}{d+1}\), where \([\mathcal{H}^{\mathrm{NT}}]^{s}\) is the real interpolation space of the RKHS \(\mathcal{H}^{\mathrm{NT}}\) associated with the NTK. Consequently, the generalization error of the wide neural network trained by gradient descent is \(\Omega(n^{-\frac{3}{d+3}})\), and it still suffers from the curse of dimensionality. Thus, the NTK theory may not explain the superior performance of neural networks.
On the Impacts of the Random Initialization in the Neural Tangent Kernel Theory
[ "Guhan Chen", "Yicheng Li", "Qian Lin" ]
NeurIPS.cc/2024/Conference
2410.05626
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nge5deRsEH
@inproceedings{ gan2024on, title={On the Power of Decision Trees in Auto-Regressive Language Modeling}, author={Yulu Gan and Tomer Galanti and Tomaso A Poggio and eran malach}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nge5deRsEH} }
Originally proposed for handling time series data, Auto-regressive Decision Trees (ARDTs) have not yet been explored for language modeling. This paper delves into both the theoretical and practical applications of ARDTs in this new context. We theoretically demonstrate that ARDTs can compute complex functions, such as simulating automata, Turing machines, and sparse circuits, by leveraging "chain-of-thought" computations. Our analysis provides bounds on the size, depth, and computational efficiency of ARDTs, highlighting their surprising computational power. Empirically, we train ARDTs on simple language generation tasks, showing that they can learn to generate coherent and grammatically correct text on par with a smaller Transformer model. Additionally, we show that ARDTs can be used on top of transformer representations to solve complex reasoning tasks. This research reveals the unique computational abilities of ARDTs, aiming to broaden the architectural diversity in language model development.
On the Power of Decision Trees in Auto-Regressive Language Modeling
[ "Yulu Gan", "Tomer Galanti", "Tomaso A Poggio", "eran malach" ]
NeurIPS.cc/2024/Conference
2409.19150
[ "" ]
https://huggingface.co/papers/2409.19150
2
4
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=nfq3GKfb4h
@inproceedings{ peuter2024preference, title={Preference Learning of Latent Decision Utilities with a Human-like Model of Preferential Choice}, author={Sebastiaan De Peuter and Shibei Zhu and Yujia Guo and Andrew Howes and Samuel Kaski}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nfq3GKfb4h} }
Preference learning methods make use of models of human choice in order to infer the latent utilities that underlie human behaviour. However, accurate modeling of human choice behavior is challenging due to a range of context effects that arise from how humans contrast and evaluate options. Cognitive science has proposed several models that capture these intricacies but, due to their intractable nature, work on preference learning has, in practice, had to rely on tractable but simplified variants of the well-known Bradley-Terry model. In this paper, we take one state-of-the-art intractable cognitive model and propose a tractable surrogate that is suitable for deployment in preference learning. We then introduce a mechanism for fitting the surrogate to human data that cannot be explained by the original cognitive model. We demonstrate on large-scale human data that this model produces significantly better inferences on static and actively elicited data than existing Bradley-Terry variants. We further show in simulation that when using this model for preference learning, we can significantly improve a utility in a range of real-world tasks.
Preference Learning of Latent Decision Utilities with a Human-like Model of Preferential Choice
[ "Sebastiaan De Peuter", "Shibei Zhu", "Yujia Guo", "Andrew Howes", "Samuel Kaski" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nfK0ZXFFSn
@inproceedings{ du2024haloscope, title={HaloScope: Harnessing Unlabeled {LLM} Generations for Hallucination Detection}, author={Xuefeng Du and Chaowei Xiao and Yixuan Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nfK0ZXFFSn} }
The surge in applications of large language models (LLMs) has prompted concerns about the generation of misleading or fabricated information, known as hallucinations. Therefore, detecting hallucinations has become critical to maintaining trust in LLM-generated content. A primary challenge in learning a truthfulness classifier is the lack of a large amount of labeled truthful and hallucinated data. To address the challenge, we introduce HaloScope, a novel learning framework that leverages the unlabeled LLM generations in the wild for hallucination detection. Such unlabeled data arises freely upon deploying LLMs in the open world, and consists of both truthful and hallucinated information. To harness the unlabeled data, we present an automated scoring function for distinguishing between truthful and untruthful generations within unlabeled mixture data, thereby enabling the training of a binary classifier on top. Importantly, our framework does not require extra data collection and human annotations, offering strong flexibility and practicality for real-world applications. Extensive experiments show that HaloScope can achieve superior hallucination detection performance, outperforming the competitive rivals by a significant margin.
HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection
[ "Xuefeng Du", "Chaowei Xiao", "Yixuan Li" ]
NeurIPS.cc/2024/Conference
2409.17504
[ "" ]
https://huggingface.co/papers/2409.17504
1
0
0
3
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=ndoeHX1Acq
@inproceedings{ wang2024one, title={One for All: Multi-Domain Joint Training for Point Cloud Based 3D Object Detection}, author={Zhenyu Wang and Ya-Li Li and Hengshuang Zhao and Shengjin Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ndoeHX1Acq} }
The current trend in computer vision is to utilize one universal model to address all various tasks. Achieving such a universal model inevitably requires incorporating multi-domain data for joint training to learn across multiple problem scenarios. In point cloud based 3D object detection, however, such multi-domain joint training is highly challenging, because large domain gaps among point clouds from different datasets lead to the severe domain-interference problem. In this paper, we propose OneDet3D, a universal one-for-all model that addresses 3D detection across different domains, including diverse indoor and outdoor scenes, within the same framework and only one set of parameters. We propose the domain-aware partitioning in scatter and context, guided by a routing mechanism, to address the data interference issue, and further incorporate the text modality for a language-guided classification to unify the multi-dataset label spaces and mitigate the category interference issue. The fully sparse structure and anchor-free head further accommodate point clouds with significant scale disparities. Extensive experiments demonstrate the strong universal ability of OneDet3D to utilize only one trained model for addressing almost all 3D object detection tasks (Fig. 1). We will open-source the code for future research and applications.
One for All: Multi-Domain Joint Training for Point Cloud Based 3D Object Detection
[ "Zhenyu Wang", "Ya-Li Li", "Hengshuang Zhao", "Shengjin Wang" ]
NeurIPS.cc/2024/Conference
2411.01584
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nd8Q4a8aWl
@inproceedings{ kamkari2024a, title={A Geometric View of Data Complexity: Efficient Local Intrinsic Dimension Estimation with Diffusion Models}, author={Hamidreza Kamkari and Brendan Leigh Ross and Rasa Hosseinzadeh and Jesse C. Cresswell and Gabriel Loaiza-Ganem}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nd8Q4a8aWl} }
High-dimensional data commonly lies on low-dimensional submanifolds, and estimating the local intrinsic dimension (LID) of a datum -- i.e. the dimension of the submanifold it belongs to -- is a longstanding problem. LID can be understood as the number of local factors of variation: the more factors of variation a datum has, the more complex it tends to be. Estimating this quantity has proven useful in contexts ranging from generalization in neural networks to detection of out-of-distribution data, adversarial examples, and AI-generated text. The recent successes of deep generative models present an opportunity to leverage them for LID estimation, but current methods based on generative models produce inaccurate estimates, require more than a single pre-trained model, are computationally intensive, or do not exploit the best available deep generative models: diffusion models (DMs). In this work, we show that the Fokker-Planck equation associated with a DM can provide an LID estimator which addresses the aforementioned deficiencies. Our estimator, called FLIPD, is easy to implement and compatible with all popular DMs. Applying FLIPD to synthetic LID estimation benchmarks, we find that DMs implemented as fully-connected networks are highly effective LID estimators that outperform existing baselines. We also apply FLIPD to natural images where the true LID is unknown. Despite being sensitive to the choice of network architecture, FLIPD estimates remain a useful measure of relative complexity; compared to competing estimators, FLIPD exhibits a consistently higher correlation with image PNG compression rate and better aligns with qualitative assessments of complexity. Notably, FLIPD is orders of magnitude faster than other LID estimators, and the first to be tractable at the scale of Stable Diffusion.
A Geometric View of Data Complexity: Efficient Local Intrinsic Dimension Estimation with Diffusion Models
[ "Hamidreza Kamkari", "Brendan Leigh Ross", "Rasa Hosseinzadeh", "Jesse C. Cresswell", "Gabriel Loaiza-Ganem" ]
NeurIPS.cc/2024/Conference
2406.03537
[ "https://github.com/layer6ai-labs/dgm_geometry" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=ncqauwSyl5
@inproceedings{ wang2024neural, title={Neural P\${\textasciicircum}3\$M: A Long-Range Interaction Modeling Enhancer for Geometric {GNN}s}, author={Yusong Wang and Chaoran Cheng and Shaoning Li and Yuxuan Ren and Bin Shao and Ge Liu and Pheng-Ann Heng and Nanning Zheng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ncqauwSyl5} }
Geometric graph neural networks (GNNs) have emerged as powerful tools for modeling molecular geometry. However, they encounter limitations in effectively capturing long-range interactions in large molecular systems. To address this challenge, we introduce **Neural P$^3$M**, a versatile enhancer of geometric GNNs to expand the scope of their capabilities by incorporating mesh points alongside atoms and reimaging traditional mathematical operations in a trainable manner. Neural P$^3$M exhibits flexibility across a wide range of molecular systems and demonstrates remarkable accuracy in predicting energies and forces, outperforming on benchmarks such as the MD22 dataset. It also achieves an average improvement of 22% on the OE62 dataset while integrating with various architectures. Codes are available at https://github.com/OnlyLoveKFC/Neural_P3M.
Neural P^3M: A Long-Range Interaction Modeling Enhancer for Geometric GNNs
[ "Yusong Wang", "Chaoran Cheng", "Shaoning Li", "Yuxuan Ren", "Bin Shao", "Ge Liu", "Pheng-Ann Heng", "Nanning Zheng" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ncYGjx2vnE
@inproceedings{ behrouz2024chimera, title={Chimera: Effectively Modeling Multivariate Time Series with 2-Dimensional State Space Models}, author={Ali Behrouz and Michele Santacatterina and Ramin Zabih}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ncYGjx2vnE} }
Modeling multivariate time series is a well-established problem with a wide range of applications from healthcare to financial markets. It, however, is challenging as it requires methods to (1) have high expressive power of representing complicated dependencies along the time axis to capture both long-term progression and seasonal patterns, (2) capture the inter-variate dependencies when it is informative, (3) dynamically model the dependencies of variate and time dimensions, and (4) have efficient training and inference for very long sequences. Traditional State Space Models (SSMs) are classical approaches for univariate time series modeling due to their simplicity and expressive power to represent linear dependencies. They, however, have fundamentally limited expressive power to capture non-linear dependencies, are slow in practice, and fail to model the inter-variate information flow. Despite recent attempts to improve the expressive power of SSMs by using deep structured SSMs, the existing methods are either limited to univariate time series, fail to model complex patterns (e.g., seasonal patterns), fail to dynamically model the dependencies of variate and time dimensions, and/or are input-independent. We present Chimera, an expressive variation of the 2-dimensional SSMs with careful design of parameters to maintain high expressive power while keeping the training complexity linear. Using two SSM heads with different discretization processes and input-dependent parameters, Chimera is provably able to learn long-term progression, seasonal patterns, and desirable dynamic autoregressive processes. To improve the efficiency of complex 2D recurrence, we present a fast training using a new 2-dimensional parallel selective scan. Our experimental evaluation shows the superior performance of Chimera on extensive and diverse benchmarks, including ECG and speech time series classification, long-term and short-term time series forecasting, and time series anomaly detection.
Chimera: Effectively Modeling Multivariate Time Series with 2-Dimensional State Space Models
[ "Ali Behrouz", "Michele Santacatterina", "Ramin Zabih" ]
NeurIPS.cc/2024/Conference
2406.04320
[ "" ]
https://huggingface.co/papers/2406.04320
1
7
1
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=nbqvjkOs6S
@inproceedings{ hong2024gradientfree, title={Gradient-free Decoder Inversion in Latent Diffusion Models}, author={Seongmin Hong and Suh Yoon Jeon and Kyeonghyun Lee and Ernest K. Ryu and Se Young Chun}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nbqvjkOs6S} }
In latent diffusion models (LDMs), denoising diffusion process efficiently takes place on latent space whose dimension is lower than that of pixel space. Decoder is typically used to transform the representation in latent space to that in pixel space. While a decoder is assumed to have an encoder as an accurate inverse, exact encoder-decoder pair rarely exists in practice even though applications often require precise inversion of decoder. In other words, encoder is not the left-inverse but the right-inverse of the decoder; decoder inversion seeks the left-inverse. Prior works for decoder inversion in LDMs employed gradient descent inspired by inversions of generative adversarial networks. However, gradient-based methods require larger GPU memory and longer computation time for larger latent space. For example, recent video LDMs can generate more than 16 frames, but GPUs with 24 GB memory can only perform gradient-based decoder inversion for 4 frames. Here, we propose an efficient gradient-free decoder inversion for LDMs, which can be applied to diverse latent models. Theoretical convergence property of our proposed inversion has been investigated not only for the forward step method, but also for the inertial Krasnoselskii-Mann (KM) iterations under mild assumption on cocoercivity that is satisfied by recent LDMs. Our proposed gradient-free method with Adam optimizer and learning rate scheduling significantly reduced computation time and memory usage over prior gradient-based methods and enabled efficient computation in applications such as noise-space watermarking and background-preserving image editing while achieving comparable error levels.
Gradient-free Decoder Inversion in Latent Diffusion Models
[ "Seongmin Hong", "Suh Yoon Jeon", "Kyeonghyun Lee", "Ernest K. Ryu", "Se Young Chun" ]
NeurIPS.cc/2024/Conference
2409.18442
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nZB1FpXUU6
@inproceedings{ tan2024implicit, title={Implicit Curriculum in Procgen Made Explicit}, author={Zhenxiong Tan and Kaixin Wang and Xinchao Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nZB1FpXUU6} }
Procedurally generated environments such as Procgen Benchmark provide a testbed for evaluating the agent's ability to robustly learn a relevant skill, by situating the agent in ever-changing levels. The diverse levels associated with varying contexts are naturally connected to curriculum learning. Existing works mainly focus on arranging the levels to explicitly form a curriculum. In this work, we take a close look at the learning process itself under the multi-level training in Procgen. Interestingly, the learning process exhibits a gradual shift from easy contexts to hard contexts, suggesting an implicit curriculum in multi-level training. Our analysis is made possible through C-Procgen, a benchmark we build upon Procgen that enables explicit control of the contexts. We believe our findings will foster a deeper understanding of learning in diverse contexts, and our benchmark will benefit future research in curriculum reinforcement learning.
Implicit Curriculum in Procgen Made Explicit
[ "Zhenxiong Tan", "Kaixin Wang", "Xinchao Wang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=nY7fGtsspU
@inproceedings{ epping2024graph, title={Graph Neural Networks Do Not Always Oversmooth}, author={Bastian Epping and Alexandre Ren{\'e} and Moritz Helias and Michael T Schaub}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nY7fGtsspU} }
Graph neural networks (GNNs) have emerged as powerful tools for processing relational data in applications. However, GNNs suffer from the problem of oversmoothing, the property that features of all nodes exponentially converge to the same vector over layers, prohibiting the design of deep GNNs. In this work we study oversmoothing in graph convolutional networks (GCNs) by using their Gaussian process (GP) equivalence in the limit of infinitely many hidden features. By generalizing methods from conventional deep neural networks (DNNs), we can describe the distribution of features at the output layer of deep GCNs in terms of a GP: as expected, we find that typical parameter choices from the literature lead to oversmoothing. The theory, however, allows us to identify a new, non-oversmoothing phase: if the initial weights of the network have sufficiently large variance, GCNs do not oversmooth, and node features remain informative even at large depth. We demonstrate the validity of this prediction in finite-size GCNs by training a linear classifier on their output. Moreover, using the linearization of the GCN GP, we generalize the concept of propagation depth of information from DNNs to GCNs. This propagation depth diverges at the transition between the oversmoothing and non-oversmoothing phase. We test the predictions of our approach and find good agreement with finite-size GCNs. Initializing GCNs near the transition to the non-oversmoothing phase, we obtain networks which are both deep and expressive.
Graph Neural Networks Do Not Always Oversmooth
[ "Bastian Epping", "Alexandre René", "Moritz Helias", "Michael T Schaub" ]
NeurIPS.cc/2024/Conference
2406.02269
[ "https://github.com/bepping/non-oversmoothing-gcns" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nY0BrZdqLt
@inproceedings{ varun2024timereversal, title={Time-Reversal Provides Unsupervised Feedback to {LLM}s}, author={Yerram Varun and Rahul Madhavan and Sravanti Addepalli and Arun Suggala and Karthikeyan Shanmugam and Prateek Jain}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nY0BrZdqLt} }
Large Language Models (LLMs) are typically trained to predict in the forward direction of time. However, recent works have shown that prompting these models to look back and critique their own generations can produce useful feedback. Motivated by this, we explore the question of whether LLMs can be empowered to think (predict and score) backwards to provide unsupervised feedback that complements forward LLMs. Towards this, we introduce Time Reversed Language Models (TRLMs), which can score and generate queries when conditioned on responses, effectively functioning in the reverse direction of time. Further, to effectively infer in the response to query direction, we pre-train and fine-tune a language model (TRLM-Ba) in the reverse token order from scratch. We show empirically (and theoretically in a stylized setting) that time-reversed models can indeed complement forward model predictions when used to score the query given response for re-ranking multiple forward generations. We obtain up to 5\% improvement on the widely used AlpacaEval Leaderboard over the competent baseline of best-of-N re-ranking using self log-perplexity scores. We further show that TRLM scoring outperforms conventional forward scoring of response given query, resulting in significant gains in applications such as citation generation and passage retrieval. We next leverage the generative ability of TRLM to augment or provide unsupervised feedback to input safety filters of LLMs, demonstrating a drastic reduction in false negative rate with negligible impact on false positive rates against several attacks published on the popular JailbreakBench leaderboard.
Time-Reversal Provides Unsupervised Feedback to LLMs
[ "Yerram Varun", "Rahul Madhavan", "Sravanti Addepalli", "Arun Suggala", "Karthikeyan Shanmugam", "Prateek Jain" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=nXYedmTf1T
@inproceedings{ zhou2024calibrated, title={Calibrated Self-Rewarding Vision Language Models}, author={Yiyang Zhou and Zhiyuan Fan and Dongjie Cheng and Sihan Yang and Zhaorun Chen and Chenhang Cui and Xiyao Wang and Yun Li and Linjun Zhang and Huaxiu Yao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nXYedmTf1T} }
Large Vision-Language Models (LVLMs) have made substantial progress by integrating pre-trained large language models (LLMs) and vision models through instruction tuning. Despite these advancements, LVLMs often exhibit the hallucination phenomenon, where generated text responses appear linguistically plausible but contradict the input image, indicating a misalignment between image and text pairs. This misalignment arises because the model tends to prioritize textual information over visual input, even when both the language model and visual representations are of high quality. Existing methods leverage additional models or human annotations to curate preference data and enhance modality alignment through preference optimization. These approaches are resource-intensive and may not effectively reflect the target LVLM's preferences, making the curated preferences easily distinguishable. Our work addresses these challenges by proposing the Calibrated Self-Rewarding (CSR) approach, which enables the model to self-improve by iteratively generating candidate responses, evaluating the reward for each response, and curating preference data for fine-tuning. In the reward modeling, we employ a step-wise strategy and incorporate visual constraints into the self-rewarding process to place greater emphasis on visual input. Empirical results demonstrate that CSR significantly enhances performance and reduces hallucinations across twelve benchmarks and tasks, achieving substantial improvements over existing methods by 7.62\%. Our empirical results are further supported by rigorous theoretical analysis, under mild assumptions, verifying the effectiveness of introducing visual constraints into the self-rewarding paradigm. Additionally, CSR shows compatibility with different vision-language models and the ability to incrementally improve performance through iterative fine-tuning.
Calibrated Self-Rewarding Vision Language Models
[ "Yiyang Zhou", "Zhiyuan Fan", "Dongjie Cheng", "Sihan Yang", "Zhaorun Chen", "Chenhang Cui", "Xiyao Wang", "Yun Li", "Linjun Zhang", "Huaxiu Yao" ]
NeurIPS.cc/2024/Conference
2405.14622
[ "https://github.com/yiyangzhou/csr" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nXXwYsARXB
@inproceedings{ singh2024a, title={A hierarchical decomposition for explaining {ML} performance discrepancies}, author={Harvineet Singh and Fan Xia and Adarsh Subbaswamy and Alexej Gossmann and Jean Feng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nXXwYsARXB} }
Machine learning (ML) algorithms can often differ in performance across domains. Understanding why their performance differs is crucial for determining what types of interventions (e.g., algorithmic or operational) are most effective at closing the performance gaps. Aggregate decompositions express the total performance gap as the gap due to a shift in the feature distribution $p(X)$ plus the gap due to a shift in the outcome's conditional distribution $p(Y|X)$. While this coarse explanation is helpful for guiding root cause analyses, it provides limited details and can only suggest coarse fixes involving all variables in an ML system. Detailed decompositions quantify the importance of each variable to each term in the aggregate decomposition, which can provide a deeper understanding and suggest more targeted interventions. Although parametric methods exist for conducting a full hierarchical decomposition of an algorithm's performance gap at the aggregate and detailed levels, current nonparametric methods only cover parts of the hierarchy; many also require knowledge of the entire causal graph. We introduce a nonparametric hierarchical framework for explaining why the performance of an ML algorithm differs across domains, without requiring causal knowledge. Furthermore, we derive debiased, computationally-efficient estimators and statistical inference procedures to construct confidence intervals for the explanations.
A hierarchical decomposition for explaining ML performance discrepancies
[ "Harvineet Singh", "Fan Xia", "Adarsh Subbaswamy", "Alexej Gossmann", "Jean Feng" ]
NeurIPS.cc/2024/Conference
2402.14254
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nXEzW3gVZ6
@inproceedings{ genans2024semidiscrete, title={Semi-Discrete Optimal Transport: Nearly Minimax Estimation With Stochastic Gradient Descent and Adaptive Entropic Regularization}, author={Ferdinand Genans and Antoine Godichon-Baggioni and Fran{\c{c}}ois-Xavier Vialard and Olivier Wintenberger}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nXEzW3gVZ6} }
Optimal Transport (OT) based distances are powerful tools for machine learning to compare probability measures and manipulate them using OT maps. In this field, a setting of interest is semi-discrete OT, where the source measure $\mu$ is continuous, while the target $\nu$ is discrete. Recent works have shown that the minimax rate for the OT map is $\mathcal{O}(t^{-1/2})$ when using $t$ i.i.d. subsamples from each measure (two-sample setting). An open question is whether a better convergence rate can be achieved when the full information of the discrete measure $\nu$ is known (one-sample setting). In this work, we answer positively to this question by (i) proving an $\mathcal{O}(t^{-1})$ lower bound rate for the OT map, using the similarity between Laguerre cells estimation and density support estimation, and (ii) proposing a Stochastic Gradient Descent (SGD) algorithm with adaptive entropic regularization and averaging acceleration. To nearly achieve the desired fast rate, characteristic of non-regular parametric problems, we design an entropic regularization scheme decreasing with the number of samples. Another key step in our algorithm consists of using a projection step that permits to leverage the local strong convexity of the regularized OT problem. Our convergence analysis integrates online convex optimization and stochastic gradient techniques, complemented by the specificities of the OT semi-dual. Moreover, while being as computationally and memory efficient as vanilla SGD, our algorithm achieves the unusual fast rates of our theory in numerical experiments.
Semi-Discrete Optimal Transport: Nearly Minimax Estimation With Stochastic Gradient Descent and Adaptive Entropic Regularization
[ "Ferdinand Genans", "Antoine Godichon-Baggioni", "François-Xavier Vialard", "Olivier Wintenberger" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nWMqQHzI3W
@inproceedings{ zhang2024seev, title={{SEEV}: Synthesis with Efficient Exact Verification for Re{LU} Neural Barrier Functions}, author={Hongchao Zhang and Zhizhen Qin and Sicun Gao and Andrew Clark}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nWMqQHzI3W} }
Neural Control Barrier Functions (NCBFs) have shown significant promise in enforcing safety constraints on nonlinear autonomous systems. State-of-the-art exact approaches to verifying safety of NCBF-based controllers exploit the piecewise-linear structure of ReLU neural networks, however, such approaches still rely on enumerating all of the activation regions of the network near the safety boundary, thus incurring high computation cost. In this paper, we propose a framework for Synthesis with Efficient Exact Verification (SEEV). Our framework consists of two components, namely (i) an NCBF synthesis algorithm that introduces a novel regularizer to reduce the number of activation regions at the safety boundary, and (ii) a verification algorithm that exploits tight over-approximations of the safety conditions to reduce the cost of verifying each piecewise-linear segment. Our simulations show that SEEV significantly improves verification efficiency while maintaining the CBF quality across various benchmark systems and neural network structures. Our code is available at https://github.com/HongchaoZhang-HZ/SEEV.
SEEV: Synthesis with Efficient Exact Verification for ReLU Neural Barrier Functions
[ "Hongchao Zhang", "Zhizhen Qin", "Sicun Gao", "Andrew Clark" ]
NeurIPS.cc/2024/Conference
2410.20326
[ "https://github.com/hongchaozhang-hz/seev" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nU4lvlMwrt
@inproceedings{ wang2024toward, title={Toward Real Ultra Image Segmentation: Leveraging Surrounding Context to Cultivate General Segmentation Model}, author={Sai Wang and Yutian Lin and Yu Wu and Bo Du}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nU4lvlMwrt} }
Existing ultra image segmentation methods suffer from two major challenges, namely the scalability issue (i.e. they lack the stability and generality of standard segmentation models, as they are tailored to specific datasets), and the architectural issue (i.e. they are incompatible with real-world ultra image scenes, as they compromise between image size and computing resources). To tackle these issues, we revisit the classic sliding inference framework, upon which we propose a Surrounding Guided Segmentation framework (SGNet) for ultra image segmentation. The SGNet leverages a larger area around each image patch to refine the general segmentation results of local patches. Specifically, we propose a surrounding context integration module to absorb surrounding context information and extract specific features that are beneficial to local patches. Note that, SGNet can be seamlessly integrated to any general segmentation model. Extensive experiments on five datasets demonstrate that SGNet achieves competitive performance and consistent improvements across a variety of general segmentation models, surpassing the traditional ultra image segmentation methods by a large margin.
Toward Real Ultra Image Segmentation: Leveraging Surrounding Context to Cultivate General Segmentation Model
[ "Sai Wang", "Yutian Lin", "Yu Wu", "Bo Du" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nTJeOXlWyV
@inproceedings{ cheng2024rtify, title={{RT}ify: Aligning Deep Neural Networks with Human Behavioral Decisions}, author={Yu-Ang Cheng and Ivan F Rodriguez Rodriguez and Sixuan Chen and Kohitij Kar and Takeo Watanabe and Thomas Serre}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nTJeOXlWyV} }
Current neural network models of primate vision focus on replicating overall levels of behavioral accuracy, often neglecting perceptual decisions' rich, dynamic nature. Here, we introduce a novel computational framework to model the dynamics of human behavioral choices by learning to align the temporal dynamics of a recurrent neural network (RNN) to human reaction times (RTs). We describe an approximation that allows us to constrain the number of time steps an RNN takes to solve a task with human RTs. The approach is extensively evaluated against various psychophysics experiments. We also show that the approximation can be used to optimize an ``ideal-observer'' RNN model to achieve an optimal tradeoff between speed and accuracy without human data. The resulting model is found to account well for human RT data. Finally, we use the approximation to train a deep learning implementation of the popular Wong-Wang decision-making model. The model is integrated with a convolutional neural network (CNN) model of visual processing and evaluated using both artificial and natural image stimuli. Overall, we present a novel framework that helps align current vision models with human behavior, bringing us closer to an integrated model of human vision.
RTify: Aligning Deep Neural Networks with Human Behavioral Decisions
[ "Yu-Ang Cheng", "Ivan F Rodriguez Rodriguez", "Sixuan Chen", "Kohitij Kar", "Takeo Watanabe", "Thomas Serre" ]
NeurIPS.cc/2024/Conference
2411.03630
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nRp0XhTf61
@inproceedings{ dong2024internlmxcomposerkhd, title={Intern{LM}-{XC}omposer2-4{KHD}: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K {HD}}, author={Xiaoyi Dong and Pan Zhang and Yuhang Zang and Yuhang Cao and Bin Wang and Linke Ouyang and Songyang Zhang and Haodong Duan and Wenwei Zhang and Yining Li and Hang Yan and Yang Gao and Zhe Chen and xinyue zhang and Wei Li and Li Jingwen and Wenhai Wang and Kai Chen and Conghui He and Xingcheng ZHANG and Jifeng Dai and Yu Qiao and Dahua Lin and Jiaqi Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nRp0XhTf61} }
The Large Vision-Language Model (LVLM) field has seen significant advancements, yet its progression has been hindered by challenges in comprehending fine-grained visual content due to limited resolution. Recent efforts have aimed to enhance the high-resolution understanding capabilities of LVLMs, yet they remain capped at approximately 1500 $\times$ 1500 pixels and constrained to a relatively narrow resolution range. This paper represents InternLM-XComposer2-4KHD, a groundbreaking exploration into elevating LVLM resolution capabilities up to 4K HD (3840 × 1600) and beyond. Concurrently, considering the ultra-high resolution may not be necessary in all scenarios, it supports a wide range of diverse resolutions from 336 pixels to 4K standard, significantly broadening its scope of applicability. Specifically, this research advances the patch division paradigm by introducing a novel extension: dynamic resolution with automatic patch configuration. It maintains the training image aspect ratios while automatically varying patch counts and configuring layouts based on a pre-trained Vision Transformer (ViT) (336 $\times$ 336), leading to dynamic training resolution from 336 pixels to 4K standard. Our research demonstrates that scaling training resolution up to 4K HD leads to consistent performance enhancements without hitting the ceiling of potential improvements. InternLM-XComposer2-4KHD shows superb capability that matches or even surpasses GPT-4V and Gemini Pro in 10 of the 16 benchmarks.
InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD
[ "Xiaoyi Dong", "Pan Zhang", "Yuhang Zang", "Yuhang Cao", "Bin Wang", "Linke Ouyang", "Songyang Zhang", "Haodong Duan", "Wenwei Zhang", "Yining Li", "Hang Yan", "Yang Gao", "Zhe Chen", "xinyue zhang", "Wei Li", "Li Jingwen", "Wenhai Wang", "Kai Chen", "Conghui He", "Xingcheng ZHANG", "Jifeng Dai", "Yu Qiao", "Dahua Lin", "Jiaqi Wang" ]
NeurIPS.cc/2024/Conference
2404.06512
[ "https://github.com/internlm/internlm-xcomposer" ]
https://huggingface.co/papers/2404.06512
9
29
1
24
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=nRdST1qifJ
@inproceedings{ mo2024fight, title={Fight Back Against Jailbreaking via Prompt Adversarial Tuning}, author={Yichuan Mo and Yuji Wang and Zeming Wei and Yisen Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nRdST1qifJ} }
While Large Language Models (LLMs) have achieved tremendous success in various applications, they are also susceptible to jailbreaking attacks. Several primary defense strategies have been proposed to protect LLMs from producing harmful information, mostly focusing on model fine-tuning or heuristical defense designs. However, how to achieve intrinsic robustness through prompt optimization remains an open problem. In this paper, motivated by adversarial training paradigms for achieving reliable robustness, we propose an approach named **Prompt Adversarial Tuning (PAT)** that trains a prompt control attached to the user prompt as a guard prefix. To achieve our defense goal whilst maintaining natural performance, we optimize the control prompt with both adversarial and benign prompts. Comprehensive experiments show that our method is effective against both grey-box and black-box attacks, reducing the success rate of advanced attacks to nearly 0, while maintaining the model's utility on the benign task and incurring only negligible computational overhead, charting a new perspective for future explorations in LLM security. Our code is available at https://github.com/PKU-ML/PAT.
Fight Back Against Jailbreaking via Prompt Adversarial Tuning
[ "Yichuan Mo", "Yuji Wang", "Zeming Wei", "Yisen Wang" ]
NeurIPS.cc/2024/Conference
2402.06255
[ "https://github.com/rain152/PAT" ]
https://huggingface.co/papers/2402.06255
0
0
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=nRRJsDahEg
@inproceedings{ zhang2024towards, title={Towards a ''Universal Translator'' for Neural Dynamics at Single-Cell, Single-Spike Resolution}, author={Yizi Zhang and Yanchen Wang and Donato M. Jim{\'e}nez-Benet{\'o} and Zixuan Wang and Mehdi Azabou and Blake Aaron Richards and Renee Tung and Olivier Winter and International Brain Laboratory and Eva L Dyer and Liam Paninski and Cole Lincoln Hurwitz}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nRRJsDahEg} }
Neuroscience research has made immense progress over the last decade, but our understanding of the brain remains fragmented and piecemeal: the dream of probing an arbitrary brain region and automatically reading out the information encoded in its neural activity remains out of reach. In this work, we build towards a first foundation model for neural spiking data that can solve a diverse set of tasks across multiple brain areas. We introduce a novel self-supervised modeling approach for population activity in which the model alternates between masking out and reconstructing neural activity across different time steps, neurons, and brain regions. To evaluate our approach, we design unsupervised and supervised prediction tasks using the International Brain Laboratory repeated site dataset, which is comprised of Neuropixels recordings targeting the same brain locations across 48 animals and experimental sessions. The prediction tasks include single-neuron and region-level activity prediction, forward prediction, and behavior decoding. We demonstrate that our multi-task-masking (MtM) approach significantly improves the performance of current state-of-the-art population models and enables multi-task learning. We also show that by training on multiple animals, we can improve the generalization ability of the model to unseen animals, paving the way for a foundation model of the brain at single-cell, single-spike resolution.
Towards a "Universal Translator" for Neural Dynamics at Single-Cell, Single-Spike Resolution
[ "Yizi Zhang", "Yanchen Wang", "Donato M. Jiménez-Benetó", "Zixuan Wang", "Mehdi Azabou", "Blake Aaron Richards", "Renee Tung", "Olivier Winter", "International Brain Laboratory", "Eva L Dyer", "Liam Paninski", "Cole Lincoln Hurwitz" ]
NeurIPS.cc/2024/Conference
2407.14668
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nQl8EjyMzh
@inproceedings{ shysheya2024on, title={On conditional diffusion models for {PDE} simulations}, author={Aliaksandra Shysheya and Cristiana Diaconu and Federico Bergamin and Paris Perdikaris and Jos{\'e} Miguel Hern{\'a}ndez-Lobato and Richard E. Turner and Emile Mathieu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nQl8EjyMzh} }
Modelling partial differential equations (PDEs) is of crucial importance in science and engineering, and it includes tasks ranging from forecasting to inverse problems, such as data assimilation. However, most previous numerical and machine learning approaches that target forecasting cannot be applied out-of-the-box for data assimilation. Recently, diffusion models have emerged as a powerful tool for conditional generation, being able to flexibly incorporate observations without retraining. In this work, we perform a comparative study of score-based diffusion models for forecasting and assimilation of sparse observations. In particular, we focus on diffusion models that are either trained in a conditional manner, or conditioned after unconditional training. We address the shortcomings of existing models by proposing 1) an autoregressive sampling approach, that significantly improves performance in forecasting, 2) a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths, and 3) a hybrid model which employs flexible pre-training conditioning on initial conditions and flexible post-training conditioning to handle data assimilation. We empirically show that these modifications are crucial for successfully tackling the combination of forecasting and data assimilation, a task commonly encountered in real-world scenarios.
On conditional diffusion models for PDE simulations
[ "Aliaksandra Shysheya", "Cristiana Diaconu", "Federico Bergamin", "Paris Perdikaris", "José Miguel Hernández-Lobato", "Richard E. Turner", "Emile Mathieu" ]
NeurIPS.cc/2024/Conference
2410.16415
[ "https://github.com/cambridge-mlg/pdediff" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nN6NSd1Qds
@inproceedings{ kataria2024ugc, title={{UGC}: Universal Graph Coarsening}, author={Mohit Kataria and Sandeep Kumar and Jayadeva Jayadeva}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nN6NSd1Qds} }
In the era of big data, graphs have emerged as a natural representation of intricate relationships. However, graph sizes often become unwieldy, leading to storage, computation, and analysis challenges. A crucial demand arises for methods that can effectively downsize large graphs while retaining vital insights. Graph coarsening seeks to simplify large graphs while maintaining the basic statistics of the graphs, such as spectral properties and $\epsilon$-similarity in the coarsened graph. This ensures that downstream processes are more efficient and effective. Most published methods are suitable for homophilic datasets, limiting their universal use. We propose **U**niversal **G**raph **C**oarsening (UGC), a framework equally suitable for homophilic and heterophilic datasets. UGC integrates node attributes and adjacency information, leveraging the dataset's heterophily factor. Results on benchmark datasets demonstrate that UGC preserves spectral similarity while coarsening. In comparison to existing methods, UGC is 4x to 15x faster, has lower eigen-error, and yields superior performance on downstream processing tasks even at 70% coarsening ratios.
UGC: Universal Graph Coarsening
[ "Mohit Kataria", "Sandeep Kumar", "Jayadeva Jayadeva" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nLSLbJgL7f
@inproceedings{ zhao2024to, title={To Err Like Human: Affective Bias-Inspired Measures for Visual Emotion Recognition Evaluation}, author={Chenxi Zhao and Jinglei Shi and Liqiang Nie and Jufeng Yang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nLSLbJgL7f} }
Accuracy is a commonly adopted performance metric in various classification tasks, which measures the proportion of correctly classified samples among all samples. It assumes equal importance for all classes, hence equal severity for misclassifications. However, in the task of emotional classification, due to the psychological similarities between emotions, misclassifying a certain emotion into one class may be more severe than another, e.g., misclassifying 'excitement' as 'anger' apparently is more severe than as 'awe'. Albeit high meaningful for many applications, metrics capable of measuring these cases of misclassifications in visual emotion recognition tasks have yet to be explored. In this paper, based on Mikel's emotion wheel from psychology, we propose a novel approach for evaluating the performance in visual emotion recognition, which takes into account the distance on the emotion wheel between different emotions to mimic the psychological nuances of emotions. Experimental results in semi-supervised learning on emotion recognition and user study have shown that our proposed metrics is more effective than the accuracy to assess the performance and conforms to the cognitive laws of human emotions. The code is available at https://github.com/ZhaoChenxi-nku/ECC.
To Err Like Human: Affective Bias-Inspired Measures for Visual Emotion Recognition Evaluation
[ "Chenxi Zhao", "Jinglei Shi", "Liqiang Nie", "Jufeng Yang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nLQeE8QGGe
@inproceedings{ wagenmaker2024active, title={Active design of two-photon holographic stimulation for identifying neural population dynamics}, author={Andrew Wagenmaker and Lu Mi and Marton Rozsa and Matthew Storm Bull and Karel Svoboda and Kayvon Daie and Matthew D. Golub and Kevin Jamieson}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nLQeE8QGGe} }
Recent advances in techniques for monitoring and perturbing neural populations have greatly enhanced our ability to study circuits in the brain. In particular, two-photon holographic optogenetics now enables precise photostimulation of experimenter-specified groups of individual neurons, while simultaneous two-photon calcium imaging enables the measurement of ongoing and induced activity across the neural population. Despite the enormous space of potential photostimulation patterns and the time-consuming nature of photostimulation experiments, very little algorithmic work has been done to determine the most effective photostimulation patterns for identifying the neural population dynamics. Here, we develop methods to efficiently select which neurons to stimulate such that the resulting neural responses will best inform a dynamical model of the neural population activity. Using neural population responses to photostimulation in mouse motor cortex, we demonstrate the efficacy of a low-rank linear dynamical systems model, and develop an active learning procedure which takes advantage of low-rank structure to determine informative photostimulation patterns. We demonstrate our approach on both real and synthetic data, obtaining in some cases as much as a two-fold reduction in the amount of data required to reach a given predictive power. Our active stimulation design method is based on a novel active learning procedure for low-rank regression, which may be of independent interest.
Active design of two-photon holographic stimulation for identifying neural population dynamics
[ "Andrew Wagenmaker", "Lu Mi", "Marton Rozsa", "Matthew Storm Bull", "Karel Svoboda", "Kayvon Daie", "Matthew D. Golub", "Kevin Jamieson" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nK6OnCpd3n
@inproceedings{ luo2024textaware, title={Text-Aware Diffusion for Policy Learning}, author={Calvin Luo and Mandy He and Zilai Zeng and Chen Sun}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nK6OnCpd3n} }
Training an agent to achieve particular goals or perform desired behaviors is often accomplished through reinforcement learning, especially in the absence of expert demonstrations. However, supporting novel goals or behaviors through reinforcement learning requires the ad-hoc design of appropriate reward functions, which quickly becomes intractable. To address this challenge, we propose Text-Aware Diffusion for Policy Learning (TADPoLe), which uses a pretrained, frozen text-conditioned diffusion model to compute dense zero-shot reward signals for text-aligned policy learning. We hypothesize that large-scale pretrained generative models encode rich priors that can supervise a policy to behave not only in a text-aligned manner, but also in alignment with a notion of naturalness summarized from internet-scale training data. In our experiments, we demonstrate that TADPoLe is able to learn policies for novel goal-achievement and continuous locomotion behaviors specified by natural language, in both Humanoid and Dog environments. The behaviors are learned zero-shot without ground-truth rewards or expert demonstrations, and are qualitatively more natural according to human evaluation. We further show that TADPoLe performs competitively when applied to robotic manipulation tasks in the Meta-World environment, without having access to any in-domain demonstrations.
Text-Aware Diffusion for Policy Learning
[ "Calvin Luo", "Mandy He", "Zilai Zeng", "Chen Sun" ]
NeurIPS.cc/2024/Conference
2407.01903
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nJvkQSu9Z5
@inproceedings{ mcmahan2024shared, title={Shared Autonomy with {IDA}: Interventional Diffusion Assistance}, author={Brandon J McMahan and Zhenghao Peng and Bolei Zhou and Jonathan Kao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nJvkQSu9Z5} }
The rapid development of artificial intelligence (AI) has unearthed the potential to assist humans in controlling advanced technologies. Shared autonomy (SA) facilitates control by combining inputs from a human pilot and an AI copilot. In prior SA studies, the copilot is constantly active in determining the action played at each time step. This limits human autonomy that may have deleterious effects on performance. In general, the amount of helpful copilot assistance varies greatly depending on the task dynamics. We therefore hypothesized that human autonomy and SA performance improves through dynamic and selective copilot intervention. To address this, we develop a goal-agnostic intervention assistance (IA) that dynamically shares control by having the copilot intervene only when the expected value of the copilot’s action exceeds that of the human’s action. We implement IA with a diffusion copilot (termed IDA) trained on expert demonstrations with goal masking. We prove that IDA performance is lower bounded by human performance, so that IDA does not negatively impact human control. In experiments with simulated human pilots, we show that IDA achieves higher performance than both pilot-only and traditional SA control in variants of the Reacher environment and Lunar Lander. We then demonstrate with human-in the-loop experiments that IDA achieves better control in Lunar Lander and that human participants experience greater autonomy and prefer IDA over pilot-only and traditional SA control. We attribute the success of IDA to preserving human autonomy while simultaneously offering assistance to prevent the human from entering universally bad states.
Shared Autonomy with IDA: Interventional Diffusion Assistance
[ "Brandon J McMahan", "Zhenghao Peng", "Bolei Zhou", "Jonathan Kao" ]
NeurIPS.cc/2024/Conference
2409.15317
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nJKfNiEBvq
@inproceedings{ lin2024learning, title={Learning the Latent Causal Structure for Modeling Label Noise}, author={Yexiong Lin and Yu Yao and Tongliang Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nJKfNiEBvq} }
In label-noise learning, the noise transition matrix reveals how an instance transitions from its clean label to its noisy label. Accurately estimating an instance's noise transition matrix is crucial for estimating its clean label. However, when only a noisy dataset is available, noise transition matrices can be estimated only for some "special" instances. To leverage these estimated transition matrices to help estimate the transition matrices of other instances, it is essential to explore relations between the matrices of these "special" instances and those of others. Existing studies typically build the relation by explicitly defining the similarity between the estimated noise transition matrices of "special" instances and those of other instances. However, these similarity-based assumptions are hard to validate and may not align with real-world data. If these assumptions fail, both noise transition matrices and clean labels cannot be accurately estimated. In this paper, we found that by learning the latent causal structure governing the generating process of noisy data, we can estimate noise transition matrices without the need for similarity-based assumptions. Unlike previous generative label-noise learning methods, we consider causal relations between latent causal variables and model them with a learnable graphical model. Utilizing only noisy data, our method can effectively learn the latent causal structure. Experimental results on various noisy datasets demonstrate that our method achieves state-of-the-art performance in estimating noise transition matrices, which leads to improved classification accuracy. The code is available at: https://github.com/tmllab/2024_NeurIPS_CSGN.
Learning the Latent Causal Structure for Modeling Label Noise
[ "Yexiong Lin", "Yu Yao", "Tongliang Liu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nIeufGuQ9x
@inproceedings{ zhang2024diffsf, title={Diff{SF}: Diffusion Models for Scene Flow Estimation}, author={Yushan Zhang and Bastian Wandt and Maria Magnusson and Michael Felsberg}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nIeufGuQ9x} }
Scene flow estimation is an essential ingredient for a variety of real-world applications, especially for autonomous agents, such as self-driving cars and robots. While recent scene flow estimation approaches achieve reasonable accuracy, their applicability to real-world systems additionally benefits from a reliability measure. Aiming at improving accuracy while additionally providing an estimate for uncertainty, we propose DiffSF that combines transformer-based scene flow estimation with denoising diffusion models. In the diffusion process, the ground truth scene flow vector field is gradually perturbed by adding Gaussian noise. In the reverse process, starting from randomly sampled Gaussian noise, the scene flow vector field prediction is recovered by conditioning on a source and a target point cloud. We show that the diffusion process greatly increases the robustness of predictions compared to prior approaches resulting in state-of-the-art performance on standard scene flow estimation benchmarks. Moreover, by sampling multiple times with different initial states, the denoising process predicts multiple hypotheses, which enables measuring the output uncertainty, allowing our approach to detect a majority of the inaccurate predictions. The code is available at https://github.com/ZhangYushan3/DiffSF.
DiffSF: Diffusion Models for Scene Flow Estimation
[ "Yushan Zhang", "Bastian Wandt", "Maria Magnusson", "Michael Felsberg" ]
NeurIPS.cc/2024/Conference
2403.05327
[ "https://github.com/zhangyushan3/diffsf" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=nF34qXcY0b
@inproceedings{ musavi2024identification, title={Identification of Analytic Nonlinear Dynamical Systems with Non-asymptotic Guarantees}, author={Negin Musavi and Ziyao Guo and Geir Dullerud and Yingying Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nF34qXcY0b} }
This paper focuses on the system identification of an important class of nonlinear systems: linearly parameterized nonlinear systems, which enjoys wide applications in robotics and other mechanical systems. We consider two system identification methods: least-squares estimation (LSE), which is a point estimation method; and set-membership estimation (SME), which estimates an uncertainty set that contains the true parameters. We provide non-asymptotic convergence rates for LSE and SME under i.i.d. control inputs and control policies with i.i.d. random perturbations, both of which are considered as non-active-exploration inputs. Compared with the counter-example based on piecewise-affine systems in the literature, the success of non-active exploration in our setting relies on a key assumption on the system dynamics: we require the system functions to be real-analytic. Our results, together with the piecewise-affine counter-example, reveal the importance of differentiability in nonlinear system identification through non-active exploration. Lastly, we numerically compare our theoretical bounds with the empirical performance of LSE and SME on a pendulum example and a quadrotor example.
Identification of Analytic Nonlinear Dynamical Systems with Non-asymptotic Guarantees
[ "Negin Musavi", "Ziyao Guo", "Geir Dullerud", "Yingying Li" ]
NeurIPS.cc/2024/Conference
2411.00656
[ "https://github.com/NeginMusavi/real-analytic-nonlinear-sys-id" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nExI4FuKWD
@inproceedings{ jing2024fineclip, title={Fine{CLIP}: Self-distilled Region-based {CLIP} for Better Fine-grained Understanding}, author={Dong Jing and Xiaolong He and Yutian Luo and Nanyi Fei and Guoxing Yang and Wei Wei and Huiwen Zhao and Zhiwu Lu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nExI4FuKWD} }
Contrastive Language-Image Pre-training (CLIP) achieves impressive performance on tasks like image classification and image-text retrieval by learning on large-scale image-text datasets. However, CLIP struggles with dense prediction tasks due to the poor grasp of the fine-grained details. Although existing works pay attention to this issue, they achieve limited improvements and usually sacrifice the important visual-semantic consistency. To overcome these limitations, we propose FineCLIP, which keeps the global contrastive learning to preserve the visual-semantic consistency and further enhances the fine-grained understanding through two innovations: 1) A real-time self-distillation scheme that facilitates the transfer of representation capability from global to local features. 2) A semantically-rich regional contrastive learning paradigm with generated region-text pairs, boosting the local representation capabilities with abundant fine-grained knowledge. Both cooperate to fully leverage diverse semantics and multi-grained complementary information. To validate the superiority of our FineCLIP and the rationality of each design, we conduct extensive experiments on challenging dense prediction and image-level tasks. All the observations demonstrate the effectiveness of FineCLIP.
FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding
[ "Dong Jing", "Xiaolong He", "Yutian Luo", "Nanyi Fei", "Guoxing Yang", "Wei Wei", "Huiwen Zhao", "Zhiwu Lu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nEqU0iCa0s
@inproceedings{ li2024selfdistilled, title={Self-Distilled Depth Refinement with Noisy Poisson Fusion}, author={Jiaqi Li and Yiran Wang and Jinghong Zheng and Zihao Huang and Ke Xian and Zhiguo Cao and Jianming Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nEqU0iCa0s} }
Depth refinement aims to infer high-resolution depth with fine-grained edges and details, refining low-resolution results of depth estimation models. The prevailing methods adopt tile-based manners by merging numerous patches, which lacks efficiency and produces inconsistency. Besides, prior arts suffer from fuzzy depth boundaries and limited generalizability. Analyzing the fundamental reasons for these limitations, we model depth refinement as a noisy Poisson fusion problem with local inconsistency and edge deformation noises. We propose the Self-distilled Depth Refinement (SDDR) framework to enforce robustness against the noises, which mainly consists of depth edge representation and edge-based guidance. With noisy depth predictions as input, SDDR generates low-noise depth edge representations as pseudo-labels by coarse-to-fine self-distillation. Edge-based guidance with edge-guided gradient loss and edge-based fusion loss serves as the optimization objective equivalent to Poisson fusion. When depth maps are better refined, the labels also become more noise-free. Our model can acquire strong robustness to the noises, achieving significant improvements in accuracy, edge quality, efficiency, and generalizability on five different benchmarks. Moreover, directly training another model with edge labels produced by SDDR brings improvements, suggesting that our method could help with training robust refinement models in future works.
Self-Distilled Depth Refinement with Noisy Poisson Fusion
[ "Jiaqi Li", "Yiran Wang", "Jinghong Zheng", "Zihao Huang", "Ke Xian", "Zhiguo Cao", "Jianming Zhang" ]
NeurIPS.cc/2024/Conference
2409.17880
[ "https://github.com/lijia7/sddr" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nBrnfYeKf9
@inproceedings{ guo2024radnerf, title={Rad-Ne{RF}: Ray-decoupled Training of Neural Radiance Field}, author={Lidong Guo and Xuefei Ning and Yonggan Fu and Tianchen Zhao and Zhuoliang Kang and Jincheng Yu and Yingyan Celine Lin and Yu Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nBrnfYeKf9} }
Although the neural radiance field (NeRF) exhibits high-fidelity visualization on the rendering task, it still suffers from rendering defects, especially in complex scenes. In this paper, we delve into the reason for the unsatisfactory performance and conjecture that it comes from interference in the training process. Due to occlusions in complex scenes, a 3D point may be invisible to some rays. On such a point, training with those rays that do not contain valid information about the point might interfere with the NeRF training. Based on the above intuition, we decouple the training process of NeRF in the ray dimension softly and propose a Ray-decoupled Training Framework for neural rendering (Rad-NeRF). Specifically, we construct an ensemble of sub-NeRFs and train a soft gate module to assign the gating scores to these sub-NeRFs based on specific rays. The gate module is jointly optimized with the sub-NeRF ensemble to learn the preference of sub-NeRFs for different rays automatically. Furthermore, we introduce depth-based mutual learning to enhance the rendering consistency among multiple sub-NeRFs and mitigate the depth ambiguity. Experiments on five datasets demonstrate that Rad-NeRF can enhance the rendering performance across a wide range of scene types compared with existing single-NeRF and multi-NeRF methods. With only 0.2% extra parameters, Rad-NeRF improves rendering performance by up to 1.5dB. Code is available at https://github.com/thu-nics/Rad-NeRF.
Rad-NeRF: Ray-decoupled Training of Neural Radiance Field
[ "Lidong Guo", "Xuefei Ning", "Yonggan Fu", "Tianchen Zhao", "Zhuoliang Kang", "Jincheng Yu", "Yingyan Celine Lin", "Yu Wang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nBjmMF2IZU
@inproceedings{ zhai2024finetuning, title={Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning}, author={Yuexiang Zhai and Hao Bai and Zipeng Lin and Jiayi Pan and Shengbang Tong and Yifei Zhou and Alane Suhr and Saining Xie and Yann LeCun and Yi Ma and Sergey Levine}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nBjmMF2IZU} }
Large vision-language models (VLMs) fine-tuned on specialized visual instruction-following data have exhibited impressive language reasoning capabilities across various scenarios. However, this fine-tuning paradigm may not be able to efficiently learn optimal decision-making agents in multi-step goal-directed tasks from interactive environments. To address this challenge, we propose an algorithmic framework that fine-tunes VLMs with reinforcement learning (RL). Specifically, our framework provides a task description and then prompts the VLM to generate chain-of-thought (CoT) reasoning, enabling the VLM to efficiently explore intermediate reasoning steps that lead to the final text-based action. Next, the open-ended text output is parsed into an executable action to interact with the environment to obtain goal-directed task rewards. Finally, our framework uses these task rewards to fine-tune the entire VLM with RL. Empirically, we demonstrate that our proposed framework enhances the decision-making capabilities of VLM agents across various tasks, enabling 7b models to outperform commercial models such as GPT4-V or Gemini. Furthermore, we find that CoT reasoning is a crucial component for performance improvement, as removing the CoT reasoning results in a significant decrease in the overall performance of our method.
Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning
[ "Yuexiang Zhai", "Hao Bai", "Zipeng Lin", "Jiayi Pan", "Shengbang Tong", "Yifei Zhou", "Alane Suhr", "Saining Xie", "Yann LeCun", "Yi Ma", "Sergey Levine" ]
NeurIPS.cc/2024/Conference
2405.10292
[ "" ]
https://huggingface.co/papers/2405.10292
0
1
0
11
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=nBhfIcDnRP
@inproceedings{ chai2024efficient, title={Efficient Graph Matching for Correlated Stochastic Block Models}, author={Shuwen Chai and Miklos Z. Racz}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nBhfIcDnRP} }
We study learning problems on correlated stochastic block models with two balanced communities. Our main result gives the first efficient algorithm for graph matching in this setting. In the most interesting regime where the average degree is logarithmic in the number of vertices, this algorithm correctly matches all but a vanishing fraction of vertices with high probability, whenever the edge correlation parameter $s$ satisfies $s^2 > \alpha \approx 0.338$, where $\alpha$ is Otter's tree-counting constant. Moreover, we extend this to an efficient algorithm for exact graph matching whenever this is information-theoretically possible, positively resolving an open problem of Rácz and Sridhar (NeurIPS 2021). Our algorithm generalizes the recent breakthrough work of Mao, Wu, Xu, and Yu (STOC 2023), which is based on centered subgraph counts of a large family of trees termed chandeliers. A major technical challenge that we overcome is dealing with the additional estimation errors that are necessarily present due to the fact that, in relevant parameter regimes, the latent community partition cannot be exactly recovered from a single graph. As an application of our results, we give an efficient algorithm for exact community recovery using multiple correlated graphs in parameter regimes where it is information-theoretically impossible to do so using just a single graph.
Efficient Graph Matching for Correlated Stochastic Block Models
[ "Shuwen Chai", "Miklos Z. Racz" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nBQHTBVnfr
@inproceedings{ shinde2024geometric, title={Geometric Analysis of Nonlinear Manifold Clustering}, author={Nimita Shinde and Tianjiao Ding and Daniel Robinson and Rene Vidal}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nBQHTBVnfr} }
Manifold clustering is an important problem in motion and video segmentation, natural image clustering, and other applications where high-dimensional data lie on multiple, low-dimensional, nonlinear manifolds. While current state-of-the-art methods on large-scale datasets such as CIFAR provide good empirical performance, they do not have any proof of theoretical correctness. In this work, we propose a method that clusters data belonging to a union of nonlinear manifolds. Furthermore, for a given input data sample $y$ belonging to the $l$th manifold $\mathcal{M}_l$, we provide geometric conditions that guarantee a manifold-preserving representation of $y$ can be recovered from the solution to the proposed model. The geometric conditions require that (i) $\mathcal{M}_l$ is well-sampled in the neighborhood of $y$, with the sampling density given as a function of the curvature, and (ii) $\mathcal{M}_l$ is sufficiently separated from the other manifolds. In addition to providing proof of correctness in this setting, a numerical comparison with state-of-the-art methods on CIFAR datasets shows that our method performs competitively although marginally worse than methods without
Geometric Analysis of Nonlinear Manifold Clustering
[ "Nimita Shinde", "Tianjiao Ding", "Daniel Robinson", "Rene Vidal" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nBOdYBptWW
@inproceedings{ gao2024units, title={Uni{TS}: A Unified Multi-Task Time Series Model}, author={Shanghua Gao and Teddy Koker and Owen Queen and Thomas Hartvigsen and Theodoros Tsiligkaridis and Marinka Zitnik}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nBOdYBptWW} }
Although pre-trained transformers and reprogrammed text-based LLMs have shown strong performance on time series tasks, the best-performing architectures vary widely across tasks, with most models narrowly focused on specific areas, such as time series forecasting. Unifying predictive and generative time series tasks within a single model remains challenging. We introduce UniTS, a unified multi-task time series model that utilizes task tokenization to integrate predictive and generative tasks into a single framework. UniTS employs a modified transformer block to capture universal time series representations, enabling transferability from a heterogeneous, multi-domain pre-training dataset—characterized by diverse dynamic patterns, sampling rates, and temporal scales—to a wide range of downstream datasets with varied task specifications and data domains. Tested on 38 datasets across human activity sensors, healthcare, engineering, and finance, UniTS achieves superior performance compared to 12 forecasting models, 20 classification models, 18 anomaly detection models, and 16 imputation models, including adapted text-based LLMs. UniTS also demonstrates strong few-shot and prompt capabilities when applied to new domains and tasks. In single-task settings, UniTS outperforms competitive task-specialized time series models. Code and datasets are available at https://github.com/mims-harvard/UniTS.
UniTS: A Unified Multi-Task Time Series Model
[ "Shanghua Gao", "Teddy Koker", "Owen Queen", "Thomas Hartvigsen", "Theodoros Tsiligkaridis", "Marinka Zitnik" ]
NeurIPS.cc/2024/Conference
2403.00131
[ "https://github.com/mims-harvard/UniTS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nAnEStxyfy
@inproceedings{ wagner2024generating, title={Generating Highly Designable Proteins with Geometric Algebra Flow Matching}, author={Simon Wagner and Leif Seute and Vsevolod Viliuga and Nicolas Wolf and Frauke Gr{\"a}ter and Jan Stuehmer}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nAnEStxyfy} }
We introduce a generative model for protein backbone design utilizing geometric products and higher order message passing. In particular, we propose Clifford Frame Attention (CFA), an extension of the invariant point attention (IPA) architecture from AlphaFold2, in which the backbone residue frames and geometric features are represented in the projective geometric algebra. This enables to construct geometrically expressive messages between residues, including higher order terms, using the bilinear operations of the algebra. We evaluate our architecture by incorporating it into the framework of FrameFlow, a state-of-the-art flow matching model for protein backbone generation. The proposed model achieves high designability, diversity and novelty, while also sampling protein backbones that follow the statistical distribution of secondary structure elements found in naturally occurring proteins, a property so far only insufficiently achieved by many state-of-the-art generative models.
Generating Highly Designable Proteins with Geometric Algebra Flow Matching
[ "Simon Wagner", "Leif Seute", "Vsevolod Viliuga", "Nicolas Wolf", "Frauke Gräter", "Jan Stuehmer" ]
NeurIPS.cc/2024/Conference
2411.05238
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=nAIhvNy15T
@inproceedings{ kynk{\"a}{\"a}nniemi2024applying, title={Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models}, author={Tuomas Kynk{\"a}{\"a}nniemi and Miika Aittala and Tero Karras and Samuli Laine and Timo Aila and Jaakko Lehtinen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nAIhvNy15T} }
Guidance is a crucial technique for extracting the best performance out of image-generating diffusion models. Traditionally, a constant guidance weight has been applied throughout the sampling chain of an image. We show that guidance is clearly harmful toward the beginning of the chain (high noise levels), largely unnecessary toward the end (low noise levels), and only beneficial in the middle. We thus restrict it to a specific range of noise levels, improving both the inference speed and result quality. This limited guidance interval improves the record FID in ImageNet-512 significantly, from 1.81 to 1.40. We show that it is quantitatively and qualitatively beneficial across different sampler parameters, network architectures, and datasets, including the large-scale setting of Stable Diffusion XL. We thus suggest exposing the guidance interval as a hyperparameter in all diffusion models that use guidance.
Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models
[ "Tuomas Kynkäänniemi", "Miika Aittala", "Tero Karras", "Samuli Laine", "Timo Aila", "Jaakko Lehtinen" ]
NeurIPS.cc/2024/Conference
2404.07724
[ "https://github.com/kynkaat/guidance-interval" ]
https://huggingface.co/papers/2404.07724
3
12
1
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=nA4Q983a1v
@inproceedings{ morad2024recurrent, title={Recurrent Reinforcement Learning with Memoroids}, author={Steven Morad and Chris Lu and Ryan Kortvelesy and Stephan Liwicki and Jakob Nicolaus Foerster and Amanda Prorok}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=nA4Q983a1v} }
Memory models such as Recurrent Neural Networks (RNNs) and Transformers address Partially Observable Markov Decision Processes (POMDPs) by mapping trajectories to latent Markov states. Neither model scales particularly well to long sequences, especially compared to an emerging class of memory models called Linear Recurrent Models. We discover that the recurrent update of these models resembles a monoid, leading us to reformulate existing models using a novel monoid-based framework that we call memoroids. We revisit the traditional approach to batching in recurrent reinforcement learning, highlighting theoretical and empirical deficiencies. We leverage memoroids to propose a batching method that improves sample efficiency, increases the return, and simplifies the implementation of recurrent loss functions in reinforcement learning.
Recurrent Reinforcement Learning with Memoroids
[ "Steven Morad", "Chris Lu", "Ryan Kortvelesy", "Stephan Liwicki", "Jakob Nicolaus Foerster", "Amanda Prorok" ]
NeurIPS.cc/2024/Conference
2402.09900
[ "https://github.com/proroklab/memory-monoids" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=n9xVaQMJNK
@inproceedings{ zhou2024fewshot, title={Few-Shot Adversarial Prompt Learning on Vision-Language Models}, author={Yiwei Zhou and Xiaobo Xia and Zhiwei Lin and Bo Han and Tongliang Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=n9xVaQMJNK} }
The vulnerability of deep neural networks to imperceptible adversarial perturbations has attracted widespread attention. Inspired by the success of vision-language foundation models, previous efforts achieved zero-shot adversarial robustness by aligning adversarial visual features with text supervision. However, in practice, they are still unsatisfactory due to several issues, including heavy adaptation cost, suboptimal text supervision, and uncontrolled natural generalization capacity. In this paper, to address these issues, we propose a few-shot adversarial prompt framework where adapting input sequences with limited data makes significant adversarial robustness improvement. Specifically, we achieve this by providing adversarially correlated text supervision that is end-to-end learned from adversarial examples. We also propose a novel training objective that enhances the consistency of multi-modal features while encourages differentiated uni-modal features between natural and adversarial examples. The proposed framework gives access to learn adversarial text supervision, which provides superior cross-modal adversarial alignment and matches state-of-the-art zero-shot adversarial robustness with only 1\% training data. Code is available at: https://github.com/lionel-w2/FAP.
Few-Shot Adversarial Prompt Learning on Vision-Language Models
[ "Yiwei Zhou", "Xiaobo Xia", "Zhiwei Lin", "Bo Han", "Tongliang Liu" ]
NeurIPS.cc/2024/Conference
2403.14774
[ "https://github.com/lionel-w2/fap" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=n60xBFZWrk
@inproceedings{ nock2024hyperbolic, title={Hyperbolic Embeddings of Supervised Models}, author={Richard Nock and Ehsan Amid and Frank Nielsen and Alexander Soen and Manfred K Warmuth}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=n60xBFZWrk} }
Models of hyperbolic geometry have been successfully used in ML for two main tasks: embedding *models* in unsupervised learning (*e.g.* hierarchies) and embedding *data*. To our knowledge, there are no approaches that provide embeddings for supervised models; even when hyperbolic geometry provides convenient properties for expressing popular hypothesis classes, such as decision trees (and ensembles). In this paper, we propose a full-fledged solution to the problem in three independent contributions. The first linking the theory of losses for class probability estimation to hyperbolic embeddings in Poincar\'e disk model. The second resolving an issue for a clean, unambiguous embedding of (ensembles of) decision trees in this model. The third showing how to smoothly tweak the Poincar\'e hyperbolic distance to improve its encoding and visualization properties near the border of the disk, a crucial region for our application, while keeping hyperbolicity. This last step has substantial independent interest as it is grounded in a generalization of Leibniz-Newton's fundamental Theorem of calculus.
Hyperbolic Embeddings of Supervised Models
[ "Richard Nock", "Ehsan Amid", "Frank Nielsen", "Alexander Soen", "Manfred K Warmuth" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=n5lLSskwtu
@inproceedings{ yu2024evidential, title={Evidential Mixture Machines: Deciphering Multi-Label Correlations for Active Learning Sensitivity}, author={Dayou Yu and Minghao Li and Weishi Shi and Qi Yu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=n5lLSskwtu} }
Multi-label active learning is a crucial yet challenging area in contemporary machine learning, often complicated by a large and sparse label space. This challenge is further exacerbated in active learning scenarios where labeling resources are constrained. Drawing inspiration from existing mixture of Bernoulli models, which efficiently compress the label space into a more manageable weight coefficient space by learning correlated Bernoulli components, we propose a novel model called Evidential Mixture Machines (EMM). Our model leverages mixture components derived from unsupervised learning in the label space and improves prediction accuracy by predicting weight coefficients following the evidential learning paradigm. These coefficients are aggregated as proxy pseudo counts to enhance component offset predictions. The evidential learning approach provides an uncertainty-aware connection between input features and the predicted coefficients and components. Additionally, our method combines evidential uncertainty with predicted label embedding covariances for active sample selection, creating a richer, multi-source uncertainty metric beyond traditional uncertainty scores. Experiments on synthetic datasets show the effectiveness of evidential uncertainty prediction and EMM's capability to capture label correlations through predicted components. Further testing on real-world datasets demonstrates improved performance compared to existing multi-label active learning methods.
Evidential Mixture Machines: Deciphering Multi-Label Correlations for Active Learning Sensitivity
[ "Dayou Yu", "Minghao Li", "Weishi Shi", "Qi Yu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=n5R6TvBVcX
@inproceedings{ jiang2024wildteaming, title={WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models}, author={Liwei Jiang and Kavel Rao and Seungju Han and Allyson Ettinger and Faeze Brahman and Sachin Kumar and Niloofar Mireshghallah and Ximing Lu and Maarten Sap and Yejin Choi and Nouha Dziri}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=n5R6TvBVcX} }
We introduce WildTeaming, an automatic red-teaming framework that mines in-the-wild user-chatbot interactions to discover 5.7K unique clusters of novel jailbreak tactics, and then composes selections of multiple mined tactics for systematic exploration of novel and even more challenging jailbreaks. Compared to prior work that performed red-teaming via recruited human workers, gradient-based optimization, or iterative revision with large language models (LLMs), our work investigates jailbreaks from chatbot users in-the-wild who were not specifically instructed to break the system. WildTeaming reveals previously unidentified vulnerabilities of frontier LLMs, resulting in more diverse and successful adversarial attacks compared to state-of-the-art jailbreaking methods. While there exist many datasets for jailbreak evaluation, very few open-source datasets exist for jailbreak training, as safety training data has been closed among all frontier models even when their weights are open. Therefore, with WildTeaming we create WildJailbreak, a large-scale open-source synthetic safety dataset with 262K vanilla (direct request) and adversarial (complex jailbreak) prompt-response pairs. In order to mitigate exaggerated safety behaviors, WildJailbreak provides two contrastive types of queries: 1) harmful queries (both vanilla and adversarial) and 2) benign queries that resemble harmful queries in form but contain no harmful intent. As WildJailbreak considerably upgrades the quality and scale of existing safety resources, it uniquely enables us to examine the scaling effects of data and the interplay of data properties and model capabilities during safety training. Through extensive model training and evaluations, we identify the training properties that enable an ideal balance of safety behaviors: appropriate safeguarding without over-refusal, effective handling of both vanilla and adversarial queries, and minimal, if any, decrease in general capabilities. All the components of WildJailbreak contribute to achieving balanced safety behaviors of models
WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models
[ "Liwei Jiang", "Kavel Rao", "Seungju Han", "Allyson Ettinger", "Faeze Brahman", "Sachin Kumar", "Niloofar Mireshghallah", "Ximing Lu", "Maarten Sap", "Yejin Choi", "Nouha Dziri" ]
NeurIPS.cc/2024/Conference
2406.18510
[ "https://github.com/allenai/wildteaming" ]
https://huggingface.co/papers/2406.18510
11
8
1
11
[ "allenai/llama2-13b-WildJailbreak", "allenai/llama2-7b-WildJailbreak", "larenspear/copy_of_wildjailbreak", "larenspear/copy_of_wildjailbreak_13", "iknow-lab/llama-3.2-3B-wildguard-ko-2410", "RichardErkhov/iknow-lab_-_llama-3.2-3B-wildguard-ko-2410-gguf" ]
[ "allenai/wildjailbreak", "walledai/WildJailbreak" ]
[]
[ "allenai/llama2-13b-WildJailbreak", "allenai/llama2-7b-WildJailbreak", "larenspear/copy_of_wildjailbreak", "larenspear/copy_of_wildjailbreak_13", "iknow-lab/llama-3.2-3B-wildguard-ko-2410", "RichardErkhov/iknow-lab_-_llama-3.2-3B-wildguard-ko-2410-gguf" ]
[ "allenai/wildjailbreak", "walledai/WildJailbreak" ]
[]
1
poster
null
https://openreview.net/forum?id=n2dvAKKQoM
@inproceedings{ wang2024taskoriented, title={Task-oriented Time Series Imputation Evaluation via Generalized Representers}, author={Zhixian Wang and Linxiao Yang and Liang Sun and Qingsong Wen and Yi Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=n2dvAKKQoM} }
Time series analysis is widely used in many fields such as power energy, economics, and transportation, including different tasks such as forecasting, anomaly detection, classification, etc. Missing values are widely observed in these tasks, and often leading to unpredictable negative effects on existing methods, hindering their further application. In response to this situation, existing time series imputation methods mainly focus on restoring sequences based on their data characteristics, while ignoring the performance of the restored sequences in downstream tasks. Considering different requirements of downstream tasks (e.g., forecasting), this paper proposes an efficient downstream task-oriented time series imputation evaluation approach. By combining time series imputation with neural network models used for downstream tasks, the gain of different imputation strategies on downstream tasks is estimated without retraining, and the most favorable imputation value for downstream tasks is given by combining different imputation strategies according to the estimated gain.
Task-oriented Time Series Imputation Evaluation via Generalized Representers
[ "Zhixian Wang", "Linxiao Yang", "Liang Sun", "Qingsong Wen", "Yi Wang" ]
NeurIPS.cc/2024/Conference
2410.06652
[ "https://github.com/hkuedl/Task-Oriented-Imputation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=n0arS0DDot
@inproceedings{ lee2024blast, title={{BLAST}: Block-Level Adaptive Structured Matrices for Efficient Deep Neural Network Inference}, author={Changwoo Lee and Soo Min Kwon and Qing Qu and Hun-Seok Kim}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=n0arS0DDot} }
Large-scale foundation models have demonstrated exceptional performance in language and vision tasks. However, the numerous dense matrix-vector operations involved in these large networks pose significant computational challenges during inference. To address these challenges, we introduce the Block-Level Adaptive STructured (BLAST) matrix, designed to learn and leverage efficient structures prevalent in the weight matrices of linear layers within deep learning models. Compared to existing structured matrices, the BLAST matrix offers substantial flexibility, as it can represent various types of structures that are either learned from data or computed from pre-existing weight matrices. We demonstrate the efficiency of using the BLAST matrix for compressing both language and vision tasks, showing that (i) for medium-sized models such as ViT and GPT-2, training with BLAST weights boosts performance while reducing complexity by 70\% and 40\%, respectively; and (ii) for large foundation models such as Llama-7B and DiT-XL, the BLAST matrix achieves a 2x compression while exhibiting the lowest performance degradation among all tested structured matrices. Our code is available at https://github.com/changwoolee/BLAST.
BLAST: Block-Level Adaptive Structured Matrices for Efficient Deep Neural Network Inference
[ "Changwoo Lee", "Soo Min Kwon", "Qing Qu", "Hun-Seok Kim" ]
NeurIPS.cc/2024/Conference
2410.21262
[ "https://github.com/changwoolee/blast" ]
https://huggingface.co/papers/2410.21262
1
1
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=n01yLUy7Mj
@inproceedings{ sammani2024interpreting, title={Interpreting and Analysing {CLIP}'s Zero-Shot Image Classification via Mutual Knowledge}, author={Fawaz Sammani and Nikos Deligiannis}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=n01yLUy7Mj} }
Contrastive Language-Image Pretraining (CLIP) performs zero-shot image classification by mapping images and textual class representation into a shared embedding space, then retrieving the class closest to the image. This work provides a new approach for interpreting CLIP models for image classification from the lens of mutual knowledge between the two modalities. Specifically, we ask: what concepts do both vision and language CLIP encoders learn in common that influence the joint embedding space, causing points to be closer or further apart? We answer this question via an approach of textual concept-based explanations, showing their effectiveness, and perform an analysis encompassing a pool of 13 CLIP models varying in architecture, size and pretraining datasets. We explore those different aspects in relation to mutual knowledge, and analyze zero-shot predictions. Our approach demonstrates an effective and human-friendly way of understanding zero-shot classification decisions with CLIP.
Interpreting and Analysing CLIP's Zero-Shot Image Classification via Mutual Knowledge
[ "Fawaz Sammani", "Nikos Deligiannis" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=mxMvWwyBWe
@inproceedings{ benara2024crafting, title={Crafting Interpretable Embeddings for Language Neuroscience by Asking {LLM}s Questions}, author={Vinamra Benara and Chandan Singh and John Xavier Morris and Richard Antonello and Ion Stoica and Alexander Huth and Jianfeng Gao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=mxMvWwyBWe} }
Large language models (LLMs) have rapidly improved text embeddings for a growing array of natural-language processing tasks. However, their opaqueness and proliferation into scientific domains such as neuroscience have created a growing need for interpretability. Here, we ask whether we can obtain interpretable embeddings through LLM prompting. We introduce question-answering embeddings (QA-Emb), embeddings where each feature represents an answer to a yes/no question asked to an LLM. Training QA-Emb reduces to selecting a set of underlying questions rather than learning model weights. We use QA-Emb to flexibly generate interpretable models for predicting fMRI voxel responses to language stimuli. QA-Emb significantly outperforms an established interpretable baseline, and does so while requiring very few questions. This paves the way towards building flexible feature spaces that can concretize and evaluate our understanding of semantic brain representations. We additionally find that QA-Emb can be effectively approximated with an efficient model, and we explore broader applications in simple NLP tasks.
Crafting Interpretable Embeddings for Language Neuroscience by Asking LLMs Questions
[ "Vinamra Benara", "Chandan Singh", "John Xavier Morris", "Richard Antonello", "Ion Stoica", "Alexander Huth", "Jianfeng Gao" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=mwN1bbD5DQ
@inproceedings{ tian2024learning, title={Learning De-Biased Representations for Remote-Sensing Imagery}, author={Zichen Tian and Zhaozheng Chen and Qianru Sun}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=mwN1bbD5DQ} }
Remote sensing (RS) imagery, which requires specialized satellites to collect and is difficult to annotate, suffers from data scarcity and class imbalance in certain spectrums. Due to their data scarcity, training large-scale RS models from scratch is unrealistic, and the alternative is to transfer pre-trained models by fine-tuning or a more data-efficient method LoRA. Due to class imbalance, transferred models exhibit strong bias, where features of the major class dominate over those of the minor class. In this paper, we propose debLoRA, a generic training approach that works with any LoRA variants to yield debiased features. It is an unsupervised learning approach that can diversify minor class features based on the shared attributes with major classes, where the attributes are obtained by a simple step of clustering. To evaluate it, we conduct extensive experiments in two transfer learning scenarios in the RS domain: from natural to optical RS images, and from optical RS to multi-spectrum RS images. We perform object classification and oriented object detection tasks on the optical RS dataset DOTA and the SAR dataset FUSRS. Results show that our debLoRA consistently surpasses prior arts across these RS adaptation settings, yielding up to 3.3 and 4.7 percentage points gains on the tail classes for natural $\to$ optical RS and optical RS $\to$ multi-spectrum RS adaptations, respectively, while preserving the performance on head classes, substantiating its efficacy and adaptability
Learning De-Biased Representations for Remote-Sensing Imagery
[ "Zichen Tian", "Zhaozheng Chen", "Qianru Sun" ]
NeurIPS.cc/2024/Conference
2410.04546
[ "https://github.com/doem97/deblora" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=muYhNDlxWc
@inproceedings{ chen2024mgf, title={{MGF}: Mixed Gaussian Flow for Diverse Trajectory Prediction}, author={Jiahe Chen and Jinkun Cao and Dahua Lin and Kris M. Kitani and Jiangmiao Pang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=muYhNDlxWc} }
To predict future trajectories, the normalizing flow with a standard Gaussian prior suffers from weak diversity. The ineffectiveness comes from the conflict between the fact of asymmetric and multi-modal distribution of likely outcomes and symmetric and single-modal original distribution and supervision losses. Instead, we propose constructing a mixed Gaussian prior for a normalizing flow model for trajectory prediction. The prior is constructed by analyzing the trajectory patterns in the training samples without requiring extra annotations while showing better expressiveness and being multi-modal and asymmetric. Besides diversity, it also provides better controllability for probabilistic trajectory generation. We name our method Mixed Gaussian Flow (MGF). It achieves state-of-the-art performance in the evaluation of both trajectory alignment and diversity on the popular UCY/ETH and SDD datasets. Code is available at https://github.com/mulplue/MGF.
MGF: Mixed Gaussian Flow for Diverse Trajectory Prediction
[ "Jiahe Chen", "Jinkun Cao", "Dahua Lin", "Kris M. Kitani", "Jiangmiao Pang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster