bibtex_url (null) | proceedings (string, 42 chars) | bibtext (string, 197–848 chars) | abstract (string, 303–3.45k chars) | title (string, 10–159 chars) | authors (sequence, length 1–34, nullable) | id (string, 44 classes) | arxiv_id (string, 0–10 chars) | GitHub (sequence, length 1) | paper_page (string, 899 classes) | n_linked_authors (int64, -1 to 13) | upvotes (int64, -1 to 109) | num_comments (int64, -1 to 13) | n_authors (int64, -1 to 92) | Models (sequence, length 0–100) | Datasets (sequence, length 0–19) | Spaces (sequence, length 0–100) | old_Models (sequence, length 0–100) | old_Datasets (sequence, length 0–19) | old_Spaces (sequence, length 0–100) | paper_page_exists_pre_conf (int64, 0 to 1) | type (string, 2 classes)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=G7QS68ICPJ | @inproceedings{
li2024nimbus,
title={Nimbus: Secure and Efficient Two-Party Inference for Transformers},
author={Zhengyi Li and Kang Yang and Jin Tan and Wen-jie Lu and Haoqi Wu and Xiao Wang and Yu Yu and Derun Zhao and Yancheng Zheng and Minyi Guo and Jingwen Leng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G7QS68ICPJ}
} | Transformer models have gained significant attention due to their power in machine learning tasks. Their extensive deployment has raised concerns about the potential leakage of sensitive information during inference. However, when applied to Transformers, existing approaches based on secure two-party computation (2PC) face twofold efficiency limitations: (1) resource-intensive matrix multiplications in linear layers, and (2) complex non-linear activation functions like $\mathsf{GELU}$ and $\mathsf{Softmax}$. This work presents a new two-party inference framework $\mathsf{Nimbus}$ for Transformer models. Specifically, we propose a new 2PC paradigm to securely compute matrix multiplications based on an outer-product insight, which achieves $2.9\times \sim 12.5\times$ performance improvements compared to the state-of-the-art (SOTA) protocol. Furthermore, through a new observation of utilizing the input distribution, we propose an approach of low-degree polynomial approximation for $\mathsf{GELU}$ and $\mathsf{Softmax}$, which improves the performance of the SOTA polynomial approximation by $2.9\times \sim 4.0\times$, where the average accuracy loss of our approach is 0.08\% compared to the non-2PC inference without privacy. Compared with the SOTA two-party inference, $\mathsf{Nimbus}$ improves the end-to-end performance of $BERT_{base}$ inference by $2.7\times \sim 4.7\times$ across different network settings. | Nimbus: Secure and Efficient Two-Party Inference for Transformers | [
"Zhengyi Li",
"Kang Yang",
"Jin Tan",
"Wen-jie Lu",
"Haoqi Wu",
"Xiao Wang",
"Yu Yu",
"Derun Zhao",
"Yancheng Zheng",
"Minyi Guo",
"Jingwen Leng"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=G7NZljVOol | @inproceedings{
shin2024ltta,
title={L-{TTA}: Lightweight Test-Time Adaptation Using a Versatile Stem Layer},
author={Jin Shin and Hyun Kim},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G7NZljVOol}
} | Test-time adaptation (TTA) is the most realistic methodology for adapting deep learning models to the real world using only unlabeled data from the target domain. Numerous TTA studies in deep learning have aimed at minimizing entropy. However, this necessitates forward/backward processes across the entire model and is limited by the incapability to fully leverage data based solely on entropy. This study presents a groundbreaking TTA solution that involves a departure from the conventional focus on minimizing entropy. Our innovative approach uniquely remodels the stem layer (i.e., the first layer) to emphasize minimizing a new learning criterion, namely, uncertainty. This method requires minimal involvement of the model's backbone, with only the stem layer participating in the TTA process. This approach significantly reduces the memory required for training and enables rapid adaptation to the target domain with minimal parameter updates. Moreover, to maximize data leveraging, the stem layer applies a discrete wavelet transform to the input features. It extracts multi-frequency domains and focuses on minimizing their individual uncertainties. The proposed method integrated into ResNet-26 and ResNet-50 models demonstrates its robustness by achieving outstanding TTA performance while using the least amount of memory compared to existing studies on CIFAR-10-C, ImageNet-C, and Cityscapes-C benchmark datasets. The code is available at https://github.com/janus103/L_TTA. | L-TTA: Lightweight Test-Time Adaptation Using a Versatile Stem Layer | [
"Jin Shin",
"Hyun Kim"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=G7L65B2P0y | @inproceedings{
lee2024an,
title={An effective framework for estimating individualized treatment rules},
author={Joowon Lee and Jared Davis Huling and Guanhua Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G7L65B2P0y}
} | Estimating individualized treatment rules (ITRs) is fundamental in causal inference, particularly for precision medicine applications. Traditional ITR estimation methods rely on inverse probability weighting (IPW) to address confounding factors and $L_{1}$-penalization for simplicity and interpretability. However, IPW can introduce statistical bias without precise propensity score modeling, while $L_1$-penalization makes the objective non-smooth, leading to computational bias and requiring subgradient methods. In this paper, we propose a unified ITR estimation framework formulated as a constrained, weighted, and smooth convex optimization problem. The optimal ITR can be robustly and effectively computed by projected gradient descent. Our comprehensive theoretical analysis reveals that weights that balance the spectrum of a `weighted design matrix' improve both the optimization and likelihood landscapes, yielding improved computational and statistical estimation guarantees. In particular, this is achieved by distributional covariate balancing weights, which are model-free alternatives to IPW. Extensive simulations and applications demonstrate that our framework achieves significant gains in both robustness and effectiveness for ITR learning against existing methods. | An effective framework for estimating individualized treatment rules | [
"Joowon Lee",
"Jared Davis Huling",
"Guanhua Chen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=G5lMFOtFHa | @inproceedings{
sadrtdinov2024where,
title={Where Do Large Learning Rates Lead Us?},
author={Ildus Sadrtdinov and Maxim Kodryan and Eduard Pokonechny and Ekaterina Lobacheva and Dmitry Vetrov},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G5lMFOtFHa}
} | It is generally accepted that starting neural networks training with large learning rates (LRs) improves generalization. Following a line of research devoted to understanding this effect, we conduct an empirical study in a controlled setting focusing on two questions: 1) how large an initial LR is required for obtaining optimal quality, and 2) what are the key differences between models trained with different LRs? We discover that only a narrow range of initial LRs slightly above the convergence threshold lead to optimal results after fine-tuning with a small LR or weight averaging. By studying the local geometry of reached minima, we observe that using LRs from this optimal range allows for the optimization to locate a basin that only contains high-quality minima. Additionally, we show that these initial LRs result in a sparse set of learned features, with a clear focus on those most relevant for the task. In contrast, starting training with too small LRs leads to unstable minima and attempts to learn all features simultaneously, resulting in poor generalization. Conversely, using initial LRs that are too large fails to detect a basin with good solutions and extract meaningful patterns from the data. | Where Do Large Learning Rates Lead Us? | [
"Ildus Sadrtdinov",
"Maxim Kodryan",
"Eduard Pokonechny",
"Ekaterina Lobacheva",
"Dmitry Vetrov"
] | NeurIPS.cc/2024/Conference | 2410.22113 | [
"https://github.com/isadrtdinov/understanding-large-lrs"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=G522UpazH3 | @inproceedings{
fan2024transferability,
title={Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness},
author={Mingyuan Fan and Xiaodan Li and Cen Chen and Wenmeng Zhou and Yaliang Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G522UpazH3}
} | A prevailing belief in the attack and defense community is that higher flatness of adversarial examples enables better cross-model transferability, leading to a growing interest in employing sharpness-aware minimization and its variants. However, the theoretical relationship between the transferability of adversarial examples and their flatness has not been well established, making the belief questionable. To bridge this gap, we embark on a theoretical investigation and, for the first time, derive a theoretical bound for the transferability of adversarial examples with few practical assumptions. Our analysis challenges this belief by demonstrating that the increased flatness of adversarial examples does not necessarily guarantee improved transferability. Moreover, building upon the theoretical analysis, we propose TPA, a Theoretically Provable Attack that optimizes a surrogate of the derived bound to craft adversarial examples. Extensive experiments across widely used benchmark datasets and various real-world applications show that TPA can craft more transferable adversarial examples compared to state-of-the-art baselines. We hope that these results can recalibrate preconceived impressions within the community and facilitate the development of stronger adversarial attack and defense mechanisms. | Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness | [
"Mingyuan Fan",
"Xiaodan Li",
"Cen Chen",
"Wenmeng Zhou",
"Yaliang Li"
] | NeurIPS.cc/2024/Conference | 2311.06423 | [
"https://github.com/fmy266/tpa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=G4vFNmraxj | @inproceedings{
gubina2024hybrid,
title={Hybrid Generative {AI} for De Novo Design of Co-Crystals with Enhanced Tabletability},
author={Nina Gubina and Andrei Dmitrenko and Gleb Vitalevich Solovev and Lyubov Yamshchikova and Oleg Petrov and Ivan Lebedev and Nikita Serov and Grigorii Kirgizov and Nikolay Nikitin and Vladimir Vinogradov},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G4vFNmraxj}
} | Co-crystallization is an accessible way to control physicochemical characteristics of organic crystals, which finds many biomedical applications. In this work, we present Generative Method for Co-crystal Design (GEMCODE), a novel pipeline for automated co-crystal screening based on the hybridization of deep generative models and evolutionary optimization for broader exploration of the target chemical space. GEMCODE enables fast *de novo* co-crystal design with target tabletability profiles, which is crucial for the development of pharmaceuticals. With a series of experimental studies highlighting validation and discovery cases, we show that GEMCODE is effective even under realistic computational constraints. Furthermore, we explore the potential of language models in generating co-crystals. Finally, we present numerous previously unknown co-crystals predicted by GEMCODE and discuss its potential in accelerating drug development. | Hybrid Generative AI for De Novo Design of Co-Crystals with Enhanced Tabletability | [
"Nina Gubina",
"Andrei Dmitrenko",
"Gleb Vitalevich Solovev",
"Lyubov Yamshchikova",
"Oleg Petrov",
"Ivan Lebedev",
"Nikita Serov",
"Grigorii Kirgizov",
"Nikolay Nikitin",
"Vladimir Vinogradov"
] | NeurIPS.cc/2024/Conference | 2410.17005 | [
"https://github.com/ai-chem/gemcode"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=G2dYZJO4BE | @inproceedings{
kostin2024achievable,
title={Achievable distributional robustness when the robust risk is only partially identified},
author={Julia Kostin and Nicola Gnecco and Fanny Yang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G2dYZJO4BE}
} | In safety-critical applications, machine learning models should generalize well under worst-case distribution shifts, that is, have a small robust risk. Invariance-based algorithms can provably take advantage of structural assumptions on the shifts when the training distributions are heterogeneous enough to identify the robust risk. However, in practice, such identifiability conditions are rarely satisfied – a scenario so far underexplored in the theoretical literature. In this paper, we aim to fill the gap and propose to study the more general setting of partially identifiable robustness. In particular, we define a new risk measure, the identifiable robust risk, and its corresponding (population) minimax quantity that is an algorithm-independent measure for the best achievable robustness under partial identifiability. We introduce these concepts broadly, and then study them within the framework of linear structural causal models for concreteness of the presentation. We use the introduced minimax quantity to show how previous approaches provably achieve suboptimal robustness in the partially identifiable case. We confirm our findings through empirical simulations and real-world experiments and demonstrate how the test error of existing robustness methods grows increasingly suboptimal as the proportion of previously unseen test directions increases. | Achievable distributional robustness when the robust risk is only partially identified | [
"Julia Kostin",
"Nicola Gnecco",
"Fanny Yang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=G24fOpC3JE | @inproceedings{
cai2024continuous,
title={Continuous Temporal Domain Generalization},
author={Zekun Cai and Guangji Bai and Renhe Jiang and Xuan Song and Liang Zhao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G24fOpC3JE}
} | Temporal Domain Generalization (TDG) addresses the challenge of training predictive models under temporally varying data distributions. Traditional TDG approaches typically focus on domain data collected at fixed, discrete time intervals, which limits their capability to capture the inherent dynamics within continuous-evolving and irregularly-observed temporal domains. To overcome this, this work formalizes the concept of Continuous Temporal Domain Generalization (CTDG), where domain data are derived from continuous times and are collected at arbitrary times. CTDG tackles critical challenges including: 1) Characterizing the continuous dynamics of both data and models, 2) Learning complex high-dimensional nonlinear dynamics, and 3) Optimizing and controlling the generalization across continuous temporal domains. To address them, we propose a Koopman operator-driven continuous temporal domain generalization (Koodos) framework. We formulate the problem within a continuous dynamic system and leverage the Koopman theory to learn the underlying dynamics; the framework is further enhanced with a comprehensive optimization strategy equipped with analysis and control driven by prior knowledge of the dynamics patterns. Extensive experiments demonstrate the effectiveness and efficiency of our approach. The code can be found at: https://github.com/Zekun-Cai/Koodos. | Continuous Temporal Domain Generalization | [
"Zekun Cai",
"Guangji Bai",
"Renhe Jiang",
"Xuan Song",
"Liang Zhao"
] | NeurIPS.cc/2024/Conference | 2405.16075 | [
"https://github.com/zekun-cai/koodos"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=G0yxFmP87g | @inproceedings{
fu2024amoeballm,
title={Amoeba{LLM}: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment},
author={Yonggan Fu and Zhongzhi Yu and Junwei Li and Jiayi Qian and Yongan Zhang and Xiangchi Yuan and Dachuan Shi and Roman Yakunin and Yingyan Celine Lin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G0yxFmP87g}
} | Motivated by the transformative capabilities of large language models (LLMs) across various natural language tasks, there has been a growing demand to deploy these models effectively across diverse real-world applications and platforms. However, the challenge of efficiently deploying LLMs has become increasingly pronounced due to the varying application-specific performance requirements and the rapid evolution of computational platforms, which feature diverse resource constraints and deployment flows. These varying requirements necessitate LLMs that can adapt their structures (depth and width) for optimal efficiency across different platforms and application specifications. To address this critical gap, we propose AmoebaLLM, a novel framework designed to enable the instant derivation of LLM subnets of arbitrary shapes, which achieve the accuracy-efficiency frontier and can be extracted immediately after a one-time fine-tuning. In this way, AmoebaLLM significantly facilitates rapid deployment tailored to various platforms and applications. Specifically, AmoebaLLM integrates three innovative components: (1) a knowledge-preserving subnet selection strategy that features a dynamic-programming approach for depth shrinking and an importance-driven method for width shrinking; (2) a shape-aware mixture of LoRAs to mitigate gradient conflicts among subnets during fine-tuning; and (3) an in-place distillation scheme with loss-magnitude balancing as the fine-tuning objective. Extensive experiments validate that AmoebaLLM not only sets new standards in LLM adaptability but also successfully delivers subnets that achieve state-of-the-art trade-offs between accuracy and efficiency. | AmoebaLLM: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment | [
"Yonggan Fu",
"Zhongzhi Yu",
"Junwei Li",
"Jiayi Qian",
"Yongan Zhang",
"Xiangchi Yuan",
"Dachuan Shi",
"Roman Yakunin",
"Yingyan Celine Lin"
] | NeurIPS.cc/2024/Conference | 2411.10606 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=G0v0TxX01N | @inproceedings{
ye2024diffusion,
title={Diffusion of Thought: Chain-of-Thought Reasoning in Diffusion Language Models},
author={Jiacheng Ye and Shansan Gong and Liheng Chen and Lin Zheng and Jiahui Gao and Han Shi and Chuan Wu and Xin Jiang and Zhenguo Li and Wei Bi and Lingpeng Kong},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G0v0TxX01N}
} | Recently, diffusion models have garnered significant interest in the field of text processing due to their many potential advantages compared to conventional autoregressive models.
In this work, we propose Diffusion-of-Thought (DoT), a novel approach that integrates diffusion models with Chain-of-Thought, a well-established technique for improving the reasoning ability of autoregressive language models. In contrast to autoregressive language models that make decisions in a left-to-right, token-by-token manner, DoT allows reasoning steps to diffuse over time through a diffusion language model and offers greater flexibility in trading-off computation for reasoning performance. Our experimental results demonstrate the effectiveness of DoT in multi-digit multiplication, boolean logic, and grade school math problems. In addition to that, DoT showcases promising self-correction abilities and benefits from existing reasoning-enhancing techniques like self-consistency decoding. Our findings contribute to the understanding and development of reasoning with diffusion language models. | Diffusion of Thought: Chain-of-Thought Reasoning in Diffusion Language Models | [
"Jiacheng Ye",
"Shansan Gong",
"Liheng Chen",
"Lin Zheng",
"Jiahui Gao",
"Han Shi",
"Chuan Wu",
"Xin Jiang",
"Zhenguo Li",
"Wei Bi",
"Lingpeng Kong"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=G0LfcMiRkc | @inproceedings{
wu2024linguistic,
title={Linguistic Collapse: Neural Collapse in (Large) Language Models},
author={Robert Wu and Vardan Papyan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=G0LfcMiRkc}
} | Neural collapse ($\mathcal{NC}$) is a phenomenon observed in classification tasks where top-layer representations collapse into their class means, which become equinorm, equiangular and aligned with the classifiers.
These behaviors -- associated with generalization and robustness -- would manifest under specific conditions: models are trained towards zero loss, with noise-free labels belonging to balanced classes, which do not outnumber the model's hidden dimension.
Recent studies have explored $\mathcal{NC}$ in the absence of one or more of these conditions to extend and capitalize on the associated benefits of ideal geometries.
Language modeling presents a curious frontier, as \textit{training by token prediction} constitutes a classification task where none of the conditions exist: the vocabulary is imbalanced and exceeds the embedding dimension; different tokens might correspond to similar contextual embeddings; and large language models (LLMs) in particular are typically only trained for a few epochs.
This paper empirically investigates the impact of scaling the architectures and training of causal language models (CLMs) on their progression towards $\mathcal{NC}$.
We find that $\mathcal{NC}$ properties that develop with scale (and regularization) are linked to generalization.
Moreover, there is evidence of some relationship between $\mathcal{NC}$ and generalization independent of scale.
Our work thereby underscores the generality of $\mathcal{NC}$ as it extends to the novel and more challenging setting of language modeling.
Downstream, we seek to inspire further research on the phenomenon to deepen our understanding of LLMs -- and neural networks at large -- and improve existing architectures based on $\mathcal{NC}$-related properties.
Our code is hosted on GitHub: [`https://github.com/rhubarbwu/linguistic-collapse`](https://github.com/rhubarbwu/linguistic-collapse). | Linguistic Collapse: Neural Collapse in (Large) Language Models | [
"Robert Wu",
"Vardan Papyan"
] | NeurIPS.cc/2024/Conference | 2405.17767 | [
"https://github.com/rhubarbwu/linguistic-collapse"
] | https://huggingface.co/papers/2405.17767 | 2 | 1 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=FzwAQJK4CG | @inproceedings{
zheng2024learning,
title={Learning Plaintext-Ciphertext Cryptographic Problems via {ANF}-based {SAT} Instance Representation},
author={Xinhao Zheng and Yang Li and Cunxin Fan and Huaijin Wu and Xinhao Song and Junchi Yan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FzwAQJK4CG}
} | Cryptographic problems, operating within binary variable spaces, can be routinely transformed into Boolean Satisfiability (SAT) problems regarding specific cryptographic conditions like plaintext-ciphertext matching. With the fast development of learning for discrete data, this SAT representation also facilitates the utilization of machine-learning approaches with the hope of automatically capturing patterns and strategies inherent in cryptographic structures in a data-driven manner. Existing neural SAT solvers consistently adopt conjunctive normal form (CNF) for instance representation, which in the cryptographic context can lead to scale explosion and a loss of high-level semantics. In particular, extensively used XOR operations in cryptographic problems can incur an exponential number of clauses. In this paper, we propose a graph structure based on Arithmetic Normal Form (ANF) to efficiently handle the XOR operation bottleneck. Additionally, we design an encoding method for AND operations in these ANF-based graphs, demonstrating improved efficiency over alternative general graph forms for SAT. We then propose CryptoANFNet, a graph learning approach that trains a classifier based on a message-passing scheme to predict plaintext-ciphertext satisfiability.
Using ANF-based SAT instances, CryptoANFNet demonstrates superior scalability and can naturally capture higher-order operational information. Empirically, CryptoANFNet achieves a 50x speedup over heuristic solvers and outperforms SOTA learning-based SAT solver NeuroSAT, with 96\% vs. 91\% accuracy on small-scale and 72\% vs. 55\% on large-scale datasets from real encryption algorithms. We also introduce a key-solving algorithm that simplifies ANF-based SAT instances from plaintext and ciphertext, enhancing key decryption accuracy from 76.5\% to 82\% and from 72\% to 75\% for datasets generated from two real encryption algorithms. | Learning Plaintext-Ciphertext Cryptographic Problems via ANF-based SAT Instance Representation | [
"Xinhao Zheng",
"Yang Li",
"Cunxin Fan",
"Huaijin Wu",
"Xinhao Song",
"Junchi Yan"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FwxOHl0BEl | @inproceedings{
dehmamy2024neural,
title={Neural Network Reparametrization for Accelerated Optimization in Molecular Simulations},
author={Nima Dehmamy and Csaba Both and Jeet Mohapatra and Subhro Das and Tommi Jaakkola},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FwxOHl0BEl}
} | We propose a novel approach to molecular simulations using neural network reparametrization, which offers a flexible alternative to traditional coarse-graining methods.
Unlike conventional techniques that strictly reduce degrees of freedom, our model allows the complexity of the system to be adjusted, sometimes increasing it to simplify the optimization process.
Our approach also maintains continuous access to fine-grained modes and eliminates the need for force-matching, enhancing both the efficiency and accuracy of energy minimization.
Importantly, our framework allows for the use of potentially arbitrary neural networks (e.g., Graph Neural Networks (GNN)) to perform the reparametrization, incorporating CG modes as needed.
In fact, in our experiments using very weak molecular forces (Lennard-Jones potential), the GNN-based model is the sole model to find the correct configuration.
Similarly, in protein-folding scenarios, our GNN-based CG method consistently outperforms traditional optimization methods.
It not only recovers the target structures more accurately but also achieves faster convergence to the deepest energy states.
This work demonstrates significant advancements in molecular simulations by optimizing energy minimization and convergence speeds, offering a new, efficient framework for simulating complex molecular systems. | Neural Network Reparametrization for Accelerated Optimization in Molecular Simulations | [
"Nima Dehmamy",
"Csaba Both",
"Jeet Mohapatra",
"Subhro Das",
"Tommi Jaakkola"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FwhM1Zpyft | @inproceedings{
zhou2024scalable,
title={Scalable Neural Network Verification with Branch-and-bound Inferred Cutting Planes},
author={Duo Zhou and Christopher Brix and Grani A. Hanasusanto and Huan Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FwhM1Zpyft}
} | Recently, cutting-plane methods such as GCP-CROWN have been explored to enhance neural network verifiers and made significant advancements. However, GCP-CROWN currently relies on ${\it generic}$ cutting planes ("cuts") generated from external mixed integer programming (MIP) solvers. Due to the poor scalability of MIP solvers, large neural networks cannot benefit from these cutting planes. In this paper, we exploit the structure of the neural network verification problem to generate efficient and scalable cutting planes ${\it specific}$ to this problem setting. We propose a novel approach, Branch-and-bound Inferred Cuts with COnstraint Strengthening (BICCOS), that leverages the logical relationships of neurons within verified subproblems in the branch-and-bound search tree, and we introduce cuts that preclude these relationships in other subproblems. We develop a mechanism that assigns influence scores to neurons in each path to allow the strengthening of these cuts. Furthermore, we design a multi-tree search technique to identify more cuts, effectively narrowing the search space and accelerating the BaB algorithm. Our results demonstrate that BICCOS can generate hundreds of useful cuts during the branch-and-bound process and consistently increase the number of verifiable instances compared to other state-of-the-art neural network verifiers on a wide range of benchmarks, including large networks that previous cutting plane methods could not scale to. | Scalable Neural Network Verification with Branch-and-bound Inferred Cutting Planes | [
"Duo Zhou",
"Christopher Brix",
"Grani A. Hanasusanto",
"Huan Zhang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FuTfZK7PK3 | @inproceedings{
li2024the,
title={The Power of Extrapolation in Federated Learning},
author={Hanmin Li and Kirill Acharya and Peter Richt{\'a}rik},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FuTfZK7PK3}
} | We propose and study several server-extrapolation strategies for enhancing the theoretical and empirical convergence properties of the popular federated learning optimizer FedProx [Li et al., 2020]. While it has long been known that some form of extrapolation can help in the practice of FL, only a handful of works provide any theoretical guarantees. The phenomenon seems elusive, and our current theoretical understanding remains severely incomplete. In our work, we focus on smooth convex or strongly convex problems in the interpolation regime. In particular, we propose Extrapolated FedProx (FedExProx), and study three extrapolation strategies: a constant strategy (depending on various smoothness parameters and the number of participating devices), and two smoothness-adaptive strategies; one based on the notion of gradient diversity (FedExProx-GraDS), and the other one based on the stochastic Polyak stepsize (FedExProx-StoPS). Our theory is corroborated with carefully constructed numerical experiments. | The Power of Extrapolation in Federated Learning | [
"Hanmin Li",
"Kirill Acharya",
"Peter Richtárik"
] | NeurIPS.cc/2024/Conference | 2405.13766 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FtzLbGoHW2 | @inproceedings{
ye2024improving,
title={Improving Gloss-free Sign Language Translation by Reducing Representation Density},
author={Jinhui Ye and Xing Wang and Wenxiang Jiao and Junwei Liang and Hui Xiong},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FtzLbGoHW2}
} | Gloss-free sign language translation (SLT) aims to develop well-performing SLT systems with no requirement for the costly gloss annotations, but currently still lags behind gloss-based approaches significantly. In this paper, we identify **a representation density problem** that could be a bottleneck in restricting the performance of gloss-free SLT. Specifically, the representation density problem describes that the visual representations of semantically distinct sign gestures tend to be closely packed together in feature space, which makes gloss-free methods struggle with distinguishing different sign gestures and suffer from a sharp performance drop. To address the representation density problem, we introduce a simple but effective contrastive learning strategy, namely SignCL, which encourages gloss-free models to learn more discriminative feature representation in a self-supervised manner. Our experiments demonstrate that the proposed SignCL can significantly reduce the representation density and improve performance across various translation frameworks. Specifically, SignCL achieves a significant improvement in BLEU score for the Sign Language Transformer and GFSLT-VLP on the CSL-Daily dataset by 39\% and 46\%, respectively, without any increase of model parameters. Compared to Sign2GPT, a state-of-the-art method based on large-scale pre-trained vision and language models, SignCL achieves better performance with only 35\% of its parameters. We will release our code and model to facilitate further research. | Improving Gloss-free Sign Language Translation by Reducing Representation Density | [
"Jinhui Ye",
"Xing Wang",
"Wenxiang Jiao",
"Junwei Liang",
"Hui Xiong"
] | NeurIPS.cc/2024/Conference | 2405.14312 | [
"https://github.com/jinhuiye/signcl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FsdB3I9Y24 | @inproceedings{
christopher2024constrained,
title={Constrained Synthesis with Projected Diffusion Models},
author={Jacob K Christopher and Stephen Baek and Ferdinando Fioretto},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FsdB3I9Y24}
} | This paper introduces an approach to endow generative diffusion processes with the ability to satisfy and certify compliance with constraints and physical principles. The proposed method recasts the traditional sampling process of generative diffusion models as a constrained optimization problem, steering the generated data distribution to remain within a specified region to ensure adherence to the given constraints.
These capabilities are validated on applications featuring both convex and challenging non-convex constraints as well as ordinary differential equations, in domains spanning synthesizing new materials with precise morphometric properties, generating physics-informed motion, optimizing paths in planning scenarios, and human motion synthesis. | Constrained Synthesis with Projected Diffusion Models | [
"Jacob K Christopher",
"Stephen Baek",
"Ferdinando Fioretto"
] | NeurIPS.cc/2024/Conference | 2402.03559 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FsA0OSsdzJ | @inproceedings{
yu2024structured,
title={Structured Learning of Compositional Sequential Interventions},
author={Jialin Yu and Andreas Koukorinis and Nicol{\`o} Colombo and Yuchen Zhu and Ricardo Silva},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FsA0OSsdzJ}
} | We consider sequential treatment regimes where each unit is exposed to combinations of interventions over time. When interventions are described by qualitative labels, such as "close schools for a month due to a pandemic" or "promote this podcast to this user during this week", it is unclear which appropriate structural assumptions allow us to generalize behavioral predictions to previously unseen combinations of interventions. Standard black-box approaches mapping sequences of categorical variables to outputs are applicable, but they rely on poorly understood assumptions on how reliable generalization can be obtained, and may underperform under sparse sequences, temporal variability, and large action spaces. To approach that, we pose an explicit model for composition, that is, how the effect of sequential interventions can be isolated into modules, clarifying which data conditions allow for the identification of their combined effect at different units and time steps. We show the identification properties of our compositional model, inspired by advances in causal matrix factorization methods. Our focus is on predictive models for novel compositions of interventions instead of matrix completion tasks and causal effect estimation. We compare our approach to flexible but generic black-box models to illustrate how structure aids prediction in sparse data conditions. | Structured Learning of Compositional Sequential Interventions | [
"Jialin Yu",
"Andreas Koukorinis",
"Nicolò Colombo",
"Yuchen Zhu",
"Ricardo Silva"
] | NeurIPS.cc/2024/Conference | 2406.05745 | [
"https://github.com/jialin-yu/csi-vae"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Fr9d1UMc37 | @inproceedings{
maini2024llm,
title={{LLM} Dataset Inference: Did you train on my dataset?},
author={Pratyush Maini and Hengrui Jia and Nicolas Papernot and Adam Dziedzic},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Fr9d1UMc37}
} | The proliferation of large language models (LLMs) in the real world has come with a rise in copyright cases against companies for training their models on unlicensed data from the internet. Recent works have presented methods to identify if individual text sequences were members of the model's training data, known as membership inference attacks (MIAs).
We demonstrate that the apparent success of these MIAs is confounded by selecting non-members (text sequences not used for training) belonging to a different distribution from the members (e.g., temporally shifted recent Wikipedia articles compared with ones used to train the model). This distribution shift makes membership inference appear successful.
However, most MIA methods perform no better than random guessing when discriminating between members and non-members from the same distribution (e.g., in this case, the same period of time).
Even when MIAs work, we find that different MIAs succeed at inferring membership of samples from different distributions.
Instead, we propose a new dataset inference method to accurately identify the datasets used to train large language models. This paradigm sits realistically in the modern-day copyright landscape, where authors claim that an LLM is trained over multiple documents (such as a book) written by them, rather than one particular paragraph.
While dataset inference shares many of the challenges of membership inference, we solve it by selectively combining the MIAs that provide positive signal for a given distribution, and aggregating them to perform a statistical test on a given dataset. Our approach successfully distinguishes the train and test sets of different subsets of the Pile with statistically significant p-values $< 0.1$, without any false positives. | LLM Dataset Inference: Did you train on my dataset? | [
"Pratyush Maini",
"Hengrui Jia",
"Nicolas Papernot",
"Adam Dziedzic"
] | NeurIPS.cc/2024/Conference | 2406.06443 | [
"https://github.com/pratyushmaini/llm_dataset_inference"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FqWyzyErVT | @inproceedings{
wu2024federated,
title={Federated Transformer: Multi-Party Vertical Federated Learning on Practical Fuzzily Linked Data},
author={Zhaomin Wu and Junyi Hou and Yiqun Diao and Bingsheng He},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FqWyzyErVT}
} | Federated Learning (FL) is an evolving paradigm that enables multiple parties to collaboratively train models without sharing raw data. Among its variants, Vertical Federated Learning (VFL) is particularly relevant in real-world, cross-organizational collaborations, where distinct features of a shared instance group are contributed by different parties. In these scenarios, parties are often linked using fuzzy identifiers, leading to a common practice termed as _multi-party fuzzy VFL_. Existing models generally address either multi-party VFL or fuzzy VFL between two parties. Extending these models to practical multi-party fuzzy VFL typically results in significant performance degradation and increased costs for maintaining privacy. To overcome these limitations, we introduce the _Federated Transformer (FeT)_, a novel framework that supports multi-party VFL with fuzzy identifiers. FeT innovatively encodes these identifiers into data representations and employs a transformer architecture distributed across different parties, incorporating three new techniques to enhance performance. Furthermore, we have developed a multi-party privacy framework for VFL that integrates differential privacy with secure multi-party computation, effectively protecting local representations while minimizing associated utility costs. Our experiments demonstrate that the FeT surpasses the baseline models by up to 46\% in terms of accuracy when scaled to 50 parties. Additionally, in two-party fuzzy VFL settings, FeT also shows improved performance and privacy over cutting-edge VFL models. | Federated Transformer: Multi-Party Vertical Federated Learning on Practical Fuzzily Linked Data | [
"Zhaomin Wu",
"Junyi Hou",
"Yiqun Diao",
"Bingsheng He"
] | NeurIPS.cc/2024/Conference | 2410.17986 | [
"https://github.com/xtra-computing/fet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Fp3JVz5XE7 | @inproceedings{
paranjape2024federated,
title={Federated Black-Box Adaptation for Semantic Segmentation},
author={Jay Nitin Paranjape and Shameema Sikder and S. Swaroop Vedula and Vishal M. Patel},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Fp3JVz5XE7}
} | Federated Learning (FL) is a form of distributed learning that allows multiple institutions or clients to collaboratively learn a global model to solve a task. This allows the model to utilize the information from every institute while preserving data privacy. However, recent studies show that the promise of protecting the privacy of data is not upheld by existing methods and that it is possible to recreate the training data from the different institutions. This is done by utilizing gradients transferred between the clients and the global server during training or by knowing the model architecture at the client end. In this paper, we propose a federated learning framework for semantic segmentation without knowing the model architecture nor transferring gradients between the client and the server, thus enabling better privacy preservation. We propose \textit{BlackFed} - a black-box adaptation of neural networks that utilizes zero order optimization (ZOO) to update the client model weights and first order optimization (FOO) to update the server weights. We evaluate our approach on several computer vision and medical imaging datasets to demonstrate its effectiveness. To the best of our knowledge, this work is one of the first works in employing federated learning for segmentation, devoid of gradients or model information exchange. Code: https://github.com/JayParanjape/blackfed/tree/master | Federated Black-Box Adaptation for Semantic Segmentation | [
"Jay Nitin Paranjape",
"Shameema Sikder",
"S. Swaroop Vedula",
"Vishal M. Patel"
] | NeurIPS.cc/2024/Conference | 2410.24181 | [
"https://github.com/JayParanjape/blackfed"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FoGwiFXzuN | @inproceedings{
abbe2024how,
title={How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad},
author={Emmanuel Abbe and Samy Bengio and Aryo Lotfi and Colin Sandon and Omid Saremi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FoGwiFXzuN}
} | Can Transformers predict new syllogisms by composing established ones? More generally, what type of targets can be learned by such models from scratch? Recent works show that Transformers can be Turing-complete in terms of expressivity, but this does not address the learnability objective. This paper puts forward the notion of 'globality degree' of a target distribution to capture when weak learning is efficiently achievable by regular Transformers. This measure shows a contrast with the expressivity results of Transformers captured by $TC^0/TC^1$ classes (further studied here), since the globality relates to correlations with the more limited $NC^0$ class. We show here experimentally and theoretically under additional assumptions that distributions with high globality cannot be learned efficiently. In particular, syllogisms cannot be composed on long chains. Further, we develop scratchpad techniques and show that: (i) agnostic scratchpads cannot break the globality barrier, (ii) educated scratchpads can break the globality with intermediate steps, although not all such scratchpads can generalize out-of-distribution (OOD), (iii) a notion of 'inductive scratchpad', that composes the prior information more efficiently, can both break the globality barrier and improve the OOD generalization. In particular, some of our inductive scratchpads can achieve length generalizations of up to $6\times$ for some arithmetic tasks depending on the input formatting. | How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad | [
"Emmanuel Abbe",
"Samy Bengio",
"Aryo Lotfi",
"Colin Sandon",
"Omid Saremi"
] | NeurIPS.cc/2024/Conference | 2406.06467 | [
"https://github.com/aryol/inductive-scratchpad"
] | https://huggingface.co/papers/2406.06467 | 0 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=FmNoFIImZG | @inproceedings{
margeloiu2024tabebm,
title={Tab{EBM}: A Tabular Data Augmentation Method with Distinct Class-Specific Energy-Based Models},
author={Andrei Margeloiu and Xiangjian Jiang and Nikola Simidjievski and Mateja Jamnik},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FmNoFIImZG}
} | Data collection is often difficult in critical fields such as medicine, physics, and chemistry, yielding typically only small tabular datasets. However, classification methods tend to struggle with these small datasets, leading to poor predictive performance. Increasing the training set with additional synthetic data, similar to data augmentation in images, is commonly believed to improve downstream tabular classification performance. However, current tabular generative methods that learn either the joint distribution $ p(\mathbf{x}, y) $ or the class-conditional distribution $ p(\mathbf{x} \mid y) $ often overfit on small datasets, resulting in poor-quality synthetic data, usually worsening classification performance compared to using real data alone. To solve these challenges, we introduce TabEBM, a novel class-conditional generative method using Energy-Based Models (EBMs). Unlike existing tabular methods that use a shared model to approximate all class-conditional densities, our key innovation is to create distinct EBM generative models for each class, each modelling its class-specific data distribution individually. This approach creates robust energy landscapes, even in ambiguous class distributions. Our experiments show that TabEBM generates synthetic data with higher quality and better statistical fidelity than existing methods. When used for data augmentation, our synthetic data consistently leads to improved classification performance across diverse datasets of various sizes, especially small ones. Code is available at https://github.com/andreimargeloiu/TabEBM. | TabEBM: A Tabular Data Augmentation Method with Distinct Class-Specific Energy-Based Models | [
"Andrei Margeloiu",
"Xiangjian Jiang",
"Nikola Simidjievski",
"Mateja Jamnik"
] | NeurIPS.cc/2024/Conference | 2409.16118 | [
"https://github.com/andreimargeloiu/TabEBM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FlcdW7NPRY | @inproceedings{
halawi2024approaching,
title={Approaching Human-Level Forecasting with Language Models},
author={Danny Halawi and Fred Zhang and Chen Yueh-Han and Jacob Steinhardt},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FlcdW7NPRY}
} | Forecasting future events is important for policy and decision making. In this work, we study whether language models (LMs) can forecast at the level of competitive human forecasters. Towards this goal, we develop a retrieval-augmented LM system designed to automatically search for relevant information, generate forecasts, and aggregate predictions. To facilitate our study, we collect a large dataset of questions from competitive forecasting platforms. Under a test set published after the knowledge cut-offs of our LMs, we evaluate the end-to-end performance of our system against the aggregates of human forecasts. On average, the system nears the crowd aggregate of competitive forecasters and, in a certain relaxed setting, surpasses it. Our work suggests that using LMs to forecast the future could provide accurate predictions at scale and help to inform institutional decision making. | Approaching Human-Level Forecasting with Language Models | [
"Danny Halawi",
"Fred Zhang",
"Chen Yueh-Han",
"Jacob Steinhardt"
] | NeurIPS.cc/2024/Conference | 2402.18563 | [
""
] | https://huggingface.co/papers/2402.18563 | 1 | 1 | 0 | 4 | [] | [
"YuehHanChen/forecasting_raw",
"YuehHanChen/forecasting"
] | [] | [] | [
"YuehHanChen/forecasting_raw",
"YuehHanChen/forecasting"
] | [] | 1 | poster |
null | https://openreview.net/forum?id=FjssnGuHih | @inproceedings{
li2024uniar,
title={Uni{AR}: A Unified model for predicting human Attention and Responses on visual content},
author={Peizhao Li and Junfeng He and Gang Li and Rachit Bhargava and Shaolei Shen and NACHIAPPAN VALLIAPPAN and Youwei Liang and Hongxiang Gu and Venky Ramachandran and Golnaz farhadi and Yang Li and Kai J Kohlhoff and Vidhya Navalpakkam},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FjssnGuHih}
} | Progress in human behavior modeling involves understanding both implicit, early-stage perceptual behavior, such as human attention, and explicit, later-stage behavior, such as subjective preferences or likes. Yet most prior research has focused on modeling implicit and explicit human behavior in isolation; and often limited to a specific type of visual content. We propose UniAR -- a unified model of human attention and preference behavior across diverse visual content. UniAR leverages a multimodal transformer to predict subjective feedback, such as satisfaction or aesthetic quality, along with the underlying human attention or interaction heatmaps and viewing order. We train UniAR on diverse public datasets spanning natural images, webpages, and graphic designs, and achieve SOTA performance on multiple benchmarks across various image domains and behavior modeling tasks. Potential applications include providing instant feedback on the effectiveness of UIs/visual content, and enabling designers and content-creation models to optimize their creation for human-centric improvements. | UniAR: A Unified model for predicting human Attention and Responses on visual content | [
"Peizhao Li",
"Junfeng He",
"Gang Li",
"Rachit Bhargava",
"Shaolei Shen",
"NACHIAPPAN VALLIAPPAN",
"Youwei Liang",
"Hongxiang Gu",
"Venky Ramachandran",
"Golnaz farhadi",
"Yang Li",
"Kai J Kohlhoff",
"Vidhya Navalpakkam"
] | NeurIPS.cc/2024/Conference | 2312.10175 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FisyQfoJCm | @inproceedings{
yuan2024mogents,
title={MoGen{TS}: Motion Generation based on Spatial-Temporal Joint Modeling},
author={Weihao Yuan and Yisheng HE and Weichao Shen and Yuan Dong and Xiaodong Gu and Zilong Dong and Liefeng Bo and Qixing Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FisyQfoJCm}
} | Motion generation from discrete quantization offers many advantages over continuous regression, but at the cost of inevitable approximation errors. Previous methods usually quantize the entire body pose into one code, which not only faces the difficulty in encoding all joints within one vector but also loses the spatial relationship between different joints. Differently, in this work we quantize each individual joint into one vector, which i) simplifies the quantization process as the complexity associated with a single joint is markedly lower than that of the entire pose; ii) maintains a spatial-temporal structure that preserves both the spatial relationships among joints and the temporal movement patterns; iii) yields a 2D token map, which enables the application of various 2D operations widely used in 2D images. Grounded in the 2D motion quantization, we build a spatial-temporal modeling framework, where 2D joint VQVAE, temporal-spatial 2D masking technique, and spatial-temporal 2D attention are proposed to take advantage of spatial-temporal signals among the 2D tokens. Extensive experiments demonstrate that our method significantly outperforms previous methods across different datasets, with a $26.6\%$ decrease of FID on HumanML3D and a $29.9\%$ decrease on KIT-ML. | MoGenTS: Motion Generation based on Spatial-Temporal Joint Modeling | [
"Weihao Yuan",
"Yisheng HE",
"Weichao Shen",
"Yuan Dong",
"Xiaodong Gu",
"Zilong Dong",
"Liefeng Bo",
"Qixing Huang"
] | NeurIPS.cc/2024/Conference | 2409.17686 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Ffb30OVVCa | @inproceedings{
tragakis2024is,
title={Is One {GPU} Enough? Pushing Image Generation at Higher-Resolutions with Foundation Models.},
author={Athanasios Tragakis and Marco Aversa and Chaitanya Kaul and Roderick Murray-Smith and Daniele Faccio},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Ffb30OVVCa}
} | In this work, we introduce Pixelsmith, a zero-shot text-to-image generative framework to sample images at higher resolutions with a single GPU. We are the first to show that it is possible to scale the output of a pre-trained diffusion model by a factor of 1000, opening the road to gigapixel image generation at no extra cost. Our cascading method uses the image generated at the lowest resolution as baseline to sample at higher resolutions. For the guidance, we introduce the Slider, a mechanism that fuses the overall structure contained in the first-generated image with enhanced fine details. At each inference step, we denoise patches rather than the entire latent space, minimizing memory demands so that a single GPU can handle the process, regardless of the image's resolution. Our experimental results show that this method not only achieves higher quality and diversity compared to existing techniques but also reduces sampling time and ablation artifacts. | Is One GPU Enough? Pushing Image Generation at Higher-Resolutions with Foundation Models. | [
"Athanasios Tragakis",
"Marco Aversa",
"Chaitanya Kaul",
"Roderick Murray-Smith",
"Daniele Faccio"
] | NeurIPS.cc/2024/Conference | [
"https://github.com/thanos-db/pixelsmith"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FfFcDNDNol | @inproceedings{
chen2024when,
title={When {LLM} Meets {DRL}: Advancing Jailbreaking Efficiency via {DRL}-guided Search},
author={Xuan Chen and Yuzhou Nie and Wenbo Guo and Xiangyu Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FfFcDNDNol}
} | Recent studies developed jailbreaking attacks, which construct jailbreaking prompts to "fool" LLMs into responding to harmful questions.
Early-stage jailbreaking attacks require access to model internals or significant human efforts.
More advanced attacks utilize genetic algorithms for automatic and black-box attacks.
However, the random nature of genetic algorithms significantly limits the effectiveness of these attacks.
In this paper, we propose RLbreaker, a black-box jailbreaking attack driven by deep reinforcement learning (DRL).
We model jailbreaking as a search problem and design an RL agent to guide the search, which is more effective and has less randomness than stochastic search, such as genetic algorithms.
Specifically, we design a customized DRL system for the jailbreaking problem, including a novel reward function and a customized proximal policy optimization (PPO) algorithm.
Through extensive experiments, we demonstrate that RLbreaker is much more effective than existing jailbreaking attacks against six state-of-the-art (SOTA) LLMs.
We also show that RLbreaker is robust against three SOTA defenses and its trained agents can transfer across different LLMs.
We further validate the key design choices of RLbreaker via a comprehensive ablation study. | When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search | [
"Xuan Chen",
"Yuzhou Nie",
"Wenbo Guo",
"Xiangyu Zhang"
] | NeurIPS.cc/2024/Conference | 2406.08705 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FeCWZviCeP | @inproceedings{
lin2024hierarchical,
title={Hierarchical Programmatic Option Framework},
author={Yu-An Lin and Chen-Tao Lee and Chih-Han Yang and Guan-Ting Liu and Shao-Hua Sun},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FeCWZviCeP}
} | Deep reinforcement learning aims to learn deep neural network policies to solve large-scale decision-making problems. However, approximating policies using deep neural networks makes it difficult to interpret the learned decision-making process. To address this issue, prior works (Trivedi et al., 2021; Liu et al., 2023; Carvalho et al., 2024) proposed to use human-readable programs as policies to increase the interpretability of the decision-making pipeline. Nevertheless, programmatic policies generated by these methods struggle to effectively solve long and repetitive RL tasks and cannot generalize to even longer horizons during testing. To solve these problems, we propose the Hierarchical Programmatic Option framework (HIPO), which aims to solve long and repetitive RL problems with human-readable programs as options (low-level policies). Specifically, we propose a method that retrieves a set of effective, diverse, and compatible programs as options. Then, we learn a high-level policy to effectively reuse these programmatic options to solve reoccurring subtasks. Our proposed framework outperforms programmatic RL and deep RL baselines on various tasks. Ablation studies justify the effectiveness of our proposed search algorithm for retrieving a set of programmatic options. | Hierarchical Programmatic Option Framework | [
"Yu-An Lin",
"Chen-Tao Lee",
"Chih-Han Yang",
"Guan-Ting Liu",
"Shao-Hua Sun"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FcUyz33OED | @inproceedings{
wang2024targetguided,
title={Target-Guided Adversarial Point Cloud Transformer Towards Recognition Against Real-world Corruptions},
author={Jie Wang and Tingfa Xu and Lihe Ding and Jianan Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FcUyz33OED}
} | Achieving robust 3D perception in the face of corrupted data presents a challenging hurdle within 3D vision research. Contemporary transformer-based point cloud recognition models, albeit advanced, tend to overfit to specific patterns, consequently undermining their robustness against corruption. In this work, we introduce the Target-Guided Adversarial Point Cloud Transformer, termed APCT, a novel architecture designed to augment global structure capture through an adversarial feature erasing mechanism predicated on patterns discerned at each step during training. Specifically, APCT integrates an Adversarial Significance Identifier and a Target-guided Promptor. The Adversarial Significance Identifier is tasked with discerning token significance by integrating global contextual analysis, utilizing a structural salience index algorithm alongside an auxiliary supervisory mechanism. The Target-guided Promptor is responsible for accentuating the propensity for token discard within the self-attention mechanism, utilizing the value derived above, consequently directing the model's attention towards alternative segments in subsequent stages. By iteratively applying this strategy in multiple steps during training, the network progressively identifies and integrates an expanded array of object-associated patterns. Extensive experiments demonstrate that our method achieves state-of-the-art results on multiple corruption benchmarks. | Target-Guided Adversarial Point Cloud Transformer Towards Recognition Against Real-world Corruptions | [
"Jie Wang",
"Tingfa Xu",
"Lihe Ding",
"Jianan Li"
] | NeurIPS.cc/2024/Conference | 2411.00462 | [
"https://github.com/roywangj/apct"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FbuODM02ra | @inproceedings{
zhou2024can,
title={Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?},
author={Zhanke Zhou and Rong Tao and Jianing Zhu and Yiwen Luo and Zengmao Wang and Bo Han},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FbuODM02ra}
} | This paper investigates an under-explored challenge in large language models (LLMs): chain-of-thought prompting with noisy rationales, which include irrelevant or inaccurate reasoning thoughts within examples used for in-context learning. We construct the NoRa dataset, which is tailored to evaluate the robustness of reasoning in the presence of noisy rationales. Our findings on the NoRa dataset reveal a prevalent vulnerability to such noise among current LLMs, with existing robust methods like self-correction and self-consistency showing limited efficacy. Notably, compared to prompting with clean rationales, the base LLM drops by 1.4%-19.8% in accuracy with irrelevant thoughts and more drastically by 2.2%-40.4% with inaccurate thoughts.
Addressing this challenge necessitates external supervision that should be accessible in practice. Here, we propose the method of contrastive denoising with noisy chain-of-thought (CD-CoT). It enhances LLMs' denoising-reasoning capabilities by contrasting noisy rationales with only one clean rationale, which can be the minimal requirement for denoising-purpose prompting. This method follows a principle of exploration and exploitation: (1) rephrasing and selecting rationales in the input space to achieve explicit denoising and (2) exploring diverse reasoning paths and voting on answers in the output space. Empirically, CD-CoT demonstrates an average improvement of 17.8% in accuracy over the base model and shows significantly stronger denoising capabilities than baseline methods. The source code is publicly available at: https://github.com/tmlr-group/NoisyRationales. | Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales? | [
"Zhanke Zhou",
"Rong Tao",
"Jianing Zhu",
"Yiwen Luo",
"Zengmao Wang",
"Bo Han"
] | NeurIPS.cc/2024/Conference | 2410.23856 | [
"https://github.com/tmlr-group/noisyrationales"
] | https://huggingface.co/papers/2410.23856 | 0 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=FbXQrfkvtY | @inproceedings{
zhao2024probing,
title={Probing the Decision Boundaries of In-context Learning in Large Language Models},
author={Siyan Zhao and Tung Nguyen and Aditya Grover},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FbXQrfkvtY}
} | In-context learning is an emergent paradigm in large language models (LLMs) that enables them to generalize to new tasks and domains by simply prompting these models with a few exemplars without explicit parameter updates. Many attempts have been made to understand in-context learning in LLMs as a function of model scale, pretraining data, and other factors. In this work, we propose a new mechanism to probe and understand in-context learning from the lens of decision boundaries for in-context binary classification. Decision boundaries are straightforward to visualize and provide important information about the qualitative behavior of the inductive biases of standard classifiers. To our surprise, we find that the decision boundaries learned by current LLMs in simple binary classification tasks are often irregularly non-smooth, regardless of task linearity. This paper investigates the factors influencing these decision boundaries and explores methods to enhance their generalizability. We assess various approaches, including training-free and fine-tuning methods for LLMs, the impact of model architecture, and the effectiveness of active prompting techniques for smoothing decision boundaries in a data-efficient manner. Our findings provide a deeper understanding of in-context learning dynamics and offer practical improvements for enhancing robustness and generalizability of in-context learning. | Probing the Decision Boundaries of In-context Learning in Large Language Models | [
"Siyan Zhao",
"Tung Nguyen",
"Aditya Grover"
] | NeurIPS.cc/2024/Conference | 2406.11233 | [
"https://github.com/siyan-zhao/ICL_decision_boundary"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FbUSCraXEB | @inproceedings{
wang2024efficient,
title={Efficient Availability Attacks against Supervised and Contrastive Learning Simultaneously},
author={Yihan Wang and Yifan Zhu and Xiao-Shan Gao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FbUSCraXEB}
} | Availability attacks provide a tool to prevent the unauthorized use of private data and commercial datasets by generating imperceptible noise and crafting unlearnable examples before release.
Ideally, the obtained unlearnability can prevent algorithms from training usable models.
When supervised learning (SL) algorithms have failed, a malicious data collector may resort to contrastive learning (CL) algorithms to bypass the protection.
Through evaluation, we have found that most existing methods are unable to achieve both supervised and contrastive unlearnability, which poses risks to data protection by availability attacks.
Different from recent methods based on contrastive learning, we employ contrastive-like data augmentations in supervised learning frameworks to obtain attacks effective for both SL and CL.
Our proposed AUE and AAP attacks achieve state-of-the-art worst-case unlearnability across SL and CL algorithms with less computation consumption, showcasing prospects in real-world applications.
The code is available at https://github.com/EhanW/AUE-AAP. | Efficient Availability Attacks against Supervised and Contrastive Learning Simultaneously | [
"Yihan Wang",
"Yifan Zhu",
"Xiao-Shan Gao"
] | NeurIPS.cc/2024/Conference | 2402.04010 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Fanbig8DR9 | @inproceedings{
leroux2024euclidean,
title={Euclidean distance compression via deep random features},
author={Brett Leroux and Luis Rademacher},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Fanbig8DR9}
} | Motivated by the problem of compressing point sets into as few bits as possible while maintaining information about approximate distances between points, we construct random nonlinear maps $\varphi_\ell$ that compress point sets in the following way. For a point set $S$, the map $\varphi_\ell:\mathbb{R}^d \to N^{-1/2}\{-1,1\}^N$ has the property that storing $\varphi_\ell(S)$ (a sketch of $S$) allows one to report squared distances between points up to some multiplicative $(1\pm \epsilon)$ error with high probability. The maps $\varphi_\ell$ are the $\ell$-fold composition of a certain type of random feature mapping.
Compared to existing techniques, our maps offer several advantages. The standard method for compressing point sets by random mappings relies on the Johnson-Lindenstrauss lemma and involves compressing point sets with a random linear map. The main advantage of our maps $\varphi_\ell$ over random linear maps is that they map point sets directly into the discrete cube $N^{-1/2}\{-1,1\}^N$ and so there is no additional step needed to convert the sketch to bits. For some range of parameters, our maps $\varphi_\ell$ produce sketches using fewer bits of storage space. We validate the method with experiments, including an application to nearest neighbor search. | Euclidean distance compression via deep random features | [
"Brett Leroux",
"Luis Rademacher"
] | NeurIPS.cc/2024/Conference | 2403.01327 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FaNhyXY6Y1 | @inproceedings{
qiu2024artemis,
title={Artemis: Towards Referential Understanding in Complex Videos},
author={Jihao Qiu and Yuan Zhang and Xi Tang and Lingxi Xie and Tianren Ma and Pengyu Yan and David Doermann and Qixiang Ye and Yunjie Tian},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FaNhyXY6Y1}
} | Videos carry rich visual information including object description, action, interaction, etc., but existing multimodal large language models (MLLMs) fall short in referential understanding scenarios such as video-based referring. In this paper, we present Artemis, an MLLM that pushes video-based referential understanding to a finer level. Given a video, Artemis receives a natural-language question with a bounding box in any video frame and describes the referred target in the entire video. The key to achieving this goal lies in extracting compact, target-specific video features, where we set a solid baseline by tracking and selecting spatiotemporal features from the video. We train Artemis on the newly established ViderRef45K dataset with 45K video-QA pairs and design a computationally efficient, three-stage training procedure. Results are promising both quantitatively and qualitatively. Additionally, we show that Artemis can be integrated with video grounding and text summarization tools to understand more complex scenarios. Code and data are available at https://github.com/NeurIPS24Artemis/Artemis. | Artemis: Towards Referential Understanding in Complex Videos | [
"Jihao Qiu",
"Yuan Zhang",
"Xi Tang",
"Lingxi Xie",
"Tianren Ma",
"Pengyu Yan",
"David Doermann",
"Qixiang Ye",
"Yunjie Tian"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FZW7Ctyjm3 | @inproceedings{
deng2024enhancing,
title={Enhancing Large Vision Language Models with Self-Training on Image Comprehension},
author={Yihe Deng and Pan Lu and Fan Yin and Ziniu Hu and Sheng Shen and Quanquan Gu and James Zou and Kai-Wei Chang and Wei Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FZW7Ctyjm3}
} | Large vision language models (LVLMs) integrate large language models (LLMs) with pre-trained vision encoders, thereby activating the perception capability of the model to understand image inputs for different queries and conduct subsequent reasoning. Improving this capability requires high-quality vision-language data, which is costly and labor-intensive to acquire. Self-training approaches have been effective in single-modal settings to alleviate the need for labeled data by leveraging model's own generation. However, effective self-training remains a challenge regarding the unique visual perception and reasoning capability of LVLMs. To address this, we introduce **S**elf-**T**raining on **I**mage **C**omprehension (**STIC**), which emphasizes a self-training approach specifically for image comprehension. First, the model self-constructs a preference dataset for image descriptions using unlabeled images. Preferred responses are generated through a step-by-step prompt, while dis-preferred responses are generated from either corrupted images or misleading prompts. To further self-improve reasoning on the extracted visual information, we let the model reuse a small portion of existing instruction-tuning data and append its self-generated image descriptions to the prompts. We validate the effectiveness of STIC across seven different benchmarks, demonstrating substantial performance gains of 4.0% on average while using 70% less supervised fine-tuning data than the current method. Further studies dive into various components of STIC and highlight its potential to leverage vast quantities of unlabeled images for self-training. | Enhancing Large Vision Language Models with Self-Training on Image Comprehension | [
"Yihe Deng",
"Pan Lu",
"Fan Yin",
"Ziniu Hu",
"Sheng Shen",
"Quanquan Gu",
"James Zou",
"Kai-Wei Chang",
"Wei Wang"
] | NeurIPS.cc/2024/Conference | 2405.19716 | [
""
] | https://huggingface.co/papers/2405.19716 | 1 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=FZQYfmsmX9 | @inproceedings{
sharma2024a,
title={A Critical Evaluation of {AI} Feedback for Aligning Large Language Models},
author={Archit Sharma and Sedrick Keh and Eric Mitchell and Chelsea Finn and Kushal Arora and Thomas Kollar},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FZQYfmsmX9}
} | Learning from AI feedback (LAIF) is a popular paradigm for improving the instruction-following abilities of powerful pre-trained language models. LAIF first performs supervised fine-tuning (SFT) using demonstrations from a teacher model and then further fine-tunes the model with reinforcement learning (RL) or direct preference optimization (DPO), using feedback from a critic model. While recent popular open-source models have demonstrated substantial improvements in performance from the RL step, in this paper we question whether the complexity of this RL step is truly warranted for AI feedback. We show that the improvements of the RL step are virtually entirely due to the widespread practice of using a weaker teacher model (e.g. GPT-3.5) for SFT data collection than the critic (e.g., GPT-4) used for AI feedback generation. Specifically, we show that simple supervised fine-tuning with GPT-4 as the teacher outperforms existing LAIF pipelines. More generally, we find that the gains from LAIF vary substantially across base model families, test-time evaluation protocols, and critic models. Finally, we provide a mechanistic explanation for when SFT may outperform the full two-step LAIF pipeline as well as suggestions for making LAIF maximally useful in practice. | A Critical Evaluation of AI Feedback for Aligning Large Language Models | [
"Archit Sharma",
"Sedrick Keh",
"Eric Mitchell",
"Chelsea Finn",
"Kushal Arora",
"Thomas Kollar"
] | NeurIPS.cc/2024/Conference | 2402.12366 | [
"https://github.com/architsharma97/dpo-rlaif"
] | https://huggingface.co/papers/2402.12366 | 0 | 3 | 0 | 6 | [] | [
"argilla/OpenHermesPreferences"
] | [
"argilla/synthetic-data-generator",
"osanseviero/distilabel-dataset-generator"
] | [] | [
"argilla/OpenHermesPreferences"
] | [
"argilla/synthetic-data-generator",
"osanseviero/distilabel-dataset-generator"
] | 1 | poster |
null | https://openreview.net/forum?id=FZ45kf5pIA | @inproceedings{
golowich2024edit,
title={Edit Distance Robust Watermarks via Indexing Pseudorandom Codes},
author={Noah Golowich and Ankur Moitra},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FZ45kf5pIA}
} | Motivated by the problem of detecting AI-generated text, we consider the problem of watermarking the output of language models with provable guarantees. We aim for watermarks which satisfy: (a) undetectability, a cryptographic notion introduced by Christ, Gunn, & Zamir (2023) which stipulates that it is computationally hard to distinguish watermarked language model outputs from the model's actual output distribution; and (b) robustness to channels which introduce a constant fraction of adversarial insertions, substitutions, and deletions to the watermarked text. Earlier schemes could only handle stochastic substitutions and deletions, and thus we are aiming for a more natural and appealing robustness guarantee that holds with respect to edit distance.
Our main result is a watermarking scheme which achieves both (a) and (b) when the alphabet size for the language model is allowed to grow as a polynomial in the security parameter. To derive such a scheme, we follow an approach introduced by Christ & Gunn (2024), which proceeds via first constructing pseudorandom codes satisfying undetectability and robustness properties analogous to those above; our codes have the additional benefit of relying on weaker computational assumptions than used in previous work. Then we show that there is a generic transformation from such codes over large alphabets to watermarking schemes for arbitrary language models. | Edit Distance Robust Watermarks via Indexing Pseudorandom Codes | [
"Noah Golowich",
"Ankur Moitra"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FYm8coxdiR | @inproceedings{
wang2024clip,
title={{CLIP} in Mirror: Disentangling text from visual images through reflection},
author={Tiancheng Wang and Yuguang Yang and Linlin Yang and Shaohui Lin and Juan Zhang and Guodong Guo and Baochang Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FYm8coxdiR}
} | The CLIP network excels in various tasks, but struggles with text-visual images i.e., images that contain both text and visual objects; it risks confusing textual and visual representations. To address this issue, we propose MirrorCLIP, a zero-shot framework, which disentangles the image features of CLIP by exploiting the difference in the mirror effect between visual objects and text in the images. Specifically, MirrorCLIP takes both original and flipped images as inputs, comparing their features dimension-wise in the latent space to generate disentangling masks. With disentangling masks, we further design filters to separate textual and visual factors more precisely, and then get disentangled representations. Qualitative experiments using stable diffusion models and class activation mapping (CAM) validate the effectiveness of our disentanglement. Moreover, our proposed MirrorCLIP reduces confusion when encountering text-visual images and achieves a substantial improvement on typographic defense, further demonstrating its superior ability of disentanglement. Our code is available at https://github.com/tcwangbuaa/MirrorCLIP | CLIP in Mirror: Disentangling text from visual images through reflection | [
"Tiancheng Wang",
"Yuguang Yang",
"Linlin Yang",
"Shaohui Lin",
"Juan Zhang",
"Guodong Guo",
"Baochang Zhang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FYLcH4HAZr | @inproceedings{
sun2024streamflow,
title={StreamFlow: Streamlined Multi-Frame Optical Flow Estimation for Video Sequences},
author={Shangkun Sun and Jiaming Liu and Huaxia Li and Guoqing Liu and Thomas H. Li and Wei Gao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FYLcH4HAZr}
} | Prior multi-frame optical flow methods typically estimate flow repeatedly in a pair-wise manner, leading to significant computational redundancy. To mitigate this, we implement a Streamlined In-batch Multi-frame (SIM) pipeline, specifically tailored to video inputs to minimize redundant calculations. It enables the simultaneous prediction of successive unidirectional flows in a single forward pass, boosting processing speed by 44.43% and reaching efficiencies on par with two-frame networks. Moreover, we investigate various spatiotemporal modeling methods for optical flow estimation within this pipeline. Notably, we propose a simple yet highly effective parameter-efficient Integrative spatiotemporal Coherence (ISC) modeling method, alongside a lightweight Global Temporal Regressor (GTR) to harness temporal cues. The proposed ISC and GTR bring powerful spatiotemporal modeling capabilities and significantly enhance accuracy, including in occluded areas, while adding modest computations to the SIM pipeline. Compared to the baseline, our approach, StreamFlow, achieves performance enhancements of 15.45% and 11.37% on the Sintel clean and final test sets respectively, with gains of 15.53% and 10.77% on occluded regions and only a 1.11% rise in latency. Furthermore, StreamFlow exhibits state-of-the-art cross-dataset testing results on Sintel and KITTI, demonstrating its robust cross-domain generalization capabilities. The code is available [here](https://github.com/littlespray/StreamFlow). | StreamFlow: Streamlined Multi-Frame Optical Flow Estimation for Video Sequences | [
"Shangkun Sun",
"Jiaming Liu",
"Huaxia Li",
"Guoqing Liu",
"Thomas H. Li",
"Wei Gao"
] | NeurIPS.cc/2024/Conference | 2311.17099 | [
"https://github.com/littlespray/streamflow"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FY6vPtITtE | @inproceedings{
bonfanti2024the,
title={The Challenges of the Nonlinear Regime for Physics-Informed Neural Networks},
author={Andrea Bonfanti and Giuseppe Bruno and Cristina Cipriani},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FY6vPtITtE}
} | The Neural Tangent Kernel (NTK) viewpoint is widely employed to analyze the training dynamics of overparameterized Physics-Informed Neural Networks (PINNs). However, unlike the case of linear Partial Differential Equations (PDEs), we show how the NTK perspective falls short in the nonlinear scenario. Specifically, we establish that the NTK yields a random matrix at initialization that is not constant during training, contrary to conventional belief. Another significant difference from the linear regime is that, even in the idealistic infinite-width limit, the Hessian does not vanish and hence it cannot be disregarded during training. This motivates the adoption of second-order optimization methods. We explore the convergence guarantees of such methods in both linear and nonlinear cases, addressing challenges such as spectral bias and slow convergence. Every theoretical result is supported by numerical examples with both linear and nonlinear PDEs, and we highlight the benefits of second-order methods in benchmark test cases. | The Challenges of the Nonlinear Regime for Physics-Informed Neural Networks | [
"Andrea Bonfanti",
"Giuseppe Bruno",
"Cristina Cipriani"
] | NeurIPS.cc/2024/Conference | 2402.03864 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FXdMgfCDer | @inproceedings{
niu2024replayandforgetfree,
title={Replay-and-Forget-Free Graph Class-Incremental Learning: A Task Profiling and Prompting Approach},
author={Chaoxi Niu and Guansong Pang and Ling Chen and Bing Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FXdMgfCDer}
} | Class-incremental learning (CIL) aims to continually learn a sequence of tasks, with each task consisting of a set of unique classes. Graph CIL (GCIL) follows the same setting but needs to deal with graph tasks (e.g., node classification in a graph). The key characteristic of CIL lies in the absence of task identifiers (IDs) during inference, which causes a significant challenge in separating classes from different tasks (i.e., inter-task class separation). Being able to accurately predict the task IDs can help address this issue, but it is a challenging problem. In this paper, we show theoretically that accurate task ID prediction on graph data can be achieved by a Laplacian smoothing-based graph task profiling approach, in which each graph task is modeled by a task prototype based on Laplacian smoothing over the graph. It guarantees that the task prototypes of the same graph task are nearly the same with a large smoothing step, while those of different tasks are distinct due to differences in graph structure and node attributes. Further, to avoid the catastrophic forgetting of the knowledge learned in previous graph tasks, we propose a novel graph prompting approach for GCIL which learns a small discriminative graph prompt for each task, essentially resulting in a separate classification model for each task. The prompt learning requires the training of a single graph neural network (GNN) only once on the first task, and no data replay is required thereafter, thereby obtaining a GCIL model being both replay-free and forget-free. Extensive experiments on four GCIL benchmarks show that i) our task prototype-based method can achieve 100% task ID prediction accuracy on all four datasets, ii) our GCIL model significantly outperforms state-of-the-art competing methods by at least 18% in average CIL accuracy, and iii) our model is fully free of forgetting on the four datasets. | Replay-and-Forget-Free Graph Class-Incremental Learning: A Task Profiling and Prompting Approach | [
"Chaoxi Niu",
"Guansong Pang",
"Ling Chen",
"Bing Liu"
] | NeurIPS.cc/2024/Conference | 2410.10341 | [
"https://github.com/mala-lab/tpp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FXJDcriMYH | @inproceedings{
du2024stacking,
title={Stacking Your Transformers: A Closer Look at Model Growth for Efficient {LLM} Pre-Training},
author={Wenyu Du and Tongxu Luo and Zihan Qiu and Zeyu Huang and Yikang Shen and Reynold Cheng and Yike Guo and Jie Fu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FXJDcriMYH}
} | LLMs are computationally expensive to pre-train due to their large scale.
Model growth emerges as a promising approach by leveraging smaller models to accelerate the training of larger ones.
However, the viability of these model growth methods in efficient LLM pre-training remains underexplored.
This work identifies three critical $\underline{\textit{O}}$bstacles: ($\textit{O}$1) lack of comprehensive evaluation, ($\textit{O}$2) untested viability for scaling, and ($\textit{O}$3) lack of empirical guidelines.
To tackle $\textit{O}$1, we summarize existing approaches into four atomic growth operators and systematically evaluate them in a standardized LLM pre-training setting.
Our findings reveal that a depthwise stacking operator, called $G_{\text{stack}}$, exhibits remarkable acceleration in training, leading to decreased loss and improved overall performance on eight standard NLP benchmarks compared to strong baselines.
Motivated by these promising results, we conduct extensive experiments to delve deeper into $G_{\text{stack}}$ to address $\textit{O}$2 and $\textit{O}$3.
For $\textit{O}$2 (untested scalability), our study shows that $G_{\text{stack}}$ is scalable and consistently performs well, with experiments up to 7B LLMs after growth and pre-training LLMs with 750B tokens.
For example, compared to a conventionally trained 7B model using 300B tokens, our $G_{\text{stack}}$ model converges to the same loss with 194B tokens, resulting in a 54.6\% speedup.
We further address $\textit{O}$3 (lack of empirical guidelines) by formalizing guidelines to determine growth timing and growth factor for $G_{\text{stack}}$, making it practical in general LLM pre-training.
We also provide in-depth discussions and comprehensive ablation studies of $G_{\text{stack}}$.
Our code and pre-trained model are available at https://llm-stacking.github.io/. | Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training | [
"Wenyu Du",
"Tongxu Luo",
"Zihan Qiu",
"Zeyu Huang",
"Yikang Shen",
"Reynold Cheng",
"Yike Guo",
"Jie Fu"
] | NeurIPS.cc/2024/Conference | 2405.15319 | [
""
] | https://huggingface.co/papers/2405.15319 | 5 | 25 | 1 | 8 | [
"llm-stacking/StackLLM_410M_750BToken"
] | [] | [] | [
"llm-stacking/StackLLM_410M_750BToken"
] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=FVgCwcwpJw | @inproceedings{
zhong2024policy,
title={Policy Improvement using Language Feedback Models},
author={Victor Zhong and Dipendra Misra and Xingdi Yuan and Marc-Alexandre C{\^o}t{\'e}},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FVgCwcwpJw}
} | We introduce Language Feedback Models (LFMs) that identify desirable behaviour --- actions that help achieve tasks specified in the instruction --- for imitation learning in instruction following. To train LFMs, we obtain feedback from Large Language Models (LLMs) on visual trajectories verbalized to language descriptions. First, by using LFMs to identify desirable behaviour to imitate, we improve in task-completion rate over strong behavioural cloning baselines on three distinct language grounding environments (Touchdown, ScienceWorld, and ALFWorld). Second, LFMs outperform using LLMs as experts to directly predict actions, when controlling for the number of LLM output tokens. Third, LFMs generalize to unseen environments, improving task-completion rate by 3.5-12.0% through one round of adaptation. Finally, LFMs can be modified to provide human-interpretable feedback without performance loss, allowing human verification of desirable behaviour for imitation learning. | Policy Improvement using Language Feedback Models | [
"Victor Zhong",
"Dipendra Misra",
"Xingdi Yuan",
"Marc-Alexandre Côté"
] | NeurIPS.cc/2024/Conference | 2402.07876 | [
"https://github.com/vzhong/language_feedback_models"
] | https://huggingface.co/papers/2402.07876 | 2 | 5 | 1 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=FV4an2OuFM | @inproceedings{
baker2024conditioning,
title={Conditioning non-linear and infinite-dimensional diffusion processes},
author={Elizabeth Louise Baker and Gefan Yang and Michael Lind Severinsen and Christy Anna Hipsley and Stefan Sommer},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FV4an2OuFM}
} | Generative diffusion models and many stochastic models in science and engineering naturally live in infinite dimensions before discretisation. To incorporate observed data for statistical and learning tasks, one needs to condition on observations. While recent work has treated conditioning linear processes in infinite dimensions, conditioning non-linear processes in infinite dimensions has not been explored. This paper conditions function valued stochastic processes without prior discretisation. To do so, we use an infinite-dimensional version of Girsanov's theorem to condition a function-valued stochastic process, leading to a stochastic differential equation (SDE) for the conditioned process involving the score. We apply this technique to do time series analysis for shapes of organisms in evolutionary biology, where we discretise via the Fourier basis and then learn the coefficients of the score function with score matching methods. | Conditioning non-linear and infinite-dimensional diffusion processes | [
"Elizabeth Louise Baker",
"Gefan Yang",
"Michael Lind Severinsen",
"Christy Anna Hipsley",
"Stefan Sommer"
] | NeurIPS.cc/2024/Conference | 2402.01434 | [
"https://github.com/libbylbaker/sdebridge"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=FTpOwIaWUz | @inproceedings{
chan2024on,
title={On Affine Homotopy between Language Encoders},
author={Robin Chan and Reda Boumasmoud and Anej Svete and Yuxin Ren and Qipeng Guo and Zhijing Jin and Shauli Ravfogel and Mrinmaya Sachan and Bernhard Sch{\"o}lkopf and Mennatallah El-Assady and Ryan Cotterell},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FTpOwIaWUz}
} | Pre-trained language encoders---functions that represent text as vectors---are an integral component of many NLP tasks.
We tackle a natural question in language encoder analysis: What does it mean for two encoders to be similar?
We contend that a faithful measure of similarity needs to be \emph{intrinsic}, that is, task-independent, yet still be informative of \emph{extrinsic} similarity---the performance on downstream tasks.
It is common to consider two encoders similar if they are \emph{homotopic}, i.e., if they can be aligned through some transformation.
In this spirit, we study the properties of \emph{affine} alignment of language encoders and its implications on extrinsic similarity.
We find that while affine alignment is fundamentally an asymmetric notion of similarity, it is still informative of extrinsic similarity.
We confirm this on datasets of natural language representations.
Beyond providing useful bounds on extrinsic similarity, affine intrinsic similarity also allows us to begin uncovering the structure of the space of pre-trained encoders by defining an order over them. | On Affine Homotopy between Language Encoders | [
"Robin Chan",
"Reda Boumasmoud",
"Anej Svete",
"Yuxin Ren",
"Qipeng Guo",
"Zhijing Jin",
"Shauli Ravfogel",
"Mrinmaya Sachan",
"Bernhard Schölkopf",
"Mennatallah El-Assady",
"Ryan Cotterell"
] | NeurIPS.cc/2024/Conference | 2406.02329 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FTpKGuxEfy | @inproceedings{
chen2024vision,
title={Vision Foundation Model Enables Generalizable Object Pose Estimation},
author={Kai Chen and Yiyao Ma and Xingyu Lin and Stephen James and Jianshu Zhou and Yun-Hui Liu and Pieter Abbeel and Qi Dou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FTpKGuxEfy}
} | Object pose estimation plays a crucial role in robotic manipulation, however, its practical applicability still suffers from limited generalizability. This paper addresses the challenge of generalizable object pose estimation, particularly focusing on category-level object pose estimation for unseen object categories. Current methods either require impractical instance-level training or are confined to predefined categories, limiting their applicability. We propose VFM-6D, a novel framework that explores harnessing existing vision and language models, to elaborate object pose estimation into two stages: category-level object viewpoint estimation and object coordinate map estimation. Based on the two-stage framework, we introduce a 2D-to-3D feature lifting module and a shape-matching module, both of which leverage pre-trained vision foundation models to improve object representation and matching accuracy. VFM-6D is trained on cost-effective synthetic data and exhibits superior generalization capabilities. It can be applied to both instance-level unseen object pose estimation and category-level object pose estimation for novel categories. Evaluations on benchmark datasets demonstrate the effectiveness and versatility of VFM-6D in various real-world scenarios. | Vision Foundation Model Enables Generalizable Object Pose Estimation | [
"Kai Chen",
"Yiyao Ma",
"Xingyu Lin",
"Stephen James",
"Jianshu Zhou",
"Yun-Hui Liu",
"Pieter Abbeel",
"Qi Dou"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FTPDBQuT4G | @inproceedings{
sawarni2024generalized,
title={Generalized Linear Bandits with Limited Adaptivity},
author={Ayush Sawarni and Nirjhar Das and Siddharth Barman and Gaurav Sinha},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FTPDBQuT4G}
} | We study the generalized linear contextual bandit problem within the constraints of limited adaptivity. In this paper, we present two algorithms, B-GLinCB and RS-GLinCB, that address, respectively, two prevalent limited adaptivity settings. Given a budget $M$ on the number of policy updates, in the first setting, the algorithm needs to decide upfront $M$ rounds at which it will update its policy, while in the second setting it can adaptively perform $M$ policy updates during its course. For the first setting, we design an algorithm B-GLinCB, that incurs $\tilde{O}(\sqrt{T})$ regret when $M = \Omega( \log{\log T} )$ and the arm feature vectors are generated stochastically. For the second setting, we design an algorithm RS-GLinCB that updates its policy $\tilde{O}(\log^2 T)$ times and achieves a regret of $\tilde{O}(\sqrt{T})$ even when the arm feature vectors are adversarially generated. Notably, in these bounds, we manage to eliminate the dependence on a key instance dependent parameter $\kappa$, that captures non-linearity of the underlying reward model. Our novel approach for removing this dependence for generalized linear contextual bandits might be of independent interest. | Generalized Linear Bandits with Limited Adaptivity | [
"Ayush Sawarni",
"Nirjhar Das",
"Siddharth Barman",
"Gaurav Sinha"
] | NeurIPS.cc/2024/Conference | 2404.06831 | [
"https://github.com/nirjhar-das/glbandit_limited_adaptivity"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=FSgwgQXTxo | @inproceedings{
liu2024reasoning,
title={Reasoning Multi-Agent Behavioral Topology for Interactive Autonomous Driving},
author={Haochen Liu and Li Chen and Yu Qiao and Chen Lv and Hongyang Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FSgwgQXTxo}
} | Autonomous driving systems aim for safe and socially consistent driving through behavioral integration among interactive agents. However, challenges remain due to multi-agent scene uncertainty and heterogeneous interaction. Current dense and sparse behavioral representations struggle with inefficiency and inconsistency in multi-agent modeling, leading to instability of collective behavioral patterns when integrating prediction and planning (IPP). To address this, we initiate a topological formation that serves as a compliant behavioral foreground to guide downstream trajectory generations. Specifically, we introduce Behavioral Topology (BeTop), a pivotal topological formulation that explicitly represents the consensual behavioral pattern among multi-agent future. BeTop is derived from braid theory to distill compliant interactive topology from multi-agent future trajectories. A synergistic learning framework (BeTopNet) supervised by BeTop facilitates the consistency of behavior prediction and planning within the predicted topology priors. Through imitative contingency learning, BeTop also effectively manages behavioral uncertainty for prediction and planning. Extensive verification on large-scale real-world datasets, including nuPlan and WOMD, demonstrates that BeTop achieves state-of-the-art performance in both prediction and planning tasks. Further validations on the proposed interactive scenario benchmark showcase planning compliance in interactive cases. Code and model are available at https://github.com/OpenDriveLab/BeTop. | Reasoning Multi-Agent Behavioral Topology for Interactive Autonomous Driving | [
"Haochen Liu",
"Li Chen",
"Yu Qiao",
"Chen Lv",
"Hongyang Li"
] | NeurIPS.cc/2024/Conference | 2409.18031 | [
"https://github.com/OpenDriveLab/BeTop"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FOvZztnp1H | @inproceedings{
liu2024autotimes,
title={AutoTimes: Autoregressive Time Series Forecasters via Large Language Models},
author={Yong Liu and Guo Qin and Xiangdong Huang and Jianmin Wang and Mingsheng Long},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FOvZztnp1H}
} | Foundation models of time series have not been fully developed due to the limited availability of time series corpora and the underexploration of scalable pre-training. Based on the similar sequential formulation of time series and natural language, increasing research demonstrates the feasibility of leveraging large language models (LLM) for time series. Nevertheless, the inherent autoregressive property and decoder-only architecture of LLMs have not been fully considered, resulting in insufficient utilization of LLM abilities. To fully revitalize the general-purpose token transition and multi-step generation capability of large language models, we propose AutoTimes to repurpose LLMs as autoregressive time series forecasters, which projects time series into the embedding space of language tokens and autoregressively generates future predictions with arbitrary lengths. Compatible with any decoder-only LLMs, the consequent forecaster exhibits the flexibility of the lookback length and scalability with larger LLMs. Further, we formulate time series as prompts, extending the context for prediction beyond the lookback window, termed in-context forecasting. By introducing LLM-embedded textual timestamps, AutoTimes can utilize chronological information to align multivariate time series. Empirically, AutoTimes achieves state-of-the-art with 0.1% trainable parameters and over $5\times$ training/inference speedup compared to advanced LLM-based forecasters. Code is available at this repository: https://github.com/thuml/AutoTimes. | AutoTimes: Autoregressive Time Series Forecasters via Large Language Models | [
"Yong Liu",
"Guo Qin",
"Xiangdong Huang",
"Jianmin Wang",
"Mingsheng Long"
] | NeurIPS.cc/2024/Conference | 2402.02370 | [
"https://github.com/thuml/AutoTimes"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FOkKndty5B | @inproceedings{
nie2024slowfocus,
title={SlowFocus: Enhancing Fine-grained Temporal Understanding in Video {LLM}},
author={Ming Nie and Dan Ding and Chunwei Wang and Yuanfan Guo and Jianhua Han and Hang Xu and Li Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FOkKndty5B}
} | Large language models (LLMs) have demonstrated exceptional capabilities in text understanding, which has paved the way for their expansion into video LLMs (Vid-LLMs) to analyze video data. However, current Vid-LLMs struggle to simultaneously retain high-quality frame-level semantic information (i.e., a sufficient number of tokens per frame) and comprehensive video-level temporal information (i.e., an adequate number of sampled frames per video). This limitation hinders the advancement of Vid-LLMs towards fine-grained video understanding. To address this issue, we introduce the SlowFocus mechanism, which significantly enhances the equivalent sampling frequency without compromising the quality of frame-level visual tokens. SlowFocus begins by identifying the query-related temporal segment based on the posed question, then performs dense sampling on this segment to extract local high-frequency features. A multi-frequency mixing attention module is further leveraged to aggregate these local high-frequency details with global low-frequency contexts for enhanced temporal comprehension. Additionally, to tailor Vid-LLMs to this innovative mechanism, we introduce a set of training strategies aimed at bolstering both temporal grounding and detailed temporal reasoning capabilities. Furthermore, we establish FineAction-CGR, a benchmark specifically devised to assess the ability of Vid-LLMs to process fine-grained temporal understanding tasks. Comprehensive experiments demonstrate the superiority of our mechanism across both existing public video understanding benchmarks and our proposed FineAction-CGR. | SlowFocus: Enhancing Fine-grained Temporal Understanding in Video LLM | [
"Ming Nie",
"Dan Ding",
"Chunwei Wang",
"Yuanfan Guo",
"Jianhua Han",
"Hang Xu",
"Li Zhang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FOfU3qhcIG | @inproceedings{
feuer2024tunetables,
title={TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks},
author={Benjamin Feuer and Robin Tibor Schirrmeister and Valeriia Cherepanova and Chinmay Hegde and Frank Hutter and Micah Goldblum and Niv Cohen and Colin White},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FOfU3qhcIG}
} | While tabular classification has traditionally relied on from-scratch training, a recent breakthrough called prior-data fitted networks (PFNs) challenges this approach. Similar to large language models, PFNs make use of pretraining and in-context learning to achieve strong performance on new tasks in a single forward pass. However, current PFNs have limitations that prohibit their widespread adoption. Notably, TabPFN achieves very strong performance on small tabular datasets but is not designed to make predictions for datasets of size larger than 1000. In this work, we overcome these limitations and substantially improve the performance of PFNs via context optimization. We introduce TuneTables, a parameter-efficient fine-tuning strategy for PFNs that compresses large datasets into a smaller learned context. We conduct extensive experiments on nineteen algorithms over 98 datasets and find that TuneTables achieves the best performance on average, outperforming boosted trees such as CatBoost, while optimizing fewer than 5\% of TabPFN's parameters. Furthermore, we show that TuneTables can be used as an interpretability tool and can even be used to mitigate biases by optimizing a fairness objective. | TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks | [
"Benjamin Feuer",
"Robin Tibor Schirrmeister",
"Valeriia Cherepanova",
"Chinmay Hegde",
"Frank Hutter",
"Micah Goldblum",
"Niv Cohen",
"Colin White"
] | NeurIPS.cc/2024/Conference | 2402.11137 | [
"https://github.com/penfever/tabpfn-pt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FOTMgW8w5t | @inproceedings{
shi2024using,
title={Using Surrogates in Covariate-adjusted Response-adaptive Randomization Experiments with Delayed Outcomes},
author={Lei Shi and Waverly Wei and Jingshen Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FOTMgW8w5t}
} | Covariate-adjusted response-adaptive randomization (CARA) designs are gaining increasing attention. These designs combine the advantages of randomized experiments with the ability to adaptively revise treatment allocations based on data collected across multiple stages, enhancing estimation efficiency. Yet, CARA designs often assume that primary outcomes are immediately observable, which is not the case in many clinical scenarios where there is a delay in observing primary outcomes. This assumption can lead to significant missingness and inefficient estimation of treatment effects. To tackle this practical challenge, we propose a CARA experimental strategy integrating delayed primary outcomes with immediately observed surrogate outcomes. Surrogate outcomes are intermediate clinical outcomes that are predictive or correlated with the primary outcome of interest. Our design goal is to improve the estimation efficiency of the average treatment effect (ATE) of the primary outcome utilizing surrogate outcomes. From a methodological perspective, our approach offers two benefits: First, we accommodate arm and covariates-dependent delay mechanisms without imposing any parametric modeling assumptions on the distribution of outcomes. Second, when primary outcomes are not fully observed, surrogate outcomes can guide the adaptive treatment allocation rule. From a theoretical standpoint, we prove the semiparametric efficiency bound of estimating ATE under delayed primary outcomes while incorporating surrogate outcomes. We show that the ATE estimator under our proposed design strategy attains this semiparametric efficiency bound and achieves asymptotic normality. Through theoretical investigations and a synthetic HIV study, we show that our design is more efficient than the design without incorporating any surrogate information. | Using Surrogates in Covariate-adjusted Response-adaptive Randomization Experiments with Delayed Outcomes | [
"Lei Shi",
"Waverly Wei",
"Jingshen Wang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FNzpVTpNbN | @inproceedings{
sun2024diffusionfake,
title={DiffusionFake: Enhancing Generalization in Deepfake Detection via Guided Stable Diffusion},
author={Ke Sun and Shen Chen and Taiping Yao and Hong Liu and Xiaoshuai Sun and Shouhong Ding and Rongrong Ji},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FNzpVTpNbN}
} | The rapid progress of Deepfake technology has made face swapping highly realistic, raising concerns about the malicious use of fabricated facial content. Existing methods often struggle to generalize to unseen domains due to the diverse nature of facial manipulations. In this paper, we revisit the generation process and identify a universal principle: Deepfake images inherently contain information from both source and target identities, while genuine faces maintain a consistent identity. Building upon this insight, we introduce DiffusionFake, a novel plug-and-play framework that reverses the generative process of face forgeries to enhance the generalization of detection models. DiffusionFake achieves this by injecting the features extracted by the detection model into a frozen pre-trained Stable Diffusion model, compelling it to reconstruct the corresponding target and source images. This guided reconstruction process constrains the detection network to capture the source- and target-related features to facilitate the reconstruction, thereby learning rich and disentangled representations that are more resilient to unseen forgeries. Extensive experiments demonstrate that DiffusionFake significantly improves cross-domain generalization of various detector architectures without introducing additional parameters during inference. The code is available at https://github.com/skJack/DiffusionFake.git. | DiffusionFake: Enhancing Generalization in Deepfake Detection via Guided Stable Diffusion | [
"Ke Sun",
"Shen Chen",
"Taiping Yao",
"Hong Liu",
"Xiaoshuai Sun",
"Shouhong Ding",
"Rongrong Ji"
] | NeurIPS.cc/2024/Conference | 2410.04372 | [
"https://github.com/skjack/diffusionfake"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FNtsZLwkGr | @inproceedings{
hossain2024pruning,
title={Pruning neural network models for gene regulatory dynamics using data and domain knowledge},
author={Intekhab Hossain and Jonas Fischer and Rebekka Burkholz and John Quackenbush},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FNtsZLwkGr}
} | The practical utility of machine learning models in the sciences often hinges on their interpretability. It is common to assess a model's merit for scientific discovery, and thus novel insights, by how well it aligns with already available domain knowledge - a dimension that is currently largely disregarded in the comparison of neural network models. While pruning can simplify deep neural network architectures and excels in identifying sparse models, as we show in the context of gene regulatory network inference, state-of-the-art techniques struggle with biologically meaningful structure learning. To address this issue, we propose DASH, a generalizable framework that guides network pruning by using domain-specific structural information in model fitting and leads to sparser, better interpretable models that are more robust to noise. Using both synthetic data with ground truth information, as well as real-world gene expression data, we show that DASH, using knowledge about gene interaction partners within the putative regulatory network, outperforms general pruning methods by a large margin and yields deeper insights into the biological systems being studied. | Pruning neural network models for gene regulatory dynamics using data and domain knowledge | [
"Intekhab Hossain",
"Jonas Fischer",
"Rebekka Burkholz",
"John Quackenbush"
] | NeurIPS.cc/2024/Conference | 2403.04805 | [
"https://github.com/quackenbushlab/dash"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FNOBf6JM7r | @inproceedings{
wu2024stabilizing,
title={Stabilizing Linear Passive-Aggressive Online Learning with Weighted Reservoir Sampling},
author={Skyler Wu and Fred Lu and Edward Raff and James Holt},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FNOBf6JM7r}
} | Online learning methods, like the seminal Passive-Aggressive (PA) classifier, are still highly effective for high-dimensional streaming data, out-of-core processing, and other throughput-sensitive applications. Many such algorithms rely on fast adaptation to individual errors as a key to their convergence. While such algorithms enjoy low theoretical regret, in real-world deployment they can be sensitive to individual outliers that cause the algorithm to over-correct. When such outliers occur at the end of the data stream, this can cause the final solution to have unexpectedly low accuracy. We design a weighted reservoir sampling (WRS) approach to obtain a stable ensemble model from the sequence of solutions without requiring additional passes over the data, hold-out sets, or a growing amount of memory. Our key insight is that good solutions tend to be error-free for more iterations than bad solutions, and thus, the number of passive rounds provides an estimate of a solution's relative quality. Our reservoir thus contains $K$ previous intermediate weight vectors with high survival times. We demonstrate our WRS approach on the Passive-Aggressive Classifier (PAC) and First-Order Sparse Online Learning (FSOL), where our method consistently and significantly outperforms the unmodified approach. We show that the risk of the ensemble classifier is bounded with respect to the regret of the underlying online learning method. | Stabilizing Linear Passive-Aggressive Online Learning with Weighted Reservoir Sampling | [
"Skyler Wu",
"Fred Lu",
"Edward Raff",
"James Holt"
] | NeurIPS.cc/2024/Conference | 2410.23601 | [
"https://github.com/FutureComputing4AI/Weighted-Reservoir-Sampling-Augmented-Training"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FMrNus3d0n | @inproceedings{
yang2024guardti,
title={GuardT2I: Defending Text-to-Image Models from Adversarial Prompts},
author={Yijun Yang and Ruiyuan Gao and Xiao Yang and Jianyuan Zhong and Qiang Xu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FMrNus3d0n}
} | Recent advancements in Text-to-Image models have raised significant safety concerns about their potential misuse for generating inappropriate or Not-Safe-For-Work content, despite existing countermeasures such as Not-Safe-For-Work classifiers or model fine-tuning for inappropriate concept removal. Addressing this challenge, our study unveils GuardT2I, a novel moderation framework that adopts a generative approach to enhance Text-to-Image models’ robustness against adversarial prompts. Instead of making a binary classification, GuardT2I utilizes a large language model to conditionally transform text guidance embeddings within the Text-to-Image models into natural language for effective adversarial prompt detection, without compromising the models’ inherent performance. Our extensive experiments reveal that GuardT2I outperforms leading commercial solutions like OpenAI-Moderation and Microsoft Azure Moderator by a significant margin across diverse adversarial scenarios. Our framework is available at https://github.com/cure-lab/GuardT2I. | GuardT2I: Defending Text-to-Image Models from Adversarial Prompts | [
"Yijun Yang",
"Ruiyuan Gao",
"Xiao Yang",
"Jianyuan Zhong",
"Qiang Xu"
] | NeurIPS.cc/2024/Conference | 2403.01446 | [
"https://github.com/cure-lab/guardt2i"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FLNnlfBGMo | @inproceedings{
shi2024efficient,
title={Efficient Prompt Optimization Through the Lens of Best Arm Identification},
author={Chengshuai Shi and Kun Yang and Zihan Chen and Jundong Li and Jing Yang and Cong Shen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FLNnlfBGMo}
} | The remarkable instruction-following capability of large language models (LLMs) has sparked a growing interest in automatically finding good prompts, i.e., prompt optimization. Most existing works follow the scheme of selecting from a pre-generated pool of candidate prompts. However, these designs mainly focus on the generation strategy, while limited attention has been paid to the selection method. Especially, the cost incurred during the selection (e.g., accessing LLM and evaluating the responses) is rarely explicitly considered. To overcome this limitation, this work provides a principled framework, TRIPLE, to efficiently perform prompt selection under an explicit budget constraint. TRIPLE is built on a novel connection established between prompt optimization and fixed-budget best arm identification (BAI-FB) in multi-armed bandits (MAB); thus, it is capable of leveraging the rich toolbox from BAI-FB systematically and also incorporating unique characteristics of prompt optimization. Extensive experiments on multiple well-adopted tasks using various LLMs demonstrate the remarkable performance improvement of TRIPLE over baselines while satisfying the limited budget constraints. As an extension, variants of TRIPLE are proposed to efficiently select examples for few-shot prompts, also achieving superior empirical performance. | Efficient Prompt Optimization Through the Lens of Best Arm Identification | [
"Chengshuai Shi",
"Kun Yang",
"Zihan Chen",
"Jundong Li",
"Jing Yang",
"Cong Shen"
] | NeurIPS.cc/2024/Conference | 2402.09723 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FJlrSZBMCD | @inproceedings{
bick2024transformers,
title={Transformers to {SSM}s: Distilling Quadratic Knowledge to Subquadratic Models},
author={Aviv Bick and Kevin Li and Eric P. Xing and J Zico Kolter and Albert Gu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FJlrSZBMCD}
} | Transformer architectures have become a dominant paradigm for domains like language modeling but suffer in many inference settings due to their quadratic-time self-attention. Recently proposed subquadratic architectures, such as Mamba, have shown promise, but have been pretrained with substantially less computational resources than the strongest Transformer models. In this work, we present a method that is able to distill a pretrained Transformer architecture into alternative architectures such as state space models (SSMs). The key idea to our approach is that we can view both Transformers and SSMs as applying different forms of mixing matrices over the token sequences. We can thus progressively distill the Transformer architecture by matching different degrees of granularity in the SSM: first matching the mixing matrices themselves, then the hidden units at each block, and finally the end-to-end predictions. Our method, called MOHAWK, is able to distill a Mamba-2 variant based on the Phi-1.5 architecture (Phi-Mamba) using only 3B tokens. Despite using less than 1% of the training data typically used to train models from scratch, Phi-Mamba boasts substantially stronger performance compared to all past open-source non-Transformer models. MOHAWK allows models like SSMs to leverage computational resources invested in training Transformer-based architectures, highlighting a new avenue for building such models. | Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Models | [
"Aviv Bick",
"Kevin Li",
"Eric P. Xing",
"J Zico Kolter",
"Albert Gu"
] | NeurIPS.cc/2024/Conference | 2408.10189 | [
""
] | https://huggingface.co/papers/2408.10189 | 0 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=FIs87Iro9j | @inproceedings{
jawade2024proxyfusion,
title={ProxyFusion: Face Feature Aggregation Through Sparse Experts},
author={Bhavin Jawade and Alexander Stone and Deen Dayal Mohan and Xiao Wang and Srirangaraj Setlur and Venu Govindaraju},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FIs87Iro9j}
} | Face feature fusion is indispensable for robust face recognition, particularly in scenarios involving long-range, low-resolution media (unconstrained environments) where not all frames or features are equally informative. Existing methods often rely on large intermediate feature maps or face metadata information, making them incompatible with legacy biometric template databases that store pre-computed features. Additionally, real-time inference and generalization to large probe sets remain challenging.
To address these limitations, we introduce a linear-time O(N) proxy-based sparse expert selection and pooling approach for context-driven feature-set attention. Our approach is order-invariant on the feature set, generalizes to large sets, is compatible with legacy template stores, and utilizes significantly fewer parameters, making it suitable for real-time inference and edge use cases. Through qualitative experiments, we demonstrate that ProxyFusion learns discriminative information for importance weighting of face features without relying on intermediate features. Quantitative evaluations on challenging low-resolution face verification datasets such as IARPA BTS3.1 and DroneSURF show the superiority of ProxyFusion in the unconstrained long-range face recognition setting.
Our code and pretrained models are available at: https://github.com/bhavinjawade/ProxyFusion | ProxyFusion: Face Feature Aggregation Through Sparse Experts | [
"Bhavin Jawade",
"Alexander Stone",
"Deen Dayal Mohan",
"Xiao Wang",
"Srirangaraj Setlur",
"Venu Govindaraju"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FGTDe6EA0B | @inproceedings{
kleinberg2024language,
title={Language Generation in the Limit},
author={Jon Kleinberg and Sendhil Mullainathan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FGTDe6EA0B}
} | Although current large language models are complex, the most basic specifications of the underlying language generation problem itself are simple to state: given a finite set of training samples from an unknown language, produce valid new strings from the language that don't already appear in the training data. Here we ask what we can conclude about language generation using only this specification, without further assumptions. In particular, suppose that an adversary enumerates the strings of an unknown target language L that is known only to come from one of a possibly infinite list of candidates. A computational agent is trying to learn to generate from this language; we say that the agent generates from $L$ in the limit if after some finite point in the enumeration of $L$, the agent is able to produce new elements that come exclusively from $L$ and that have not yet been presented by the adversary. Our main result is that there is an agent that is able to generate in the limit for every countable list of candidate languages. This contrasts dramatically with negative results due to Gold and Angluin in a well-studied model of language learning where the goal is to identify an unknown language from samples; the difference between these results suggests that identifying a language is a fundamentally different problem than generating from it. | Language Generation in the Limit | [
"Jon Kleinberg",
"Sendhil Mullainathan"
] | NeurIPS.cc/2024/Conference | 2404.06757 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=FGJb0peY4R | @inproceedings{
jiang2024unveil,
title={Unveil Benign Overfitting for Transformer in Vision: Training Dynamics, Convergence, and Generalization},
author={Jiarui Jiang and Wei Huang and Miao Zhang and Taiji Suzuki and Liqiang Nie},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FGJb0peY4R}
} | Transformers have demonstrated great power in the recent development of large foundational models. In particular, the Vision Transformer (ViT) has brought revolutionary changes to the field of vision, achieving significant accomplishments on the experimental side. However, their theoretical capabilities, particularly in terms of generalization when trained to overfit training data, are still not fully understood. To address this gap, this work delves deeply into the \textit{benign overfitting} perspective of transformers in vision. To this end, we study the optimization of a Transformer composed of a self-attention layer with softmax followed by a fully connected layer under gradient descent on a certain data distribution model. By developing techniques that address the challenges posed by softmax and the interdependent nature of multiple weights in transformer optimization, we successfully characterized the training dynamics and achieved generalization in post-training. Our results establish a sharp condition that can distinguish between the small test error phase and the large test error regime, based on the signal-to-noise ratio in the data model. The theoretical results are further verified by experimental simulation. To the best of our knowledge, this is the first work to characterize benign overfitting for Transformers. | Unveil Benign Overfitting for Transformer in Vision: Training Dynamics, Convergence, and Generalization | [
"Jiarui Jiang",
"Wei Huang",
"Miao Zhang",
"Taiji Suzuki",
"Liqiang Nie"
] | NeurIPS.cc/2024/Conference | 2409.19345 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FFW6rPz48Z | @inproceedings{
ilbert2024analysing,
title={Analysing Multi-Task Regression via Random Matrix Theory with Application to Time Series Forecasting},
author={Romain Ilbert and Malik Tiomoko and Cosme Louart and Ambroise Odonnat and Vasilii Feofanov and Themis Palpanas and Ievgen Redko},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FFW6rPz48Z}
} | In this paper, we introduce a novel theoretical framework for multi-task regression, applying random matrix theory to provide precise performance estimations, under high-dimensional, non-Gaussian data distributions. We formulate a multi-task optimization problem as a regularization technique to enable single-task models to leverage multi-task learning information. We derive a closed-form solution for multi-task optimization in the context of linear models. Our analysis provides valuable insights by linking the multi-task learning performance to various model statistics such as raw data covariances, signal-generating hyperplanes, noise levels, as well as the size and number of datasets. We finally propose a consistent estimation of training and testing errors, thereby offering a robust foundation for hyperparameter optimization in multi-task regression scenarios. Experimental validations on both synthetic and real-world datasets in regression and multivariate time series forecasting demonstrate improvements on univariate models, incorporating our method into the training loss and thus leveraging multivariate information. | Analysing Multi-Task Regression via Random Matrix Theory with Application to Time Series Forecasting | [
"Romain Ilbert",
"Malik Tiomoko",
"Cosme Louart",
"Ambroise Odonnat",
"Vasilii Feofanov",
"Themis Palpanas",
"Ievgen Redko"
] | NeurIPS.cc/2024/Conference | 2406.10327 | [
""
] | https://huggingface.co/papers/2406.10327 | 1 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=FFJFGx78OK | @inproceedings{
he2024consistency,
title={Consistency Diffusion Bridge Models},
author={Guande He and Kaiwen Zheng and Jianfei Chen and Fan Bao and Jun Zhu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FFJFGx78OK}
} | Diffusion models (DMs) have become the dominant paradigm of generative modeling in a variety of domains by learning stochastic processes from noise to data. Recently, diffusion denoising bridge models (DDBMs), a new formulation of generative modeling that builds stochastic processes between fixed data endpoints based on a reference diffusion process, have achieved empirical success across tasks with coupled data distribution, such as image-to-image translation. However, DDBM's sampling process typically requires hundreds of network evaluations to achieve decent performance, which may impede their practical deployment due to high computational demands. In this work, inspired by the recent advance of consistency models in DMs, we tackle this problem by learning the consistency function of the probability-flow ordinary differential equation (PF-ODE) of DDBMs, which directly predicts the solution at a starting step given any point on the ODE trajectory. Based on a dedicated general-form ODE solver, we propose two paradigms: consistency bridge distillation and consistency bridge training, which is flexible to apply on DDBMs with broad design choices. Experimental results show that our proposed method could sample $4\times$ to $50\times$ faster than the base DDBM and produce better visual quality given the same step in various tasks with pixel resolution ranging from $64 \times 64$ to $256 \times 256$, as well as supporting downstream tasks such as semantic interpolation in the data space. | Consistency Diffusion Bridge Models | [
"Guande He",
"Kaiwen Zheng",
"Jianfei Chen",
"Fan Bao",
"Jun Zhu"
] | NeurIPS.cc/2024/Conference | 2410.22637 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FExX8pMrdT | @inproceedings{
wang2024autosurvey,
title={AutoSurvey: Large Language Models Can Automatically Write Surveys},
author={Yidong Wang and Qi Guo and Wenjin Yao and Hongbo Zhang and Xin Zhang and Zhen Wu and Meishan Zhang and Xinyu Dai and Min zhang and Qingsong Wen and Wei Ye and Shikun Zhang and Yue Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FExX8pMrdT}
} | This paper introduces AutoSurvey, a speedy and well-organized methodology for automating the creation of comprehensive literature surveys in rapidly evolving fields like artificial intelligence. Traditional survey paper creation faces challenges due to the vast volume and complexity of information, prompting the need for efficient survey methods. While large language models (LLMs) offer promise in automating this process, challenges such as context window limitations, parametric knowledge constraints, and the lack of evaluation benchmarks remain. AutoSurvey addresses these challenges through a systematic approach that involves initial retrieval and outline generation, subsection drafting by specialized LLMs, integration and refinement, and rigorous evaluation and iteration. Our contributions include a comprehensive solution to the survey problem, a reliable evaluation method, and experimental validation demonstrating AutoSurvey's effectiveness. | AutoSurvey: Large Language Models Can Automatically Write Surveys | [
"Yidong Wang",
"Qi Guo",
"Wenjin Yao",
"Hongbo Zhang",
"Xin Zhang",
"Zhen Wu",
"Meishan Zhang",
"Xinyu Dai",
"Min zhang",
"Qingsong Wen",
"Wei Ye",
"Shikun Zhang",
"Yue Zhang"
] | NeurIPS.cc/2024/Conference | 2406.10252 | [
"https://github.com/autosurveys/autosurvey"
] | https://huggingface.co/papers/2406.10252 | 2 | 1 | 0 | 13 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=FEmag0szWo | @inproceedings{
chen2024rethinking,
title={Rethinking the Capacity of Graph Neural Networks for Branching Strategy},
author={Ziang Chen and Jialin Liu and Xiaohan Chen and Xinshang Wang and Wotao Yin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FEmag0szWo}
} | Graph neural networks (GNNs) have been widely used to predict properties and heuristics of mixed-integer linear programs (MILPs) and hence accelerate MILP solvers. This paper investigates the capacity of GNNs to represent strong branching (SB), the most effective yet computationally expensive heuristic employed in the branch-and-bound algorithm. In the literature, message-passing GNN (MP-GNN), as the simplest GNN structure, is frequently used as a fast approximation of SB and we find that not all MILPs's SB can be represented with MP-GNN. We precisely define a class of ``MP-tractable" MILPs for which MP-GNNs can accurately approximate SB scores. Particularly, we establish a universal approximation theorem: for any data distribution over the MP-tractable class, there always exists an MP-GNN that can approximate the SB score with arbitrarily high accuracy and arbitrarily high probability, which lays a theoretical foundation of the existing works on imitating SB with MP-GNN. For MILPs without the MP-tractability, unfortunately, a similar result is impossible, which can be illustrated by two MILP instances with different SB scores that cannot be distinguished by any MP-GNN, regardless of the number of parameters. Recognizing this, we explore another GNN structure called the second-order folklore GNN (2-FGNN) that overcomes this limitation, and the aforementioned universal approximation theorem can be extended to the entire MILP space using 2-FGNN, regardless of the MP-tractability. A small-scale numerical experiment is conducted to directly validate our theoretical findings. | Rethinking the Capacity of Graph Neural Networks for Branching Strategy | [
"Ziang Chen",
"Jialin Liu",
"Xiaohan Chen",
"Xinshang Wang",
"Wotao Yin"
] | NeurIPS.cc/2024/Conference | 2402.07099 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FDfrPugkGU | @inproceedings{
xu2024dofit,
title={Do{FIT}: Domain-aware Federated Instruction Tuning with Alleviated Catastrophic Forgetting},
author={Binqian Xu and Xiangbo Shu and Haiyang Mei and Zechen Bai and Basura Fernando and Mike Zheng Shou and Jinhui Tang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FDfrPugkGU}
} | Federated Instruction Tuning (FIT) advances collaborative training on decentralized data, crucially enhancing the model's capability and safeguarding data privacy. However, existing FIT methods are dedicated to handling data heterogeneity across different clients (i.e., client-aware data heterogeneity), while ignoring the variation between data from different domains (i.e., domain-aware data heterogeneity). When scarce data needs supplementation from related fields, these methods lack the ability to handle domain heterogeneity in cross-domain training. This leads to domain-information catastrophic forgetting in collaborative training and therefore makes the model perform sub-optimally on the individual domain. To address this issue, we introduce DoFIT, a new Domain-aware FIT framework that alleviates catastrophic forgetting through two new designs. First, to reduce interference information from the other domain, DoFIT finely aggregates overlapping weights across domains on the inter-domain server side. Second, to retain more domain information, DoFIT initializes intra-domain weights by incorporating inter-domain information into a less-conflicted parameter space. Experimental results on diverse datasets consistently demonstrate that DoFIT excels in cross-domain collaborative training and exhibits significant advantages over conventional FIT methods in alleviating catastrophic forgetting. Code is available at [this link](https://github.com/1xbq1/DoFIT). | DoFIT: Domain-aware Federated Instruction Tuning with Alleviated Catastrophic Forgetting | [
"Binqian Xu",
"Xiangbo Shu",
"Haiyang Mei",
"Zechen Bai",
"Basura Fernando",
"Mike Zheng Shou",
"Jinhui Tang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FCsEvaMorw | @inproceedings{
samvelyan2024rainbow,
title={Rainbow Teaming: Open-Ended Generation of Diverse Adversarial Prompts},
author={Mikayel Samvelyan and Sharath Chandra Raparthy and Andrei Lupu and Eric Hambro and Aram H. Markosyan and Manish Bhatt and Yuning Mao and Minqi Jiang and Jack Parker-Holder and Jakob Nicolaus Foerster and Tim Rockt{\"a}schel and Roberta Raileanu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FCsEvaMorw}
} | As large language models (LLMs) become increasingly prevalent across many real-world applications, understanding and enhancing their robustness to adversarial attacks is of paramount importance. Existing methods for identifying adversarial prompts tend to focus on specific domains, lack diversity, or require extensive human annotations. To address these limitations, we present Rainbow Teaming, a novel black-box approach for producing a diverse collection of adversarial prompts. Rainbow Teaming casts adversarial prompt generation as a quality-diversity problem and uses open-ended search to generate prompts that are both effective and diverse. Focusing on the safety domain, we use Rainbow Teaming to target various state-of-the-art LLMs, including the Llama 2 and Llama 3 models. Our approach reveals hundreds of effective adversarial prompts, with an attack success rate exceeding 90% across all tested models. Furthermore, we demonstrate that prompts generated by Rainbow Teaming are highly transferable and that fine-tuning models with synthetic data generated by our method significantly enhances their safety without sacrificing general performance or helpfulness. We additionally explore the versatility of Rainbow Teaming by applying it to question answering and cybersecurity, showcasing its potential to drive robust open-ended self-improvement in a wide range of applications. | Rainbow Teaming: Open-Ended Generation of Diverse Adversarial Prompts | [
"Mikayel Samvelyan",
"Sharath Chandra Raparthy",
"Andrei Lupu",
"Eric Hambro",
"Aram H. Markosyan",
"Manish Bhatt",
"Yuning Mao",
"Minqi Jiang",
"Jack Parker-Holder",
"Jakob Nicolaus Foerster",
"Tim Rocktäschel",
"Roberta Raileanu"
] | NeurIPS.cc/2024/Conference | 2402.16822 | [
""
] | https://huggingface.co/papers/2402.16822 | 7 | 15 | 0 | 12 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=FBMsBdH0yz | @inproceedings{
yang2024masked,
title={Masked Hard-Attention Transformers Recognize Exactly the Star-Free Languages},
author={Andy Yang and David Chiang and Dana Angluin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FBMsBdH0yz}
} | The expressive power of transformers over inputs of unbounded size can be studied through their ability to recognize classes of formal languages. In this paper, we establish exact characterizations of transformers with hard attention (in which all attention is focused on exactly one position) and attention masking (in which each position only attends to positions on one side). With strict masking (each position cannot attend to itself) and without position embeddings, these transformers are expressively equivalent to linear temporal logic (LTL), which defines exactly the star-free languages. A key technique is the use of Boolean RASP as a convenient intermediate language between transformers and LTL. We then take numerous results known for LTL and apply them to transformers, showing how position embeddings, strict masking, and depth all increase expressive power. | Masked Hard-Attention Transformers Recognize Exactly the Star-Free Languages | [
"Andy Yang",
"David Chiang",
"Dana Angluin"
] | NeurIPS.cc/2024/Conference | 2310.13897 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FBLJIfW64D | @inproceedings{
defilippis2024dimensionfree,
title={Dimension-free deterministic equivalents and scaling laws for random feature regression},
author={Leonardo Defilippis and Bruno Loureiro and Theodor Misiakiewicz},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FBLJIfW64D}
} | In this work we investigate the generalization performance of random feature ridge regression (RFRR). Our main contribution is a general deterministic equivalent for the test error of RFRR. Specifically, under a certain concentration property, we show that the test error is well approximated by a closed-form expression that only depends on the feature map eigenvalues. Notably, our approximation guarantee is non-asymptotic, multiplicative, and independent of the feature map dimension---allowing for infinite-dimensional features. We expect this deterministic equivalent to hold broadly beyond our theoretical analysis, and we empirically validate its predictions on various real and synthetic datasets. As an application, we derive sharp excess error rates under standard power-law assumptions of the spectrum and target decay. In particular, we provide a tight result for the smallest number of features achieving optimal minimax error rate. | Dimension-free deterministic equivalents and scaling laws for random feature regression | [
"Leonardo Defilippis",
"Bruno Loureiro",
"Theodor Misiakiewicz"
] | NeurIPS.cc/2024/Conference | 2405.15699 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=FAuFpGeLmx | @inproceedings{
li2024segmenting,
title={Segmenting Watermarked Texts From Language Models},
author={Xingchi Li and Guanxun Li and Xianyang Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=FAuFpGeLmx}
} | Watermarking is a technique that involves embedding nearly unnoticeable statistical signals within generated content to help trace its source. This work focuses on a scenario where an untrusted third-party user sends prompts to a trusted language model (LLM) provider, who then generates a text from their LLM with a watermark. This setup makes it possible for a detector to later identify the source of the text if the user publishes it. The user can modify the generated text by substitutions, insertions, or deletions. Our objective is to develop a statistical method to detect if a published text is LLM-generated from the perspective of a detector. We further propose a methodology to segment the published text into watermarked and non-watermarked sub-strings. The proposed approach is built upon randomization tests and change point detection techniques. We demonstrate that our method ensures Type I and Type II error control and can accurately identify watermarked sub-strings by finding the corresponding change point locations. To validate our technique, we apply it to texts generated by several language models with prompts extracted from Google's C4 dataset and obtain encouraging numerical results. We release all code publicly at https://github.com/doccstat/llm-watermark-cpd. | Segmenting Watermarked Texts From Language Models | [
"Xingchi Li",
"Guanxun Li",
"Xianyang Zhang"
] | NeurIPS.cc/2024/Conference | 2410.20670 | [
"https://github.com/doccstat/llm-watermark-cpd"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=F9mNL6vR27 | @inproceedings{
hao2024newton,
title={Newton Informed Neural Operator for Computing Multiple Solutions of Nonlinear Partial Differential Equations},
author={Wenrui Hao and Xinliang Liu and Yahong Yang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=F9mNL6vR27}
} | Solving nonlinear partial differential equations (PDEs) with multiple solutions is essential in various fields, including physics, biology, and engineering. However, traditional numerical methods, such as finite element and finite difference methods, often face challenges when dealing with nonlinear solvers, particularly in the presence of multiple solutions. These methods can become computationally expensive, especially when relying on solvers like Newton's method, which may struggle with ill-posedness near bifurcation points.
In this paper, we propose a novel approach, the Newton Informed Neural Operator, which learns the Newton solver for nonlinear PDEs. Our method integrates traditional numerical techniques with the Newton nonlinear solver, efficiently learning the nonlinear mapping at each iteration. This approach allows us to compute multiple solutions in a single learning process while requiring fewer supervised data points than existing neural network methods. | Newton Informed Neural Operator for Computing Multiple Solutions of Nonlinear Partial Differential Equations | [
"Wenrui Hao",
"Xinliang Liu",
"Yahong Yang"
] | NeurIPS.cc/2024/Conference | 2405.14096 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=F9i1avQTla | @inproceedings{
chen2024samguided,
title={{SAM}-Guided Masked Token Prediction for 3D Scene Understanding},
author={Zhimin Chen and Liang Yang and Yingwei Li and Longlong Jing and Bing Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=F9i1avQTla}
} | Foundation models have significantly enhanced 2D task performance, and recent works like Bridge3D have successfully applied these models to improve 3D scene understanding through knowledge distillation, marking considerable advancements. Nonetheless, challenges such as the misalignment between 2D and 3D representations and the persistent long-tail distribution in 3D datasets still restrict the effectiveness of knowledge distillation from 2D to 3D using foundation models. To tackle these issues, we introduce a novel SAM-guided tokenization method that seamlessly aligns 3D transformer structures with region-level knowledge distillation, replacing the traditional KNN-based tokenization techniques. Additionally, we implement a group-balanced re-weighting strategy to effectively address the long-tail problem in knowledge distillation. Furthermore, inspired by the recent success of masked feature prediction, our framework incorporates a two-stage masked token prediction process in which the student model predicts both the global embeddings and token-wise local embeddings derived from the teacher models trained in the first stage. Our methodology has been validated across multiple datasets, including SUN RGB-D, ScanNet, and S3DIS, for tasks like 3D object detection and semantic segmentation. The results demonstrate significant improvements over current state-of-the-art self-supervised methods, establishing new benchmarks in this field. | SAM-Guided Masked Token Prediction for 3D Scene Understanding | [
"Zhimin Chen",
"Liang Yang",
"Yingwei Li",
"Longlong Jing",
"Bing Li"
] | NeurIPS.cc/2024/Conference | 2410.12158 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=F9NDzHQtOl | @inproceedings{
chen2024accelerating,
title={Accelerating Diffusion Models with Parallel Sampling: Inference at Sub-Linear Time Complexity},
author={Haoxuan Chen and Yinuo Ren and Lexing Ying and Grant M. Rotskoff},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=F9NDzHQtOl}
} | Diffusion models have become a leading method for generative modeling of both image and scientific data.
As these models are costly to train and \emph{evaluate}, reducing the inference cost for diffusion models remains a major goal.
Inspired by the recent empirical success in accelerating diffusion models via the parallel sampling technique~\cite{shih2024parallel}, we propose to divide the sampling process into $\mathcal{O}(1)$ blocks with parallelizable Picard iterations within each block. Rigorous theoretical analysis reveals that our algorithm achieves $\widetilde{\mathcal{O}}(\mathrm{poly} \log d)$ overall time complexity, marking \emph{the first implementation with provable sub-linear complexity w.r.t. the data dimension $d$}. Our analysis is based on a generalized version of Girsanov's theorem and is compatible with both the SDE and probability flow ODE implementations. Our results shed light on the potential of fast and efficient sampling of high-dimensional data on fast-evolving modern large-memory GPU clusters. | Accelerating Diffusion Models with Parallel Sampling: Inference at Sub-Linear Time Complexity | [
"Haoxuan Chen",
"Yinuo Ren",
"Lexing Ying",
"Grant M. Rotskoff"
] | NeurIPS.cc/2024/Conference | 2405.15986 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=F8wKoSFSaA | @inproceedings{
an2024robust,
title={Robust and Faster Zeroth-Order Minimax Optimization: Complexity and Applications},
author={Weixin An and Yuanyuan Liu and Fanhua Shang and Hongying Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=F8wKoSFSaA}
} | Many zeroth-order (ZO) optimization algorithms have been developed to solve nonconvex minimax problems in machine learning and computer vision areas. However, existing ZO minimax algorithms have high complexity and rely on some strict restrictive conditions for ZO estimations. To address these issues, we design a new unified ZO gradient descent extragradient ascent (ZO-GDEGA) algorithm, which reduces the overall complexity to $\mathcal{O}(d\epsilon^{-6})$ to find an $\epsilon$-stationary point of the function $\psi$ for nonconvex-concave (NC-C) problems, where $d$ is the variable dimension. To the best of our knowledge, ZO-GDEGA is the first ZO algorithm with complexity guarantees to solve stochastic NC-C problems. Moreover, ZO-GDEGA requires weaker conditions on the ZO estimations and achieves more robust theoretical results. As a by-product, ZO-GDEGA has advantages on the condition number for the NC-strongly concave case. Experimentally, ZO-GDEGA can generate more effective poisoning attack data with an average accuracy reduction of 5\%. The improved AUC performance also verifies the robustness of gradient estimations. | Robust and Faster Zeroth-Order Minimax Optimization: Complexity and Applications | [
"Weixin An",
"Yuanyuan Liu",
"Fanhua Shang",
"Hongying Liu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=F8aSOovlEP | @inproceedings{
chen2024mecd,
title={{MECD}: Unlocking Multi-Event Causal Discovery in Video Reasoning},
author={Tieyuan Chen and Huabin Liu and Tianyao He and Yihang Chen and Chaofan Gan and Xiao Ma and Cheng Zhong and Yang Zhang and Yingxue Wang and Hui Lin and Weiyao Lin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=F8aSOovlEP}
} | Video causal reasoning aims to achieve a high-level understanding of video content from a causal perspective. However, current video reasoning tasks are limited in scope, primarily executed in a question-answering paradigm and focusing on short videos containing only a single event and simple causal relationships, lacking comprehensive and structured causality analysis for videos with multiple events. To fill this gap, we introduce a new task and dataset, Multi-Event Causal Discovery (MECD). It aims to uncover the causal relationships between events distributed chronologically across long videos. Given visual segments and textual descriptions of events, MECD requires identifying the causal associations between these events to derive a comprehensive, structured event-level video causal diagram explaining why and how the final result event occurred. To address MECD, we devise a novel framework inspired by the Granger Causality method, using an efficient mask-based event prediction model to perform an Event Granger Test, which estimates causality by comparing the predicted result event when premise events are masked versus unmasked. Furthermore, we integrate causal inference techniques such as front-door adjustment and counterfactual inference to address challenges in MECD like causality confounding and illusory causality. Experiments validate the effectiveness of our framework in providing causal relationships in multi-event videos, outperforming GPT-4o and VideoLLaVA by 5.7% and 4.1%, respectively. | MECD: Unlocking Multi-Event Causal Discovery in Video Reasoning | [
"Tieyuan Chen",
"Huabin Liu",
"Tianyao He",
"Yihang Chen",
"Chaofan Gan",
"Xiao Ma",
"Cheng Zhong",
"Yang Zhang",
"Yingxue Wang",
"Hui Lin",
"Weiyao Lin"
] | NeurIPS.cc/2024/Conference | 2409.17647 | [
"https://github.com/tychen-SJTU/MECD-Benchmark"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=F8DWffLkYG | @inproceedings{
reddy2024designing,
title={Designing Cell-Type-Specific Promoter Sequences Using Conservative Model-Based Optimization},
author={Aniketh Janardhan Reddy and Xinyang Geng and Michael H Herschl and Sathvik Kolli and Aviral Kumar and Patrick D Hsu and Sergey Levine and Nilah M Ioannidis},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=F8DWffLkYG}
} | Gene therapies have the potential to treat disease by delivering therapeutic genetic cargo to disease-associated cells. One limitation to their widespread use is the lack of short regulatory sequences, or promoters, that differentially induce the expression of delivered genetic cargo in target cells, minimizing side effects in other cell types. Such cell-type-specific promoters are difficult to discover using existing methods, requiring either manual curation or access to large datasets of promoter-driven expression from both targeted and untargeted cells. Model-based optimization (MBO) has emerged as an effective method to design biological sequences in an automated manner, and has recently been used in promoter design methods. However, these methods have only been tested using large training datasets that are expensive to collect, and focus on designing promoters for markedly different cell types, overlooking the complexities associated with designing promoters for closely related cell types that share similar regulatory features. Therefore, we introduce a comprehensive framework for utilizing MBO to design promoters in a data-efficient manner, with an emphasis on discovering promoters for similar cell types. We use conservative objective models (COMs) for MBO and highlight practical considerations such as best practices for improving sequence diversity, getting estimates of model uncertainty, and choosing the optimal set of sequences for experimental validation. Using three leukemia cell lines (Jurkat, K562, and THP1), we show that our approach discovers many novel cell-type-specific promoters after experimentally validating the designed sequences. For K562 cells, in particular, we discover a promoter that has 75.85\% higher cell-type-specificity than the best promoter from the initial dataset used to train our models. Our code and data will be available at https://github.com/young-geng/promoter_design. | Designing Cell-Type-Specific Promoter Sequences Using Conservative Model-Based Optimization | [
"Aniketh Janardhan Reddy",
"Xinyang Geng",
"Michael H Herschl",
"Sathvik Kolli",
"Aviral Kumar",
"Patrick D Hsu",
"Sergey Levine",
"Nilah M Ioannidis"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=F7tGQ7b10q | @inproceedings{
gao2024honestllm,
title={Honest{LLM}: Toward an Honest and Helpful Large Language Model},
author={Chujie Gao and Siyuan Wu and Yue Huang and Dongping Chen and Qihui Zhang and Zhengyan Fu and Yao Wan and Lichao Sun and Xiangliang Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=F7tGQ7b10q}
} | Large Language Models (LLMs) have achieved remarkable success across various industries and applications, owing to their exceptional generative capabilities. Nevertheless, honesty and helpfulness, which ensure safe and useful real-world deployments, have been considered longstanding cornerstones in practice. In this paper, we first established comprehensive principles for honest LLMs and further created the HoneSet with 930 queries across six categories, which is designed to evaluate LLMs’ ability to maintain honesty. Then, we improved the honesty and helpfulness of LLMs in both training-free and fine-tuning settings. Specifically, we propose a training-free method named Curiosity-Driven Prompting, which enables LLMs to express their internal confusion and uncertainty about the given query and then optimize their responses. Moreover, we also propose a two-stage fine-tuning approach, inspired by curriculum learning, to enhance the honesty and helpfulness of LLMs. The method first teaches LLMs to distinguish between honest and dishonest, and then LLMs are trained to learn to respond more helpfully. Experimental results demonstrated that both proposed methods improve the helpfulness of LLMs while making them maintain honesty. Our research has paved the way for more reliable and trustworthy LLMs in real-world applications. | HonestLLM: Toward an Honest and Helpful Large Language Model | [
"Chujie Gao",
"Siyuan Wu",
"Yue Huang",
"Dongping Chen",
"Qihui Zhang",
"Zhengyan Fu",
"Yao Wan",
"Lichao Sun",
"Xiangliang Zhang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=F738WY1Xm4 | @inproceedings{
marion2024deep,
title={Deep linear networks for regression are implicitly regularized towards flat minima},
author={Pierre Marion and L{\'e}na{\"\i}c Chizat},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=F738WY1Xm4}
} | The largest eigenvalue of the Hessian, or sharpness, of neural networks is a key quantity to understand their optimization dynamics. In this paper, we study the sharpness of deep linear networks for univariate regression. Minimizers can have arbitrarily large sharpness, but not an arbitrarily small one. Indeed, we show a lower bound on the sharpness of minimizers, which grows linearly with depth. We then study the properties of the minimizer found by gradient flow, which is the limit of gradient descent with vanishing learning rate. We show an implicit regularization towards flat minima: the sharpness of the minimizer is no more than a constant times the lower bound. The constant depends on the condition number of the data covariance matrix, but not on width or depth. This result is proven both for a small-scale initialization and a residual initialization. Results of independent interest are shown in both cases. For small-scale initialization, we show that the learned weight matrices are approximately rank-one and that their singular vectors align. For residual initialization, convergence of the gradient flow for a Gaussian initialization of the residual network is proven. Numerical experiments illustrate our results and connect them to gradient descent with non-vanishing learning rate. | Deep linear networks for regression are implicitly regularized towards flat minima | [
"Pierre Marion",
"Lénaïc Chizat"
] | NeurIPS.cc/2024/Conference | 2405.13456 | [
"https://github.com/pierremarion23/implicit-reg-sharpness"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=F6L23TNlFW | @inproceedings{
lu2024predicting,
title={Predicting Label Distribution from Ternary Labels},
author={Yunan Lu and Xiuyi Jia},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=F6L23TNlFW}
} | Label distribution learning is a powerful learning paradigm to deal with label polysemy and has been widely applied in many practical tasks. A significant obstacle to the effective utilization of label distribution is the substantial expense of accurately quantifying label distributions. To tackle this challenge, label enhancement methods automatically infer label distributions from more easily accessible multi-label data based on binary annotations. However, the binary annotation of multi-label data requires experts to accurately assess whether each label can describe the instance, which may diminish annotation efficiency and heighten the risk of erroneous annotation since the relationship between the label and the instance is unclear in many practical scenarios. Therefore, we propose to predict label distribution from ternary labels, allowing experts to annotate labels in a three-way annotation scheme. They can annotate the label as "$0$" indicating "uncertain relevant" if it is difficult to definitively determine whether the label can describe the instance, in addition to the binary annotation of "$1$" indicating "definitely relevant" and "$-1$" indicating "definitely irrelevant". Both theoretical and methodological studies are conducted for the proposed learning paradigm. In the theoretical part, we conduct a quantitative comparison of approximation error between ternary and binary labels to elucidate the superiority of ternary labels over binary labels. In the methodological part, we propose a Categorical distribution with monotonicity and orderliness to model the mapping from label description degrees to ternary labels, which can serve as a loss function or as a probability distribution, allowing most existing label enhancement methods to be adapted to our task. Finally, we experimentally demonstrate the effectiveness of our proposal. | Predicting Label Distribution from Ternary Labels | [
"Yunan Lu",
"Xiuyi Jia"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Eyyt3ZmNV6 | @inproceedings{
guo2024zeromark,
title={ZeroMark: Towards Dataset Ownership Verification without Disclosing Watermark},
author={Junfeng Guo and Yiming Li and Ruibo Chen and Yihan Wu and Chenxi Liu and Heng Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Eyyt3ZmNV6}
} | High-quality public datasets significantly prompt the prosperity of deep neural networks (DNNs). Currently, dataset ownership verification (DOV), which consists of dataset watermarking and ownership verification, is the only feasible solution to protect their copyright by preventing unauthorized use. In this paper, we revisit existing DOV methods and find that they all mainly focused on the first stage by designing different types of dataset watermarks and directly exploiting watermarked samples as the verification samples for ownership verification. As such, their success relies on an underlying assumption that verification is a \emph{one-time} and \emph{privacy-preserving} process, which does not necessarily hold in practice. To alleviate this problem, we propose \emph{ZeroMark} to conduct ownership verification without disclosing dataset-specified watermarks. Our method is inspired by our empirical and theoretical findings of the intrinsic property of DNNs trained on the watermarked dataset. Specifically, ZeroMark first generates the closest boundary version of given benign samples and calculates their boundary gradients under the label-only black-box setting. After that, it examines whether the given suspicious method has been trained on the protected dataset by performing a hypothesis test, based on the cosine similarity measured on the boundary gradients and the watermark pattern. Extensive experiments on benchmark datasets verify the effectiveness of our ZeroMark and its resistance to potential adaptive attacks. The codes for reproducing our main experiments are publicly available at \href{https://github.com/JunfengGo/ZeroMark.git}{GitHub}. | ZeroMark: Towards Dataset Ownership Verification without Disclosing Watermark | [
"Junfeng Guo",
"Yiming Li",
"Ruibo Chen",
"Yihan Wu",
"Chenxi Liu",
"Heng Huang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ExeIyx6U0Z | @inproceedings{
amaduzzi2024llana,
title={{LL}a{NA}: Large Language and Ne{RF} Assistant},
author={Andrea Amaduzzi and Pierluigi Zama Ramirez and Giuseppe Lisanti and Samuele Salti and Luigi di Stefano},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ExeIyx6U0Z}
} | Multimodal Large Language Models (MLLMs) have demonstrated an excellent understanding of images and 3D data. However, both modalities have shortcomings in holistically capturing the appearance and geometry of objects. Meanwhile, Neural Radiance Fields (NeRFs), which encode information within the weights of a simple Multi-Layer Perceptron (MLP), have emerged as an increasingly widespread modality that simultaneously encodes the geometry and photorealistic appearance of objects. This paper investigates the feasibility and effectiveness of ingesting NeRF into MLLM. We create LLaNA, the first general-purpose NeRF-language
assistant capable of performing new tasks such as NeRF captioning and Q&A. Notably, our method directly processes the weights of the NeRF’s MLP to extract information about the represented objects without the need to render images or materialize 3D data structures. Moreover, we build a dataset of NeRFs with text annotations for various NeRF-language tasks with no human intervention.
Based on this dataset, we develop a benchmark to evaluate the NeRF understanding capability of our method. Results show that processing NeRF weights performs favourably against extracting 2D or 3D representations from NeRFs. | LLaNA: Large Language and NeRF Assistant | [
"Andrea Amaduzzi",
"Pierluigi Zama Ramirez",
"Giuseppe Lisanti",
"Samuele Salti",
"Luigi di Stefano"
] | NeurIPS.cc/2024/Conference | 2406.11840 | [
""
] | https://huggingface.co/papers/2406.11840 | 5 | 17 | 2 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=Ex3rPvEct8 | @inproceedings{
ospanov2024towards,
title={Towards a Scalable Reference-Free Evaluation of Generative Models},
author={Azim Ospanov and Jingwei Zhang and Mohammad Jalali and Xuenan Cao and Andrej Bogdanov and Farzan Farnia},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Ex3rPvEct8}
} | While standard evaluation scores for generative models are mostly reference-based, a reference-dependent assessment of generative models could be generally difficult due to the unavailability of applicable reference datasets. Recently, the reference-free entropy scores, VENDI and RKE, have been proposed to evaluate the diversity of generated data. However, estimating these scores from data leads to significant computational costs for large-scale generative models. In this work, we leverage the random Fourier features framework to reduce the metrics' complexity and propose the *Fourier-based Kernel Entropy Approximation (FKEA)* method. We utilize FKEA's approximated eigenspectrum of the kernel matrix to efficiently estimate the mentioned entropy scores. Furthermore, we show the application of FKEA's proxy eigenvectors to reveal the method's identified modes in evaluating the diversity of produced samples. We provide a stochastic implementation of the FKEA assessment algorithm with a complexity $O(n)$ linearly growing with sample size $n$. We extensively evaluate FKEA's numerical performance in application to standard image, text, and video datasets. Our empirical results indicate the method's scalability and interpretability applied to large-scale generative models. The codebase is available at [https://github.com/aziksh-ospanov/FKEA](https://github.com/aziksh-ospanov/FKEA). | Towards a Scalable Reference-Free Evaluation of Generative Models | [
"Azim Ospanov",
"Jingwei Zhang",
"Mohammad Jalali",
"Xuenan Cao",
"Andrej Bogdanov",
"Farzan Farnia"
] | NeurIPS.cc/2024/Conference | 2407.02961 | [
"https://github.com/aziksh-ospanov/fkea"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
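The FKEA abstract above approximates a kernel matrix's eigenspectrum with random Fourier features so that entropy-based diversity scores can be computed without an $O(n^3)$ eigendecomposition. The sketch below illustrates that general idea for a Gaussian kernel: the eigenvalues of the RFF covariance stand in for the normalized kernel eigenvalues, and the score is their exponentiated Shannon entropy. The bandwidth, feature count, and score definition here follow the general VENDI/RKE literature and are assumptions, not necessarily the released FKEA code.

```python
import numpy as np

def rff_features(X, n_features=512, sigma=1.0, seed=0):
    """Random Fourier features approximating the Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def fourier_entropy_score(X, n_features=512, sigma=1.0):
    """Approximate exp(Shannon entropy) of the normalized kernel spectrum.
    The nonzero eigenvalues of K/n equal those of the feature covariance,
    so the cost scales with n_features instead of n."""
    Phi = rff_features(X, n_features, sigma)          # (n, D)
    C = Phi.T @ Phi / X.shape[0]                      # (D, D) proxy for K/n
    lam = np.clip(np.linalg.eigvalsh(C), 0.0, None)
    p = lam / lam.sum()
    p = p[p > 0]
    return float(np.exp(-np.sum(p * np.log(p))))

# Toy check: two well-separated clusters should score roughly two modes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (500, 8)), rng.normal(5, 0.1, (500, 8))])
print(fourier_entropy_score(X, sigma=1.0))
```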
null | https://openreview.net/forum?id=EwWpAPzcay | @inproceedings{
hyung2024effective,
title={Effective Rank Analysis and Regularization for Enhanced 3D Gaussian Splatting},
author={Junha Hyung and Susung Hong and Sungwon Hwang and Jaeseong Lee and Jaegul Choo and Jin-Hwa Kim},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=EwWpAPzcay}
} | 3D reconstruction from multi-view images is one of the fundamental challenges in computer vision and graphics.
Recently, 3D Gaussian Splatting (3DGS) has emerged as a promising technique capable of real-time rendering with high-quality 3D reconstruction. This method utilizes 3D Gaussian representation and tile-based splatting techniques, bypassing the expensive neural field querying. Despite its potential, 3DGS encounters challenges, including needle-like artifacts, suboptimal geometries, and inaccurate normals, due to the Gaussians converging into anisotropic Gaussians with one dominant variance.
We propose using effective rank analysis to examine the shape statistics of 3D Gaussian primitives, and identify that the Gaussians indeed converge to needle-like shapes with an effective rank of 1. To address this, we introduce effective rank as a regularizer, which constrains the structure of the Gaussians. Our new regularization method enhances normal and geometry reconstruction while reducing needle-like artifacts. The approach can be integrated as an add-on module to other 3DGS variants, improving their quality without compromising visual fidelity. | Effective Rank Analysis and Regularization for Enhanced 3D Gaussian Splatting | [
"Junha Hyung",
"Susung Hong",
"Sungwon Hwang",
"Jaeseong Lee",
"Jaegul Choo",
"Jin-Hwa Kim"
] | NeurIPS.cc/2024/Conference | 2406.11672 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
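The abstract above diagnoses needle-like Gaussians via their effective rank. As a rough illustration only (not the paper's regularizer), the sketch below computes the standard entropy-based effective rank from a Gaussian's three scale parameters; the penalty form and target value are assumptions.

```python
import numpy as np

def effective_rank(scales, eps=1e-12):
    """Entropy-based effective rank of a diagonal covariance with per-axis
    standard deviations `scales`: erank = exp(H(p)), where p is the
    normalized eigenvalue distribution (here scales**2)."""
    lam = np.asarray(scales, dtype=float) ** 2
    p = lam / (lam.sum() + eps)
    entropy = -np.sum(p * np.log(p + eps))
    return float(np.exp(entropy))

def erank_penalty(scales, target=2.0):
    """Hypothetical penalty that pushes the effective rank above `target`,
    discouraging needle-like (erank close to 1) Gaussians."""
    return max(0.0, target - effective_rank(scales))

print(effective_rank(np.array([1.0, 1.0, 1.0])))    # ~3: isotropic Gaussian
print(effective_rank(np.array([1.0, 0.01, 0.01])))  # ~1: needle-like Gaussian
print(erank_penalty(np.array([1.0, 0.01, 0.01])))
```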
null | https://openreview.net/forum?id=Eu80DGuOcs | @inproceedings{
shen2024understanding,
title={Understanding and Improving Training-free Loss-based Diffusion Guidance},
author={Yifei Shen and XINYANG JIANG and Yifan Yang and Yezhen Wang and Dongqi Han and Dongsheng Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Eu80DGuOcs}
} | Adding additional guidance to pretrained diffusion models has become an increasingly popular research area, with extensive applications in computer vision, reinforcement learning, and AI for science. Recently, several studies have proposed training-free loss-based guidance by using off-the-shelf networks pretrained on clean images. This approach enables zero-shot conditional generation for universal control formats, which appears to offer a free lunch in diffusion guidance. In this paper, we aim to develop a deeper understanding of training-free guidance, as well as overcome its limitations. We offer a theoretical analysis that supports training-free guidance from the perspective of optimization, distinguishing it from classifier-based (or classifier-free) guidance. To elucidate their drawbacks, we theoretically demonstrate that training-free guidance is more susceptible to misaligned gradients and exhibits slower convergence rates compared to classifier guidance. We then introduce a collection of techniques designed to overcome the limitations, accompanied by theoretical rationale and empirical evidence. Our experiments in image and motion generation confirm the efficacy of these techniques. | Understanding and Improving Training-free Loss-based Diffusion Guidance | [
"Yifei Shen",
"XINYANG JIANG",
"Yifan Yang",
"Yezhen Wang",
"Dongqi Han",
"Dongsheng Li"
] | NeurIPS.cc/2024/Conference | 2403.12404 | [
"https://github.com/bigknight/understanding-training-free-diffusion-guidance"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Eu0nYM4BPo | @inproceedings{
bedin2024leveraging,
title={Leveraging an {ECG} Beat Diffusion Model for Morphological Reconstruction from Indirect Signals},
author={Lisa Bedin and Gabriel Cardoso and Josselin Duchateau and Remi Dubois and Eric Moulines},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Eu0nYM4BPo}
} | Electrocardiogram (ECG) signals provide essential information about the heart's condition and are widely used for diagnosing cardiovascular diseases. The morphology of a single heartbeat over the available leads is a primary biosignal for monitoring cardiac conditions. However, analyzing heartbeat morphology can be challenging due to noise and artifacts, missing leads, and a lack of annotated data.
Generative models, such as denoising diffusion generative models (DDMs), have proven successful in generating complex data. We introduce $\texttt{BeatDiff}$, a lightweight DDM tailored to the morphology of multi-lead heartbeats.
We then show that many important ECG downstream tasks can be formulated as conditional generation problems in a Bayesian inverse problem framework using $\texttt{BeatDiff}$ as a prior. We propose $\texttt{EM-BeatDiff}$, an Expectation-Maximization algorithm, to solve these conditional generation tasks without fine-tuning. We illustrate our results with several tasks, such as removal of ECG noise and artifacts (baseline wander, electrode motion), reconstruction of a 12-lead ECG from a single lead (useful for ECG reconstruction in smartwatch experiments), and unsupervised explainable anomaly detection. Numerical experiments show that the combination of $\texttt{BeatDiff}$ and $\texttt{EM-BeatDiff}$ outperforms SOTA methods for the problems considered in this work. | Leveraging an ECG Beat Diffusion Model for Morphological Reconstruction from Indirect Signals | [
"Lisa Bedin",
"Gabriel Cardoso",
"Josselin Duchateau",
"Remi Dubois",
"Eric Moulines"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Es2Ey2tGmM | @inproceedings{
khalafi2024constrained,
title={Constrained Diffusion Models via Dual Training},
author={Shervin Khalafi and Dongsheng Ding and Alejandro Ribeiro},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Es2Ey2tGmM}
} | Diffusion models have attained prominence for their ability to synthesize a probability distribution for a given dataset via a diffusion process, enabling the generation of new data points with high fidelity. However, diffusion processes are prone to generating samples that reflect biases in a training dataset. To address this issue, we develop constrained diffusion models by imposing diffusion constraints based on desired distributions that are informed by requirements. Specifically, we cast the training of diffusion models under requirements as a constrained distribution optimization problem that aims to reduce the distribution difference between original and generated data while obeying constraints on the distribution of generated data. We show that our constrained diffusion models generate new data from a mixture data distribution that achieves the optimal trade-off between the objective and the constraints. To train constrained diffusion models, we develop a dual training algorithm and characterize the optimality of the trained constrained diffusion model. We empirically demonstrate the effectiveness of our constrained models in two constrained generation tasks: (i) we consider a dataset with one or more underrepresented classes where we train the model with constraints to ensure fair sampling from all classes during inference; (ii) we fine-tune a pre-trained diffusion model to sample from a new dataset while avoiding overfitting. | Constrained Diffusion Models via Dual Training | [
"Shervin Khalafi",
"Dongsheng Ding",
"Alejandro Ribeiro"
] | NeurIPS.cc/2024/Conference | 2408.15094 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=EpusiLXfNd | @inproceedings{
jiao2024d,
title={3D Structure Prediction of Atomic Systems with Flow-based Direct Preference Optimization},
author={Rui Jiao and Xiangzhe Kong and Wenbing Huang and Yang Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=EpusiLXfNd}
} | Predicting high-fidelity 3D structures of atomic systems is a fundamental yet challenging problem in scientific domains. While recent work demonstrates the advantage of generative models in this realm, the exploration of different probability paths is still insufficient, and hallucinations persistently occur during sampling. To address these pitfalls, we introduce FlowDPO, a novel framework that explores various probability paths with flow matching models and further suppresses hallucinations using Direct Preference Optimization (DPO) for structure generation. Our approach begins with a pre-trained flow matching model to generate multiple candidate structures for each training sample. These structures are then evaluated and ranked based on their distance to the ground truth, resulting in an automatic preference dataset. Using this dataset, we apply DPO to optimize the original model, improving its performance in generating structures closely aligned with the desired reference distribution. As confirmed by our theoretical analysis, such a paradigm and objective function are compatible with arbitrary Gaussian paths, exhibiting favorable universality. Extensive experimental results on antibodies and crystals demonstrate substantial benefits of our FlowDPO, highlighting its potential to advance the field of 3D structure prediction with generative models. | 3D Structure Prediction of Atomic Systems with Flow-based Direct Preference Optimization | [
"Rui Jiao",
"Xiangzhe Kong",
"Wenbing Huang",
"Yang Liu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Eok6HbcSRI | @inproceedings{
choromanski2024fast,
title={Fast Tree-Field Integrators: From Low Displacement Rank to Topological Transformers},
author={Krzysztof Marcin Choromanski and Arijit Sehanobish and Somnath Basu Roy Chowdhury and Han Lin and Kumar Avinava Dubey and Tamas Sarlos and Snigdha Chaturvedi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Eok6HbcSRI}
} | We present a new class of fast polylog-linear algorithms based on the theory of structured matrices (in particular *low displacement rank*) for integrating tensor fields defined on weighted trees. Several applications of the resulting *fast tree-field integrators* (FTFIs) are presented, including: (a) approximation of graph metrics with tree metrics, (b) graph classification, (c) modeling on meshes, and finally (d) *Topological Transformers* (TTs) (Choromanski et al., 2022) for images. For Topological Transformers, we propose new relative position encoding (RPE) masking mechanisms with as few as **three** extra learnable parameters per Transformer layer, leading to **1.0-1.5\%+** accuracy gains. Importantly, most of FTFIs are **exact** methods, thus numerically equivalent to their brute-force counterparts. When applied to graphs with thousands of nodes, those exact algorithms provide **5.7-13x** speedups. We also provide an extensive theoretical analysis of our methods. | Fast Tree-Field Integrators: From Low Displacement Rank to Topological Transformers | [
"Krzysztof Marcin Choromanski",
"Arijit Sehanobish",
"Somnath Basu Roy Chowdhury",
"Han Lin",
"Kumar Avinava Dubey",
"Tamas Sarlos",
"Snigdha Chaturvedi"
] | NeurIPS.cc/2024/Conference | 2406.15881 | [
""
] | https://huggingface.co/papers/2406.15881 | 0 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=Ejg4d4FVrs | @inproceedings{
nielsen2024elliptical,
title={Elliptical Attention},
author={Stefan Nielsen and Laziz Abdullaev and Rachel Teo and Tan Minh Nguyen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Ejg4d4FVrs}
} | Pairwise dot-product self-attention is key to the success of transformers that achieve state-of-the-art performance across a variety of applications in language and vision. This dot-product self-attention computes attention weights among the input tokens using Euclidean distance, which makes the model prone to representation collapse and vulnerable to contaminated samples. In this paper, we propose using a Mahalanobis distance metric for computing the attention weights to stretch the underlying feature space in directions of high contextual relevance. In particular, we define a hyper-ellipsoidal neighborhood around each query to increase the attention weights of the tokens lying in the contextually important directions. We term this novel class of attention Elliptical Attention. Our Elliptical Attention provides two benefits: 1) reducing representation collapse and 2) enhancing the model's robustness as the Elliptical Attention pays more attention to contextually relevant information, rather than focusing on some small subset of informative features. We empirically demonstrate the advantages of Elliptical Attention over the baseline dot-product attention and state-of-the-art attention methods on various practical tasks, including object classification, image
segmentation, and language modeling across different data modalities. | Elliptical Attention | [
"Stefan Nielsen",
"Laziz Abdullaev",
"Rachel Teo",
"Tan Minh Nguyen"
] | NeurIPS.cc/2024/Conference | 2406.13770 | [
"https://github.com/stefvk/elliptical-attention"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
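The Elliptical Attention abstract above replaces the Euclidean geometry of dot-product attention with a Mahalanobis metric that stretches the neighborhood around each query. The sketch below is a generic distance-based attention layer with a fixed diagonal metric, meant only to illustrate the hyper-ellipsoidal neighborhood idea; the metric values, shapes, and toy data are assumptions, and the paper's estimator of the metric is not reproduced.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mahalanobis_attention(Q, K, V, m):
    """Distance-based self-attention with diagonal Mahalanobis metric M = diag(m).

    Q, K, V: (n, d) arrays; m: (d,) positive weights. A large m_j stretches each
    query's hyper-ellipsoidal neighborhood along direction j, so tokens lying in
    that direction receive more attention. With uniform m, the scores differ from
    scaled dot-product attention only by per-key norm terms."""
    diff = Q[:, None, :] - K[None, :, :]          # (n, n, d) pairwise q_i - k_j
    d2 = np.einsum("ijd,d->ij", diff ** 2, m)     # squared Mahalanobis distances
    A = softmax(-d2, axis=-1)                     # attention weights, rows sum to 1
    return A @ V, A

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))                       # toy token embeddings
out, attn = mahalanobis_attention(X, X, X, m=np.array([2.0, 1.0, 0.25, 0.25]))
print(attn.round(2))
```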
null | https://openreview.net/forum?id=EjKNSErSMJ | @inproceedings{
chen2024lastiterate,
title={Last-Iterate Convergence for Generalized Frank-Wolfe in Monotone Variational Inequalities},
author={Zaiwei Chen and Eric Mazumdar},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=EjKNSErSMJ}
} | We study the convergence behavior of a generalized Frank-Wolfe algorithm in constrained (stochastic) monotone variational inequality (MVI) problems. In recent years, there have been numerous efforts to design algorithms for solving constrained MVI problems due to their connections with optimization, machine learning, and equilibrium computation in games. Most work in this domain has focused on extensions of simultaneous gradient play, with particular emphasis on understanding the convergence properties of extragradient and optimistic gradient methods. In contrast, we examine the performance of an algorithm from another well-known class of optimization algorithms: Frank-Wolfe. We show that a generalized variant of this algorithm achieves a fast $\mathcal{O}(T^{-1/2})$ last-iterate convergence rate in constrained MVI problems. By drawing connections between our generalized Frank-Wolfe algorithm and the well-known smoothed fictitious play (FP) from game theory, we also derive a finite-sample convergence rate for smoothed FP in zero-sum matrix games. Furthermore, we demonstrate that a stochastic variant of the generalized Frank-Wolfe algorithm for MVI problems also converges in a last-iterate sense, albeit at a slower $\mathcal{O}(T^{-1/6})$ convergence rate. | Last-Iterate Convergence for Generalized Frank-Wolfe in Monotone Variational Inequalities | [
"Zaiwei Chen",
"Eric Mazumdar"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
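The abstract above studies a generalized Frank-Wolfe method for constrained monotone VIs and links it to smoothed fictitious play in zero-sum games. Below is a small sketch of a plain (unsmoothed) Frank-Wolfe-style iteration for the VI induced by a zero-sum matrix game on the simplex; the step-size schedule, the absence of smoothing, and the toy game are assumptions, and the paper's generalized variant and its last-iterate rates are not reproduced here.

```python
import numpy as np

def lmo_simplex(g):
    """Linear minimization oracle over the probability simplex:
    argmin_{s in simplex} <g, s> is the vertex indicating argmin_i g_i."""
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

def frank_wolfe_zero_sum(A, T=5000):
    """Frank-Wolfe-style updates for the monotone VI of the zero-sum game
    min_x max_y x^T A y over simplices, with operator F(x, y) = (A y, -A^T x).
    With step size 1/(t+1) the iterates are running averages of best responses,
    i.e. classical fictitious play."""
    m, n = A.shape
    x, y = np.ones(m) / m, np.ones(n) / n
    for t in range(1, T + 1):
        gamma = 1.0 / (t + 1)
        sx = lmo_simplex(A @ y)        # pure best response for the minimizer
        sy = lmo_simplex(-A.T @ x)     # pure best response for the maximizer
        x = (1 - gamma) * x + gamma * sx
        y = (1 - gamma) * y + gamma * sy
    return x, y

# Rock-paper-scissors: the unique equilibrium is uniform play.
A = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])
x, y = frank_wolfe_zero_sum(A)
print(x.round(3), y.round(3))
```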
null | https://openreview.net/forum?id=EiIelh2t7S | @inproceedings{
xu2024base,
title={Base of Ro{PE} Bounds Context Length},
author={Mingyu Xu and Xin Men and Bingning Wang and Qingyu Zhang and Hongyu Lin and Xianpei Han and weipeng chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=EiIelh2t7S}
} | Position embedding is a core component of current Large Language Models (LLMs). Rotary position embedding (RoPE), a technique that encodes the position information with a rotation matrix, has been the de facto choice for position embedding in many LLMs, such as the Llama series. RoPE has been further utilized to extend long context capability, which is roughly based on adjusting the \textit{base} parameter of RoPE to mitigate out-of-distribution (OOD) problems in position embedding. However, in this paper, we find that LLMs may obtain a superficial long-context ability based on the OOD theory. We revisit the role of RoPE in LLMs and propose a novel property of long-term decay, we derive that the \textit{base of RoPE bounds context length}: there is an absolute lower bound for the base value to obtain certain context length capability. Our work reveals the relationship between context length and RoPE base both theoretically and empirically, which may shed light on future long context training. | Base of RoPE Bounds Context Length | [
"Mingyu Xu",
"Xin Men",
"Bingning Wang",
"Qingyu Zhang",
"Hongyu Lin",
"Xianpei Han",
"weipeng chen"
] | NeurIPS.cc/2024/Conference | 2405.14591 | [
""
] | https://huggingface.co/papers/2405.14591 | 0 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
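Since the abstract above is about how RoPE's base parameter bounds usable context length, a small reference implementation of rotary position embedding with an explicit `base` argument may help. This is the standard RoPE formulation ($\theta_i = \text{base}^{-2i/d}$); the long-term-decay analysis and the lower bound derived in the paper are not reproduced, and the printed comparison of two base values is only illustrative.

```python
import numpy as np

def rope_rotate(x, positions, base=10000.0):
    """Apply rotary position embedding to x of shape (seq_len, d), d even.

    Pairs of dimensions (2i, 2i+1) are rotated by angle pos * theta_i with
    theta_i = base ** (-2i / d); a larger base gives slower-varying angles."""
    seq_len, d = x.shape
    assert d % 2 == 0, "head dimension must be even"
    i = np.arange(d // 2)
    theta = base ** (-2.0 * i / d)                 # (d/2,)
    ang = positions[:, None] * theta[None, :]      # (seq_len, d/2)
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# After rotation, the query-key dot product depends only on the relative
# position (m - n); the base controls how quickly it decays with distance.
rng = np.random.default_rng(0)
q = rng.normal(size=(1, 64))
k = rng.normal(size=(1, 64))
for base in (1e4, 5e5):
    qm = rope_rotate(q, np.array([100]), base=base)
    kn = rope_rotate(k, np.array([40]), base=base)
    print(base, float(qm @ kn.T))
```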
null | https://openreview.net/forum?id=Ehsd856Ltb | @inproceedings{
celikkanat2024revisiting,
title={Revisiting K-mer Profile for Effective and Scalable Genome Representation Learning},
author={Abdulkadir Celikkanat and Andres R Masegosa and Thomas Dyhre Nielsen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=Ehsd856Ltb}
} | Obtaining effective representations of DNA sequences is crucial for genome analysis. Metagenomic binning, for instance, relies on genome representations to cluster complex mixtures of DNA fragments from biological samples with the aim of determining their microbial compositions. In this paper, we revisit k-mer-based representations of genomes and provide a theoretical analysis of their use in representation learning. Based on the analysis, we propose a lightweight and scalable model for performing metagenomic binning at the genome read level, relying only on the k-mer compositions of the DNA fragments. We compare the model to recent genome foundation models and demonstrate that while the models are comparable in performance, the proposed model is significantly more effective in terms of scalability, a crucial aspect for performing metagenomic binning of real-world data sets. | Revisiting K-mer Profile for Effective and Scalable Genome Representation Learning | [
"Abdulkadir Celikkanat",
"Andres R Masegosa",
"Thomas Dyhre Nielsen"
] | NeurIPS.cc/2024/Conference | 2411.02125 | [
"https://github.com/abdcelikkanat/revisitingkmers"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
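The abstract above builds genome read representations purely from k-mer compositions. A minimal sketch of such a representation (a normalized k-mer count vector per read) follows; it ignores canonical k-mers, reverse complements, and the paper's downstream binning model, and the example read is made up.

```python
from itertools import product
import numpy as np

def kmer_profile(read, k=4):
    """Normalized k-mer frequency vector of a DNA read (length 4**k)."""
    alphabet = "ACGT"
    index = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    counts = np.zeros(len(index))
    read = read.upper()
    for i in range(len(read) - k + 1):
        kmer = read[i:i + k]
        if kmer in index:                 # skips k-mers containing N, etc.
            counts[index[kmer]] += 1
    total = counts.sum()
    return counts / total if total > 0 else counts

profile = kmer_profile("ACGTACGTTTGACGTNACGT", k=4)
print(profile.shape, profile.sum())       # (256,) 1.0
```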
null | https://openreview.net/forum?id=EfpZNpkrm2 | @inproceedings{
chen2024quanta,
title={Quan{TA}: Efficient High-Rank Fine-Tuning of {LLM}s with Quantum-Informed Tensor Adaptation},
author={Zhuo Chen and Rumen Dangovski and Charlotte Loh and Owen M Dugan and Di Luo and Marin Soljacic},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=EfpZNpkrm2}
} | We propose **Quan**tum-informed **T**ensor **A**daptation (**QuanTA**), a novel, easy-to-implement, fine-tuning method with no inference overhead for large-scale pre-trained language models. By leveraging quantum-inspired methods derived from quantum circuit structures, QuanTA enables efficient *high-rank* fine-tuning, surpassing the limitations of Low-Rank Adaptation (LoRA)---low-rank approximation may fail for complicated downstream tasks. Our approach is theoretically supported by the universality theorem and the rank representation theorem to achieve efficient high-rank adaptations. Experiments demonstrate that QuanTA significantly enhances commonsense reasoning, arithmetic reasoning, and scalability compared to traditional methods. Furthermore, QuanTA shows superior performance with fewer trainable parameters compared to other approaches and can be designed to integrate with existing fine-tuning algorithms for further improvement, providing a scalable and efficient solution for fine-tuning large language models and advancing state-of-the-art in natural language processing. | QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation | [
"Zhuo Chen",
"Rumen Dangovski",
"Charlotte Loh",
"Owen M Dugan",
"Di Luo",
"Marin Soljacic"
] | NeurIPS.cc/2024/Conference | 2406.00132 | [
"https://github.com/quanta-fine-tuning/quanta"
] | https://huggingface.co/papers/2406.00132 | 0 | 6 | 1 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=EehS4erXWB | @inproceedings{
wang2024sebiequivariant,
title={{SE}(3)-bi-equivariant Transformers for Point Cloud Assembly},
author={Ziming Wang and Rebecka J{\"o}rnsten},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=EehS4erXWB}
} | Given a pair of point clouds, the goal of assembly is to recover a rigid transformation that aligns one point cloud to the other. This task is challenging because the point clouds may be non-overlapped, and they may have arbitrary initial positions. To address these difficulties, we propose a method, called $SE(3)$-bi-equivariant transformer (BITR), based on the $SE(3)$-bi-equivariance prior of the task: it guarantees that when the inputs are rigidly perturbed, the output will transform accordingly. Due to its equivariance property, BITR can not only handle non-overlapped point clouds, but also guarantee robustness against initial positions. Specifically, BITR first extracts features of the inputs using a novel $SE(3) \times SE(3)$-transformer, and then projects the learned feature to the group $SE(3)$ as the output. Moreover, we theoretically show that swap and scale equivariances can be incorporated into BITR, thus it further guarantees stable performance under scaling and swapping of the inputs. We experimentally show the effectiveness of BITR in practical tasks. | SE(3)-bi-equivariant Transformers for Point Cloud Assembly | [
"Ziming Wang",
"Rebecka Jörnsten"
] | NeurIPS.cc/2024/Conference | 2407.09167 | [
"https://github.com/wzm2256/bitr"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=EeXcOYf3Lg | @inproceedings{
sun2024shmt,
title={{SHMT}: Self-supervised Hierarchical Makeup Transfer via Latent Diffusion Models},
author={Zhaoyang Sun and Shengwu Xiong and Yaxiong Chen and Fei Du and Weihua Chen and Fan Wang and Yi Rong},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=EeXcOYf3Lg}
} | This paper studies the challenging task of makeup transfer, which aims to apply diverse makeup styles precisely and naturally to a given facial image. Due to the absence of paired data, current methods typically synthesize sub-optimal pseudo ground truths to guide the model training, resulting in low makeup fidelity. Additionally, different makeup styles generally have varying effects on a person's face, but existing methods struggle to deal with this diversity. To address these issues, we propose a novel Self-supervised Hierarchical Makeup Transfer (SHMT) method via latent diffusion models. Following a "decoupling-and-reconstruction" paradigm, SHMT works in a self-supervised manner, freeing itself from the misguidance of imprecise pseudo-paired data. Furthermore, to accommodate a variety of makeup styles, hierarchical texture details are decomposed via a Laplacian pyramid and selectively introduced to the content representation. Finally, we design a novel Iterative Dual Alignment (IDA) module that dynamically adjusts the injection condition of the diffusion model, allowing the alignment errors caused by the domain gap between content and makeup representations to be corrected. Extensive quantitative and qualitative analyses demonstrate the effectiveness of our method. Our code is available at https://github.com/Snowfallingplum/SHMT. | SHMT: Self-supervised Hierarchical Makeup Transfer via Latent Diffusion Models | [
"Zhaoyang Sun",
"Shengwu Xiong",
"Yaxiong Chen",
"Fei Du",
"Weihua Chen",
"Fan Wang",
"Yi Rong"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=EdXW71LvKE | @inproceedings{
kim2024crtfusion,
title={{CRT}-Fusion: Camera, Radar, Temporal Fusion Using Motion Information for 3D Object Detection},
author={Jisong Kim and Minjae Seong and Jun Won Choi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=EdXW71LvKE}
} | Accurate and robust 3D object detection is a critical component in autonomous vehicles and robotics. While recent radar-camera fusion methods have made significant progress by fusing information in the bird's-eye view (BEV) representation, they often struggle to effectively capture the motion of dynamic objects, leading to limited performance in real-world scenarios. In this paper, we introduce CRT-Fusion, a novel framework that integrates temporal information into radar-camera fusion to address this challenge. Our approach comprises three key modules: Multi-View Fusion (MVF), Motion Feature Estimator (MFE), and Motion Guided Temporal Fusion (MGTF). The MVF module fuses radar and image features within both the camera view and bird's-eye view, thereby generating a more precise unified BEV representation. The MFE module conducts two simultaneous tasks: estimation of pixel-wise velocity information and BEV segmentation. Based on the velocity and the occupancy score map obtained from the MFE module, the MGTF module aligns and fuses feature maps across multiple timestamps in a recurrent manner. By considering the motion of dynamic objects, CRT-Fusion can produce robust BEV feature maps, thereby improving detection accuracy and robustness. Extensive evaluations on the challenging nuScenes dataset demonstrate that CRT-Fusion achieves state-of-the-art performance for radar-camera-based 3D object detection. Our approach outperforms the previous best method in terms of NDS by +1.7%, while also surpassing the leading approach in mAP by +1.4%. These significant improvements in both metrics showcase the effectiveness of our proposed fusion strategy in enhancing the reliability and accuracy of 3D object detection. | CRT-Fusion: Camera, Radar, Temporal Fusion Using Motion Information for 3D Object Detection | [
"Jisong Kim",
"Minjae Seong",
"Jun Won Choi"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=EdG59dnOzN | @inproceedings{
choudhury2024remove,
title={Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad},
author={Sayantan Choudhury and Nazarii Tupitsa and Nicolas Loizou and Samuel Horv{\'a}th and Martin Tak{\'a}{\v{c}} and Eduard Gorbunov},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=EdG59dnOzN}
} | Adaptive methods are extremely popular in machine learning as they make learning rate tuning less expensive. This paper introduces a novel optimization algorithm named KATE, which presents a scale-invariant adaptation of the well-known AdaGrad algorithm. We prove the scale-invariance of KATE for the case of Generalized Linear Models. Moreover, for general smooth non-convex problems, we establish a convergence rate of $O((\log T)/\sqrt{T})$ for KATE, matching the best-known ones for AdaGrad and Adam. We also compare KATE to other state-of-the-art adaptive algorithms Adam and AdaGrad in numerical experiments with different problems, including complex machine learning tasks like image classification and text classification on real data. The results indicate that KATE consistently outperforms AdaGrad and matches/surpasses the performance of Adam in all considered scenarios. | Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad | [
"Sayantan Choudhury",
"Nazarii Tupitsa",
"Nicolas Loizou",
"Samuel Horváth",
"Martin Takáč",
"Eduard Gorbunov"
] | NeurIPS.cc/2024/Conference | 2403.02648 | [
"https://github.com/nazya/kate"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
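The KATE abstract above adapts AdaGrad by removing its square-root scaling to gain scale invariance. Since the exact KATE update is not spelled out in the abstract, the sketch below only shows the plain AdaGrad baseline being adapted, applied to a toy logistic-regression problem; treat it as background, not as the paper's algorithm, and note that the learning rate and synthetic data are assumptions.

```python
import numpy as np

def adagrad(grad_fn, x0, lr=0.5, eps=1e-8, steps=500):
    """Plain AdaGrad: per-coordinate steps scaled by the square root of the
    accumulated squared gradients; this square root is exactly the part
    that the abstract above proposes to revisit."""
    x = x0.copy()
    g_acc = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x)
        g_acc += g ** 2
        x -= lr * g / (np.sqrt(g_acc) + eps)
    return x

# Toy problem: logistic regression on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

def grad(w):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

w_hat = adagrad(grad, np.zeros(5))
print("train accuracy:", float(((X @ w_hat > 0) == (y > 0.5)).mean()))
```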
null | https://openreview.net/forum?id=EbSSBvwUWw | @inproceedings{
kou2024matching,
title={Matching the Statistical Query Lower Bound for \$k\$-Sparse Parity Problems with Sign Stochastic Gradient Descent},
author={Yiwen Kou and Zixiang Chen and Quanquan Gu and Sham M. Kakade},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=EbSSBvwUWw}
} | The $k$-sparse parity problem is a classical problem in computational complexity and algorithmic theory, serving as a key benchmark for understanding computational classes. In this paper, we solve the $k$-sparse parity problem with sign stochastic gradient descent, a variant of stochastic gradient descent (SGD) on two-layer fully-connected neural networks. We demonstrate that this approach can efficiently solve the $k$-sparse parity problem on a $d$-dimensional hypercube ($k\le O(\sqrt{d})$) with a sample complexity of $\tilde{O}(d^{k-1})$ using $2^{\Theta(k)}$ neurons, matching the established $\Omega(d^{k})$ lower bounds of Statistical Query (SQ) models. Our theoretical analysis begins by constructing a good neural network capable of correctly solving the $k$-parity problem. We then demonstrate how a trained neural network with sign SGD can effectively approximate this good network, solving the $k$-parity problem with small statistical errors. To the best of our knowledge, this is the first result that matches the SQ lower bound for solving $k$-sparse parity problem using gradient-based methods. | Matching the Statistical Query Lower Bound for k-Sparse Parity Problems with Sign Stochastic Gradient Descent | [
"Yiwen Kou",
"Zixiang Chen",
"Quanquan Gu",
"Sham M. Kakade"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
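The abstract above solves the k-sparse parity problem with sign SGD on a two-layer network. The toy sketch below trains such a network with sign-of-gradient updates on a small instance (d=10, k=2); the width, learning rate, sample sizes, and initialization are illustrative assumptions, not the regime analyzed in the paper, so the printed accuracy is only indicative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, width, n_train = 10, 2, 256, 10000

def sample(n):
    """k-sparse parity data on the hypercube {-1, +1}^d: the label is the
    product of the first k coordinates."""
    X = rng.choice([-1.0, 1.0], size=(n, d))
    return X, np.prod(X[:, :k], axis=1)

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(4000)

# Two-layer ReLU network f(x) = a . relu(W x + b), trained with sign SGD:
# every parameter moves by -lr * sign(gradient of the logistic loss).
W = rng.normal(scale=1.0 / np.sqrt(d), size=(width, d))
b = rng.normal(scale=0.1, size=width)
a = rng.choice([-1.0, 1.0], size=width) / width

def forward(X):
    H = np.maximum(X @ W.T + b, 0.0)
    return H, H @ a

lr, batch = 2e-3, 256
for step in range(5000):
    idx = rng.integers(0, n_train, size=batch)
    Xb, yb = X_tr[idx], y_tr[idx]
    H, f = forward(Xb)
    df = -yb / (1.0 + np.exp(yb * f))     # d(logistic loss)/d(f)
    grad_a = H.T @ df / batch
    dH = np.outer(df, a) * (H > 0)
    grad_W = dH.T @ Xb / batch
    grad_b = dH.sum(axis=0) / batch
    a -= lr * np.sign(grad_a)
    W -= lr * np.sign(grad_W)
    b -= lr * np.sign(grad_b)

_, f_te = forward(X_te)
print("test accuracy:", float((np.sign(f_te) == y_te).mean()))
```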
null | https://openreview.net/forum?id=EZpKBC1ohS | @inproceedings{
fang2024kernel,
title={Kernel {PCA} for Out-of-Distribution Detection},
author={Kun Fang and Qinghua Tao and Kexin Lv and Mingzhen He and Xiaolin Huang and JIE YANG},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=EZpKBC1ohS}
} | Out-of-Distribution (OoD) detection is vital for the reliability of Deep Neural Networks (DNNs).
Existing works have shown the insufficiency of Principal Component Analysis (PCA) straightforwardly applied to the features of DNNs for detecting OoD data from In-Distribution (InD) data.
The failure of PCA suggests that the OoD and InD network features are not well separated in a linear subspace, and that this can instead be resolved through proper non-linear mappings.
In this work, we leverage the framework of Kernel PCA (KPCA) for OoD detection, and seek suitable non-linear kernels that advocate the separability between InD and OoD data in the subspace spanned by the principal components.
Besides, explicit feature mappings induced from the devoted task-specific kernels are adopted so that the KPCA reconstruction error for new test samples can be efficiently obtained with large-scale data.
Extensive theoretical and empirical results on multiple OoD data sets and network structures verify the superiority of our KPCA detector in efficiency and efficacy with state-of-the-art detection performance. | Kernel PCA for Out-of-Distribution Detection | [
"Kun Fang",
"Qinghua Tao",
"Kexin Lv",
"Mingzhen He",
"Xiaolin Huang",
"JIE YANG"
] | NeurIPS.cc/2024/Conference | 2402.02949 | [
"https://github.com/fanghenshaometeor/ood-kernel-pca"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
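The Kernel PCA abstract above scores OoD inputs by their reconstruction error in a kernel principal subspace, using explicit feature maps for scalability. The sketch below realizes that general recipe with generic random Fourier features plus ordinary PCA as the explicit map; the kernel, bandwidth, component count, and Gaussian "penultimate features" are assumptions, and the paper's task-specific kernels are not reproduced.

```python
import numpy as np

class KPCAOoDScorer:
    """OoD score = reconstruction error of an explicit kernel feature map
    projected onto the top InD principal components."""

    def __init__(self, n_rff=1024, n_components=128, sigma=10.0, seed=0):
        self.n_rff, self.q, self.sigma = n_rff, n_components, sigma
        self.rng = np.random.default_rng(seed)

    def _phi(self, X):
        return np.sqrt(2.0 / self.n_rff) * np.cos(X @ self.W + self.b)

    def fit(self, feats_ind):
        d = feats_ind.shape[1]
        self.W = self.rng.normal(scale=1.0 / self.sigma, size=(d, self.n_rff))
        self.b = self.rng.uniform(0, 2 * np.pi, size=self.n_rff)
        Z = self._phi(feats_ind)
        self.mu = Z.mean(axis=0)
        _, _, Vt = np.linalg.svd(Z - self.mu, full_matrices=False)
        self.V = Vt[: self.q].T                    # top principal directions
        return self

    def score(self, feats):
        Z = self._phi(feats) - self.mu
        recon = (Z @ self.V) @ self.V.T
        return np.linalg.norm(Z - recon, axis=1)   # larger = more OoD-like

# Toy usage with Gaussian stand-ins for InD features and shifted OoD features.
rng = np.random.default_rng(1)
ind, ood = rng.normal(0, 1, (2000, 64)), rng.normal(3, 1, (500, 64))
scorer = KPCAOoDScorer().fit(ind)
print(scorer.score(ind[:50]).mean(), scorer.score(ood[:50]).mean())
```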