Column | Type
---|---
bibtex_url | null
proceedings | string (length 42)
bibtext | string (197-792 chars)
abstract | string (303-3.45k chars)
title | string (10-159 chars)
authors | list of 1-28 names (may contain nulls, ⌀)
id | string (44 distinct values)
type | string (16 distinct values)
arxiv_id | string (0-10 chars)
GitHub | list of 1 URL
paper_page | string (444 distinct values)
n_linked_authors | int64 (-1 to 9)
upvotes | int64 (-1 to 42)
num_comments | int64 (-1 to 13)
n_authors | int64 (-1 to 92)
paper_page_exists_pre_conf | int64 (0 or 1)
Models | list of 0-100 items
Datasets | list of 0-11 items
Spaces | list of 0-100 items
null |
https://openreview.net/forum?id=ePkLqJh5kw
|
@inproceedings{
zhou2023combating,
title={Combating Bilateral Edge Noise for Robust Link Prediction},
author={Zhanke Zhou and Jiangchao Yao and Jiaxu Liu and Xiawei Guo and quanming yao and LI He and Liang Wang and Bo Zheng and Bo Han},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ePkLqJh5kw}
}
|
Although link prediction on graphs has achieved great success with the development of graph neural networks (GNNs), the potential robustness under the edge noise is still less investigated. To close this gap, we first conduct an empirical study to disclose that the edge noise bilaterally perturbs both input topology and target label, yielding severe performance degradation and representation collapse. To address this dilemma, we propose an information-theory-guided principle, Robust Graph Information Bottleneck (RGIB), to extract reliable supervision signals and avoid representation collapse. Different from the basic information bottleneck, RGIB further decouples and balances the mutual dependence among graph topology, target labels, and representation, building new learning objectives for robust representation against the bilateral noise. Two instantiations, RGIB-SSL and RGIB-REP, are explored to leverage the merits of different methodologies, i.e., self-supervised learning and data reparameterization, for implicit and explicit data denoising, respectively. Extensive experiments on six datasets and three GNNs with diverse noisy scenarios verify the effectiveness of our RGIB instantiations. The code is publicly available at: https://github.com/tmlr-group/RGIB.
|
Combating Bilateral Edge Noise for Robust Link Prediction
|
[
"Zhanke Zhou",
"Jiangchao Yao",
"Jiaxu Liu",
"Xiawei Guo",
"quanming yao",
"LI He",
"Liang Wang",
"Bo Zheng",
"Bo Han"
] |
Conference
|
poster
|
2311.01196
|
[
"https://github.com/tmlr-group/rgib"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
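A note on the RGIB entry above: the method generalizes the information bottleneck principle to graphs with bilateral edge noise. As a quick reminder of the baseline it departs from, the LaTeX sketch below states a basic graph information bottleneck objective; the symbols (representation $H$, labels $Y$, noisy observed graph $\tilde{G}$, trade-off weight $\beta$) are our own shorthand, not notation taken from the paper.

```latex
% Basic (graph) information bottleneck -- an illustrative sketch only.
% Learn a representation H that stays predictive of the labels Y while
% compressing its dependence on the (possibly noisy) observed graph \tilde{G}.
\min_{H} \; -\, I(H; Y) \;+\; \beta \, I\bigl(H; \tilde{G}\bigr)
```

Per the abstract, RGIB departs from this single trade-off by decoupling and rebalancing the dependencies among topology, labels, and representation.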
null |
https://openreview.net/forum?id=eP6cDDwBNC
|
@inproceedings{
seedat2023triage,
title={{TRIAGE}: Characterizing and auditing training data for improved regression},
author={Nabeel Seedat and Jonathan Crabb{\'e} and Zhaozhi Qian and Mihaela van der Schaar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eP6cDDwBNC}
}
|
Data quality is crucial for robust machine learning algorithms, with the recent interest in data-centric AI emphasizing the importance of training data characterization. However, current data characterization methods are largely focused on classification settings, with regression settings largely understudied. To address this, we introduce TRIAGE, a novel data characterization framework tailored to regression tasks and compatible with a broad class of regressors. TRIAGE utilizes conformal predictive distributions to provide a model-agnostic scoring method, the TRIAGE score. We operationalize the score to analyze individual samples' training dynamics and characterize samples as under-, over-, or well-estimated by the model. We show that TRIAGE's characterization is consistent and highlight its utility to improve performance via data sculpting/filtering, in multiple regression settings. Additionally, beyond sample level, we show TRIAGE enables new approaches to dataset selection and feature acquisition. Overall, TRIAGE highlights the value unlocked by data characterization in real-world regression applications.
|
TRIAGE: Characterizing and auditing training data for improved regression
|
[
"Nabeel Seedat",
"Jonathan Crabbé",
"Zhaozhi Qian",
"Mihaela van der Schaar"
] |
Conference
|
poster
|
2310.18970
|
[
"https://github.com/vanderschaarlab/triage"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=eNhW9UnlGG
|
@inproceedings{
zhang2023contextual,
title={Contextual Gaussian Process Bandits with Neural Networks},
author={Haoting Zhang and Jinghai He and Rhonda Righter and Zuo-Jun Shen and Zeyu Zheng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eNhW9UnlGG}
}
|
Contextual decision-making problems have witnessed extensive applications in various fields such as online content recommendation, personalized healthcare, and autonomous vehicles, where a core practical challenge is to select a suitable surrogate model for capturing unknown complicated reward functions. It is often the case that both high approximation accuracy and explicit uncertainty quantification are desired. In this work, we propose a neural network-accompanied Gaussian process (NN-AGP) model, which leverages neural networks to approximate the unknown and potentially complicated reward function regarding the contextual variable, and maintains a Gaussian process surrogate model with respect to the decision variable. Our model is shown to outperform existing approaches by offering better approximation accuracy thanks to the use of neural networks and possessing explicit uncertainty quantification from the Gaussian process. We also analyze the maximum information gain of the NN-AGP model and prove regret bounds for the corresponding algorithms. Moreover, we conduct experiments on both synthetic and practical problems, illustrating the effectiveness of our approach.
|
Contextual Gaussian Process Bandits with Neural Networks
|
[
"Haoting Zhang",
"Jinghai He",
"Rhonda Righter",
"Zuo-Jun Shen",
"Zeyu Zheng"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=eMR57voMz1
|
@inproceedings{
cho2023diversify,
title={Diversify \& Conquer: Outcome-directed Curriculum {RL} via Out-of-Distribution Disagreement},
author={Daesol Cho and Seungjae Lee and H. Jin Kim},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eMR57voMz1}
}
|
Reinforcement learning (RL) often faces the challenges of uninformed search problems where the agent should explore without access to the domain knowledge such as characteristics of the environment or external rewards. To tackle these challenges, this work proposes a new approach for curriculum RL called $\textbf{D}$iversify for $\textbf{D}$isagreement \& $\textbf{C}$onquer ($\textbf{D2C}$). Unlike previous curriculum learning methods, D2C requires only a few examples of desired outcomes and works in any environment, regardless of its geometry or the distribution of the desired outcome examples. The proposed method performs diversification of the goal-conditional classifiers to identify similarities between visited and desired outcome states and ensures that the classifiers disagree on states from out-of-distribution, which enables quantifying the unexplored region and designing an arbitrary goal-conditioned intrinsic reward signal in a simple and intuitive way. The proposed method then employs bipartite matching to define a curriculum learning objective that produces a sequence of well-adjusted intermediate goals, which enable the agent to automatically explore and conquer the unexplored region. We present experimental results demonstrating that D2C outperforms prior curriculum RL methods in both quantitative and qualitative aspects, even with the arbitrarily distributed desired outcome examples.
|
Diversify & Conquer: Outcome-directed Curriculum RL via Out-of-Distribution Disagreement
|
[
"Daesol Cho",
"Seungjae Lee",
"H. Jin Kim"
] |
Conference
|
poster
|
2310.19261
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=eLH2NFOO1B
|
@inproceedings{
klein2023equivariant,
title={Equivariant flow matching},
author={Leon Klein and Andreas Kr{\"a}mer and Frank Noe},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eLH2NFOO1B}
}
|
Normalizing flows are a class of deep generative models that are especially interesting for modeling probability distributions in physics, where the exact likelihood of flows allows reweighting to known target energy functions and computing unbiased observables. For instance, Boltzmann generators tackle the long-standing sampling problem in statistical physics by training flows to produce equilibrium samples of many-body systems such as small molecules and proteins. To build effective models for such systems, it is crucial to incorporate the symmetries of the target energy into the model, which can be achieved by equivariant continuous normalizing flows (CNFs). However, CNFs can be computationally expensive to train and generate samples from, which has hampered their scalability and practical application.
In this paper, we introduce equivariant flow matching, a new training objective for equivariant CNFs that is based on the recently proposed optimal transport flow matching. Equivariant flow matching exploits the physical symmetries of the target energy for efficient, simulation-free training of equivariant CNFs.
We demonstrate the effectiveness of flow matching on rotation and permutation invariant many-particle systems and a small molecule, alanine dipeptide, where for the first time we obtain a Boltzmann generator with significant sampling efficiency without relying on tailored internal coordinate featurization. Our results show that the equivariant flow matching objective yields flows with shorter integration paths, improved sampling efficiency, and higher scalability compared to existing methods.
|
Equivariant flow matching
|
[
"Leon Klein",
"Andreas Krämer",
"Frank Noe"
] |
Conference
|
poster
|
2306.15030
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
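The equivariant flow matching entry above builds on optimal-transport flow matching for continuous normalizing flows. For orientation only, here is a minimal PyTorch sketch of the plain (non-equivariant) conditional flow matching objective; the toy network, the linear interpolation path, and all hyperparameters are our own illustrative assumptions and do not reproduce the authors' equivariant training setup.

```python
# Minimal sketch of (non-equivariant) conditional flow matching, assuming a
# toy velocity network and a linear interpolation path; not the paper's code.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Toy velocity field v_theta(x, t) for dim-dimensional samples."""
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def flow_matching_loss(model, x1):
    """Linear path x_t = (1 - t) x0 + t x1; regression target is x1 - x0."""
    x0 = torch.randn_like(x1)           # sample from the Gaussian prior
    t = torch.rand(x1.shape[0], 1)      # uniform time in [0, 1]
    xt = (1 - t) * x0 + t * x1
    target = x1 - x0
    return ((model(xt, t) - target) ** 2).mean()

# usage: one optimization step on a batch standing in for equilibrium samples
model = VelocityNet(dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x1 = torch.randn(64, 2)
loss = flow_matching_loss(model, x1)
loss.backward()
opt.step()
```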
null |
https://openreview.net/forum?id=eKFrXWb0sT
|
@inproceedings{
he2023frequencyenhanced,
title={Frequency-Enhanced Data Augmentation for Vision-and-Language Navigation},
author={Keji He and Chenyang Si and Zhihe Lu and Yan Huang and Liang Wang and Xinchao Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eKFrXWb0sT}
}
|
Vision-and-Language Navigation (VLN) is a challenging task that requires an agent to navigate through complex environments based on natural language instructions. In contrast to conventional approaches, which primarily focus on the spatial domain exploration, we propose a paradigm shift toward the Fourier domain. This alternative perspective aims to enhance visual-textual matching, ultimately improving the agent's ability to understand and execute navigation tasks based on the given instructions. In this study, we first explore the significance of high-frequency information in VLN and provide evidence that it is instrumental in bolstering visual-textual matching processes. Building upon this insight, we further propose a sophisticated and versatile Frequency-enhanced Data Augmentation (FDA) technique to improve the VLN model's capability of capturing critical high-frequency information. Specifically, this approach requires the agent to navigate in environments where only a subset of high-frequency visual information corresponds with the provided textual instructions, ultimately fostering the agent's ability to selectively discern and capture pertinent high-frequency features according to the given instructions. Promising results on R2R, RxR, CVDN and REVERIE demonstrate that our FDA can be readily integrated with existing VLN approaches, improving performance without adding extra parameters, and keeping models simple and efficient. The code is available at https://github.com/hekj/FDA.
|
Frequency-Enhanced Data Augmentation for Vision-and-Language Navigation
|
[
"Keji He",
"Chenyang Si",
"Zhihe Lu",
"Yan Huang",
"Liang Wang",
"Xinchao Wang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=eJZ5vJEaaa
|
@inproceedings{
mao2023what,
title={What Planning Problems Can A Relational Neural Network Solve?},
author={Jiayuan Mao and Tom{\'a}s Lozano-P{\'e}rez and Joshua B. Tenenbaum and Leslie Pack Kaelbling},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eJZ5vJEaaa}
}
|
Goal-conditioned policies are generally understood to be "feed-forward" circuits, in the form of neural networks that map from the current state and the goal specification to the next action to take. However, under what circumstances such a policy can be learned and how efficient the policy will be are not well understood. In this paper, we present a circuit complexity analysis for relational neural networks (such as graph neural networks and transformers) representing policies for planning problems, by drawing connections with serialized goal regression search (S-GRS). We show that there are three general classes of planning problems, in terms of the growth of circuit width and depth as a function of the number of objects and planning horizon, providing constructive proofs. We also illustrate the utility of this analysis for designing neural networks for policy learning.
|
What Planning Problems Can A Relational Neural Network Solve?
|
[
"Jiayuan Mao",
"Tomás Lozano-Pérez",
"Joshua B. Tenenbaum",
"Leslie Pack Kaelbling"
] |
Conference
|
spotlight
|
2312.03682
|
[
"https://github.com/concepts-ai/goal-regression-width"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=eIFZtkshgH
|
@inproceedings{
yuan2023adpt,
title={{AD}-{PT}: Autonomous Driving Pre-Training with Large-scale Point Cloud Dataset},
author={Jiakang Yuan and Bo Zhang and Xiangchao Yan and Botian Shi and Tao Chen and Yikang LI and Yu Qiao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eIFZtkshgH}
}
|
It is a long-term vision for the Autonomous Driving (AD) community that perception models can learn from a large-scale point cloud dataset to obtain unified representations that achieve promising results on different tasks or benchmarks. Previous works mainly focus on the self-supervised pre-training pipeline, meaning that they perform pre-training and fine-tuning on the same benchmark, which makes it difficult to attain performance scalability and cross-dataset applicability for the pre-training checkpoint. In this paper, for the first time, we are committed to building a large-scale pre-training point-cloud dataset with diverse data distribution, and meanwhile learning generalizable representations from such a diverse pre-training dataset. We formulate the point-cloud pre-training task as a semi-supervised problem, which leverages few-shot labeled and massive unlabeled point-cloud data to generate unified backbone representations that can be directly applied to many baseline models and benchmarks, decoupling the AD-related pre-training process from the downstream fine-tuning task. During backbone pre-training, by enhancing the scene- and instance-level distribution diversity and exploiting the backbone's ability to learn from unknown instances, we achieve significant performance gains on a series of downstream perception benchmarks including Waymo, nuScenes, and KITTI, under different baseline models like PV-RCNN++, SECOND, and CenterPoint.
|
AD-PT: Autonomous Driving Pre-Training with Large-scale Point Cloud Dataset
|
[
"Jiakang Yuan",
"Bo Zhang",
"Xiangchao Yan",
"Botian Shi",
"Tao Chen",
"Yikang LI",
"Yu Qiao"
] |
Conference
|
poster
|
2306.00612
|
[
"https://github.com/pjlab-adg/3dtrans"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=eGoE9CVRPc
|
@inproceedings{
du2023rgmil,
title={{RGMIL}: Guide Your Multiple-Instance Learning Model with Regressor},
author={Zhaolong Du and Shasha Mao and Yimeng Zhang and Shuiping Gou and Licheng Jiao and Lin Xiong},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eGoE9CVRPc}
}
|
In video analysis, an important challenge is insufficient annotated data due to the rare occurrence of critical patterns, and in some applications we need to provide discriminative frame-level representations with limited annotation. Multiple Instance Learning (MIL) is suitable for this scenario. However, many MIL models pay attention to analyzing the relationships between instance representations and aggregating them, while neglecting the critical information from the MIL problem itself, which makes it difficult to achieve ideal instance-level performance compared with a supervised model.
To address this issue, we propose the $\textbf{\textit{Regressor-Guided MIL network} (RGMIL)}$, which effectively produces discriminative instance-level representations in a general multi-classification scenario. In the proposed method, we make full use of the $\textit{regressor}$ through our newly introduced $\textit{aggregator}$, $\textbf{\textit{Regressor-Guided Pooling} (RGP)}$. RGP focuses on simulating the correct inference process of humans while facing similar problems without introducing new parameters, and the MIL problem can be accurately described through the critical information from the $\textit{regressor}$ in our method.
In experiments, RGP shows dominance on more than 20 MIL benchmark datasets, with the average bag-level classification accuracy close to 1.
We also perform a series of comprehensive experiments on the MMNIST dataset. Experimental results illustrate that our $\textit{aggregator}$ outperforms existing methods under different challenging circumstances. Instance-level predictions are even possible under the guidance of the RGP information table in a long sequence. RGMIL also presents instance-level performance comparable to state-of-the-art supervised models in complicated applications. Statistical results support the assumption that a MIL model can compete with a supervised model at the instance level, as long as a structure that accurately describes the MIL problem is provided. The code is available at $\url{https://github.com/LMBDA-design/RGMIL}$.
|
RGMIL: Guide Your Multiple-Instance Learning Model with Regressor
|
[
"Zhaolong Du",
"Shasha Mao",
"Yimeng Zhang",
"Shuiping Gou",
"Licheng Jiao",
"Lin Xiong"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=eE5L1RkxW0
|
@inproceedings{
lin2023revisiting,
title={Revisiting Visual Model Robustness: A Frequency Long-Tailed Distribution View},
author={Zhiyu Lin and Yifei Gao and Yunfan Yang and Jitao Sang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eE5L1RkxW0}
}
|
A widely discussed hypothesis regarding the cause of visual models' lack of robustness is that they can exploit human-imperceptible high-frequency components (HFC) in images, which in turn leads to model vulnerabilities, such as adversarial examples. However, (1) inconsistent findings regarding the validation of this hypothesis reflect a limited understanding of HFC, and (2) solutions inspired by the hypothesis tend to involve a robustness-accuracy trade-off and lean towards suppressing the model's learning of HFC. In this paper, inspired by the long-tailed characteristic observed in the frequency spectrum, we first formally define HFC from a long-tailed perspective and then revisit the relationship between HFC and model robustness. In the frequency long-tailed scenario, experimental results on common datasets and various network structures consistently indicate that models under standard training exhibit high sensitivity to HFC. We investigate the reason for this sensitivity, which is reflected in the model's under-fitting behavior on HFC. Furthermore, the cause of the model's under-fitting behavior is attributed to the limited information content in HFC. Based on these findings, we propose a Balance Spectrum Sampling (BaSS) strategy, which effectively counteracts the long-tailed effect and enhances the model's learning on HFC. Extensive experimental results demonstrate that our method achieves a substantially better robustness-accuracy trade-off when combined with existing defense methods, while also indicating the potential of encouraging HFC learning for improving model performance.
|
Revisiting Visual Model Robustness: A Frequency Long-Tailed Distribution View
|
[
"Zhiyu Lin",
"Yifei Gao",
"Yunfan Yang",
"Jitao Sang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=eDDZh8C4W4
|
@inproceedings{
hacohen2023how,
title={How to Select Which Active Learning Strategy is Best Suited for Your Specific Problem and Budget},
author={Guy Hacohen and Daphna Weinshall},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eDDZh8C4W4}
}
|
In the domain of Active Learning (AL), a learner actively selects which unlabeled examples to seek labels from an oracle, while operating within predefined budget constraints. Importantly, it has been recently shown that distinct query strategies are better suited for different conditions and budgetary constraints. In practice, the determination of the most appropriate AL strategy for a given situation remains an open problem. To tackle this challenge, we propose a practical derivative-based method that dynamically identifies the best strategy for a given budget. Intuitive motivation for our approach is provided by the theoretical analysis of a simplified scenario. We then introduce a method to dynamically select an AL strategy, which takes into account the unique characteristics of the problem and the available budget. Empirical results showcase the effectiveness of our approach across diverse budgets and computer vision tasks.
|
How to Select Which Active Learning Strategy is Best Suited for Your Specific Problem and Budget
|
[
"Guy Hacohen",
"Daphna Weinshall"
] |
Conference
|
poster
|
2306.03543
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=eD534mPhAg
|
@inproceedings{
fang2023evaluating,
title={Evaluating Post-hoc Explanations for Graph Neural Networks via Robustness Analysis},
author={Junfeng Fang and Wei Liu and Yuan Gao and Zemin Liu and An Zhang and Xiang Wang and Xiangnan He},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eD534mPhAg}
}
|
This work studies the evaluation of explaining graph neural networks (GNNs), which is crucial to the credibility of post-hoc explainability in practical usage. Conventional evaluation metrics, and even explanation methods -- which mainly follow the paradigm of feeding the explanatory subgraph and measuring output difference -- always suffer from the notorious out-of-distribution (OOD) issue. In this work, we endeavor to confront the issue by introducing a novel evaluation metric, termed **O**OD-resistant **A**dversarial **R**obustness (OAR). Specifically, we draw inspiration from the notion of adversarial robustness and evaluate post-hoc explanation subgraphs by calculating their robustness under attack. On top of that, an elaborate OOD reweighting block is inserted into the pipeline to confine the evaluation process to the original data distribution. For applications involving large datasets, we further devise a **Sim**plified version of **OAR** (SimOAR), which achieves a significant improvement in computational efficiency at the cost of a small amount of performance. Extensive empirical studies validate the effectiveness of our OAR and SimOAR.
|
Evaluating Post-hoc Explanations for Graph Neural Networks via Robustness Analysis
|
[
"Junfeng Fang",
"Wei Liu",
"Yuan Gao",
"Zemin Liu",
"An Zhang",
"Xiang Wang",
"Xiangnan He"
] |
Conference
|
oral
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=eCgWNU2Imw
|
@inproceedings{
hu2023on,
title={On Sparse Modern Hopfield Model},
author={Jerry Yao-Chieh Hu and Donglin Yang and Dennis Wu and Chenwei Xu and Bo-Yu Chen and Han Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eCgWNU2Imw}
}
|
We introduce the sparse modern Hopfield model as a sparse extension of the modern Hopfield model.
Like its dense counterpart, the sparse modern Hopfield model is equipped with memory-retrieval dynamics whose one-step approximation corresponds to the sparse attention mechanism.
Theoretically, our key contribution is a principled derivation of a closed-form sparse Hopfield energy using the convex conjugate of the sparse entropic regularizer.
Building upon this, we derive the sparse memory retrieval dynamics from the sparse energy function and show its one-step approximation is equivalent to the sparse-structured attention.
Importantly, we provide a sparsity-dependent memory retrieval error bound which is provably tighter than its dense analog.
The conditions for the benefits of sparsity to arise are therefore identified and discussed.
In addition, we show that the sparse modern Hopfield model maintains the robust theoretical properties of its dense counterpart, including rapid fixed point convergence and exponential memory capacity.
Empirically, we use both synthetic and real-world datasets to demonstrate that the sparse Hopfield model outperforms its dense counterpart in many situations.
|
On Sparse Modern Hopfield Model
|
[
"Jerry Yao-Chieh Hu",
"Donglin Yang",
"Dennis Wu",
"Chenwei Xu",
"Bo-Yu Chen",
"Han Liu"
] |
Conference
|
poster
|
2309.12673
|
[
"https://github.com/magics-lab/sparsemodernhopfield"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
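For context on the sparse modern Hopfield entry above: the dense modern Hopfield update retrieves a pattern via a softmax over stored patterns, and the sparse variant described in the abstract replaces that softmax with a sparse map. The NumPy sketch below uses sparsemax as one such sparse map; the retrieval form, variable names, and the tiny synthetic test are our reading of the general construction, not the authors' implementation.

```python
# Hedged sketch: sparse-attention-style Hopfield retrieval with sparsemax.
import numpy as np

def sparsemax(z):
    """Sparsemax of a 1-D score vector (Martins & Astudillo, 2016):
    Euclidean projection of z onto the probability simplex."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, z.size + 1)
    support = 1 + k * z_sorted > cumsum
    k_z = k[support][-1]
    tau = (cumsum[support][-1] - 1.0) / k_z
    return np.maximum(z - tau, 0.0)

def sparse_hopfield_retrieve(patterns, query, beta=2.0, steps=3):
    """Iterate query <- Xi @ sparsemax(beta * Xi^T query), where Xi is the
    (dim x num_patterns) matrix of stored patterns."""
    q = query
    for _ in range(steps):
        q = patterns @ sparsemax(beta * (patterns.T @ q))
    return q

rng = np.random.default_rng(0)
Xi = rng.normal(size=(16, 5))                      # 5 stored patterns, dim 16
noisy = Xi[:, 0] + 0.1 * rng.normal(size=16)       # corrupted copy of pattern 0
print(np.allclose(sparse_hopfield_retrieve(Xi, noisy), Xi[:, 0], atol=0.5))
```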
null |
https://openreview.net/forum?id=eBXM62SqKY
|
@inproceedings{
vobecky2023popd,
title={{POP}-3D: Open-Vocabulary 3D Occupancy Prediction from Images},
author={Anton{\'\i}n Vobeck{\'y} and Oriane Sim{\'e}oni and David Hurych and Spyros Gidaris and Andrei Bursuc and Patrick Perez and Josef Sivic},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eBXM62SqKY}
}
|
We describe an approach to predict open-vocabulary 3D semantic voxel occupancy map from input 2D images with the objective of enabling 3D grounding, segmentation and retrieval of free-form language queries. This is a challenging problem because of the 2D-3D ambiguity and the open-vocabulary nature of the target tasks, where obtaining annotated training data in 3D is difficult. The contributions of this work are three-fold.
First, we design a new model architecture for open-vocabulary 3D semantic occupancy prediction. The architecture consists of a 2D-3D encoder together with occupancy prediction and 3D-language heads. The output is a dense voxel map of 3D grounded language embeddings enabling a range of open-vocabulary tasks.
Second, we develop a tri-modal self-supervised learning algorithm that leverages three modalities: (i) images, (ii) language and (iii) LiDAR point clouds, and enables training the proposed architecture using a strong pre-trained vision-language model without the need for any 3D manual language annotations.
Finally, we demonstrate quantitatively the strengths of the proposed model on several open-vocabulary tasks:
Zero-shot 3D semantic segmentation using existing datasets; 3D grounding and retrieval of free-form language queries, using a small dataset that we propose as an extension of nuScenes. You can find the project page here https://vobecant.github.io/POP3D.
|
POP-3D: Open-Vocabulary 3D Occupancy Prediction from Images
|
[
"Antonín Vobecký",
"Oriane Siméoni",
"David Hurych",
"Spyros Gidaris",
"Andrei Bursuc",
"Patrick Perez",
"Josef Sivic"
] |
Conference
|
poster
|
2401.09413
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=e8i7OaPj0q
|
@inproceedings{
bu2023automatic,
title={Automatic Clipping: Differentially Private Deep Learning Made Easier and Stronger},
author={Zhiqi Bu and Yu-Xiang Wang and Sheng Zha and George Karypis},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=e8i7OaPj0q}
}
|
Per-example gradient clipping is a key algorithmic step that enables practical differentially private (DP) training for deep learning models. The choice of clipping threshold $R$, however, is vital for achieving high accuracy under DP. We propose an easy-to-use replacement, called automatic clipping, that eliminates the need to tune $R$ for any DP optimizers, including DP-SGD, DP-Adam, DP-LAMB and many others.
The automatic variants are as private and computationally efficient as existing DP optimizers, but require no DP-specific hyperparameters and thus make DP training as amenable as the standard non-private training. We give a rigorous convergence analysis of automatic DP-SGD in the non-convex setting, showing that it can enjoy an asymptotic convergence rate that matches the standard SGD, under a symmetric gradient noise assumption of the per-sample gradients (commonly used in the non-DP literature). We demonstrate on various language and vision tasks that automatic clipping outperforms or matches the state-of-the-art, and can be easily employed with minimal changes to existing codebases.
|
Automatic Clipping: Differentially Private Deep Learning Made Easier and Stronger
|
[
"Zhiqi Bu",
"Yu-Xiang Wang",
"Sheng Zha",
"George Karypis"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
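As a pointer for the automatic clipping entry above: the abstract contrasts threshold-based per-example clipping with a threshold-free replacement. The PyTorch sketch below shows one way to read that contrast, with standard clipping scaling each per-example gradient to norm at most $R$ and the automatic variant normalizing it instead; the stabilizer gamma, the noise scale, and the exact formulas are our own illustrative assumptions, not a verbatim reproduction of the authors' code.

```python
# Hedged sketch: standard vs. "automatic" (normalized) per-example clipping.
import torch

def standard_clip(per_sample_grads: torch.Tensor, R: float) -> torch.Tensor:
    """Clip each row g_i to norm at most R: g_i * min(1, R / ||g_i||)."""
    norms = per_sample_grads.norm(dim=1, keepdim=True)
    scale = torch.clamp(R / (norms + 1e-12), max=1.0)
    return per_sample_grads * scale

def automatic_clip(per_sample_grads: torch.Tensor, gamma: float = 0.01) -> torch.Tensor:
    """Normalize each row instead of clipping: g_i / (||g_i|| + gamma).
    No clipping threshold R needs to be tuned."""
    norms = per_sample_grads.norm(dim=1, keepdim=True)
    return per_sample_grads / (norms + gamma)

grads = torch.randn(8, 10)                       # 8 per-example gradients, dim 10
clipped = automatic_clip(grads)
noisy_update = clipped.mean(0) + 0.5 * torch.randn(10)  # illustrative DP noise
```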
null |
https://openreview.net/forum?id=e8RZwixcE4
|
@inproceedings{
egami2023using,
title={Using Imperfect Surrogates for Downstream Inference: Design-based Supervised Learning for Social Science Applications of Large Language Models},
author={Naoki Egami and Musashi Hinck and Brandon M. Stewart and Hanying Wei},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=e8RZwixcE4}
}
|
In computational social science (CSS), researchers analyze documents to explain social and political phenomena. In most scenarios, CSS researchers first obtain labels for documents and then explain labels using interpretable regression analyses in the second step. One increasingly common way to annotate documents cheaply at scale is through large language models (LLMs). However, like other scalable ways of producing annotations, such surrogate labels are often imperfect and biased. We present a new algorithm for using imperfect annotation surrogates for downstream statistical analyses while guaranteeing statistical properties—like asymptotic unbiasedness and proper uncertainty quantification—which are fundamental to CSS research. We show that direct use of surrogate labels in downstream statistical analyses leads to substantial bias and invalid confidence intervals, even with high surrogate accuracy of 80-90\%. To address this, we build on debiased machine learning to propose the design-based supervised learning (DSL) estimator. DSL employs a doubly-robust procedure to combine surrogate labels with a smaller number of high-quality, gold-standard labels. Our approach guarantees valid inference for downstream statistical analyses, even when surrogates are arbitrarily biased and without requiring stringent assumptions, by controlling the probability of sampling documents for gold-standard labeling. Both our theoretical analysis and experimental results show that DSL provides valid statistical inference while achieving root mean squared errors comparable to existing alternatives that focus only on prediction without inferential guarantees.
|
Using Imperfect Surrogates for Downstream Inference: Design-based Supervised Learning for Social Science Applications of Large Language Models
|
[
"Naoki Egami",
"Musashi Hinck",
"Brandon M. Stewart",
"Hanying Wei"
] |
Conference
|
poster
|
2306.04746
|
[
""
] |
https://huggingface.co/papers/2306.04746
| 1 | 0 | 0 | 4 | 1 |
[] |
[] |
[] |
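The DSL entry above combines imperfect surrogate labels with a smaller set of gold-standard labels through a doubly-robust, design-based correction. The LaTeX block below sketches the generic design-based bias correction that such procedures build on, in our own notation (surrogate $\hat{Y}_i$, gold label $Y_i$ observed when $R_i = 1$, known sampling probability $\pi_i$); it illustrates the general idea rather than the paper's exact estimator.

```latex
% Generic design-based bias correction (illustrative sketch, not the paper's
% estimator): use the surrogate everywhere, then correct it with gold labels
% on the randomly sampled subset R_i = 1 (sampling probability \pi_i known).
\tilde{Y}_i \;=\; \hat{Y}_i \;+\; \frac{R_i}{\pi_i}\,\bigl(Y_i - \hat{Y}_i\bigr),
\qquad
\mathbb{E}\bigl[\tilde{Y}_i \,\big|\, Y_i, \hat{Y}_i\bigr] \;=\; Y_i .
```

Because the sampling probability is controlled by design, the corrected pseudo-outcome is unbiased for the gold label regardless of how biased the surrogate is, which is the intuition behind the validity guarantees described in the abstract.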
null |
https://openreview.net/forum?id=e7MK5Vq44Q
|
@inproceedings{
atanackovic2023dyngfn,
title={Dyn{GFN}: Towards Bayesian Inference of Gene Regulatory Networks with {GF}lowNets},
author={Lazar Atanackovic and Alexander Tong and BO WANG and Leo J Lee and Yoshua Bengio and Jason Hartford},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=e7MK5Vq44Q}
}
|
One of the grand challenges of cell biology is inferring the gene regulatory network (GRN) which describes interactions between genes and their products that control gene expression and cellular function. We can treat this as a causal discovery problem but with two non-standard challenges: (1) regulatory networks are inherently cyclic so we should not model a GRN as a directed acyclic graph (DAG), and (2) observations have significant measurement noise so for typical sample sizes, there will always be a large equivalence class of graphs that are likely given the data, and we want methods that capture this uncertainty. Existing methods either focus on challenge (1), identifying cyclic structure from dynamics, or on challenge (2) learning complex Bayesian posteriors over directed acyclic graphs, but not both. In this paper we leverage the fact that it is possible to estimate the ``velocity'' of the expression of a gene with RNA velocity techniques to develop an approach that addresses both challenges. Because we have access to velocity information, we can treat the Bayesian structure learning problem as a problem of sparse identification of a dynamical system, capturing cyclic feedback loops through time. We leverage Generative Flow Networks (GFlowNets) to estimate the posterior distribution over the combinatorial space of possible sparse dependencies. Our results indicate that our method learns posteriors that better encapsulate the distributions of cyclic structures compared to counterpart state-of-the-art Bayesian structure learning approaches.
|
DynGFN: Towards Bayesian Inference of Gene Regulatory Networks with GFlowNets
|
[
"Lazar Atanackovic",
"Alexander Tong",
"BO WANG",
"Leo J Lee",
"Yoshua Bengio",
"Jason Hartford"
] |
Conference
|
poster
|
2302.04178
|
[
"https://github.com/lazaratan/dyn-gfn"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=e5srDjF9l7
|
@inproceedings{
wang2023accessing,
title={Accessing Higher Dimensions for Unsupervised Word Translation},
author={Sida Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=e5srDjF9l7}
}
|
The striking ability of unsupervised word translation has been demonstrated recently with the help of low-dimensional word vectors / pretraining, which are used by all successful methods and assumed to be necessary. We test and challenge this assumption by developing a method that can also make use of high-dimensional signal. Freed from the limits of low dimensions, we show that relying on low-dimensional vectors and their incidental properties misses out on better denoising methods and signals in high dimensions, thus stunting the potential of the data. Our results show that unsupervised translation can be achieved more easily and robustly than previously thought -- less than 80MB and minutes of CPU time are required to achieve over 50\% accuracy for English to Finnish, Hungarian, and Chinese translations when trained in the same domain; even under domain mismatch, the method still works fully unsupervised on English NewsCrawl to Chinese Wikipedia and English Europarl to Spanish Wikipedia, among others. These results challenge prevailing assumptions on the necessity and superiority of low-dimensional vectors and show that the higher-dimensional signal can be used rather than thrown away.
|
Accessing Higher Dimensions for Unsupervised Word Translation
|
[
"Sida Wang"
] |
Conference
|
poster
|
2305.14200
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=e4XidX6AHd
|
@inproceedings{
kleinman2023gacskorner,
title={Gacs-Korner Common Information Variational Autoencoder},
author={Michael Kleinman and Alessandro Achille and Stefano Soatto and Jonathan Kao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=e4XidX6AHd}
}
|
We propose a notion of common information that allows one to quantify and separate the information that is shared between two random variables from the information that is unique to each. Our notion of common information is defined by an optimization problem over a family of functions and recovers the G\'acs-K\"orner common information as a special case. Importantly, our notion can be approximated empirically using samples from the underlying data distribution. We then provide a method to partition and quantify the common and unique information using a simple modification of a traditional variational auto-encoder. Empirically, we demonstrate that our formulation allows us to learn semantically meaningful common and unique factors of variation even on high-dimensional data such as images and videos. Moreover, on datasets where ground-truth latent factors are known, we show that we can accurately quantify the common information between the random variables.
|
Gacs-Korner Common Information Variational Autoencoder
|
[
"Michael Kleinman",
"Alessandro Achille",
"Stefano Soatto",
"Jonathan Kao"
] |
Conference
|
poster
|
2205.12239
|
[
"https://github.com/mjkleinman/common-vae"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=e2wtjx0Yqu
|
@inproceedings{
jin2023cladder,
title={{CL}adder: A Benchmark to Assess Causal Reasoning Capabilities of Language Models},
author={Zhijing Jin and Yuen Chen and Felix Leeb and Luigi Gresele and Ojasv Kamal and Zhiheng LYU and Kevin Blin and Fernando Gonzalez Adauto and Max Kleiman-Weiner and Mrinmaya Sachan and Bernhard Sch{\"o}lkopf},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=e2wtjx0Yqu}
}
|
The ability to perform causal reasoning is widely considered a core feature of intelligence. In this work, we investigate whether large language models (LLMs) can coherently reason about causality. Much of the existing work in natural language processing (NLP) focuses on evaluating _commonsense_ causal reasoning in LLMs, thus failing to assess whether a model can perform causal inference in accordance with a set of well-defined _formal rules_. To address this, we propose a new NLP task, _causal inference in natural language_, inspired by the _"causal inference engine"_ postulated by Judea Pearl et al. We compose a large dataset, CLadder, with 10K samples: based on a collection of causal graphs and queries (associational, interventional, and counterfactual), we obtain symbolic questions and ground-truth answers, through an oracle causal inference engine. These are then translated into natural language. We evaluate multiple LLMs on our dataset, and we introduce and evaluate a bespoke chain-of-thought prompting strategy, CausalCoT. We show that our task is highly challenging for LLMs, and we conduct an in-depth analysis to gain deeper insight into the causal reasoning abilities of LLMs. Our data is open-sourced at https://huggingface.co/datasets/causalNLP/cladder, and our code can be found at https://github.com/causalNLP/cladder.
|
CLadder: Assessing Causal Reasoning in Language Models
|
[
"Zhijing Jin",
"Yuen Chen",
"Felix Leeb",
"Luigi Gresele",
"Ojasv Kamal",
"Zhiheng LYU",
"Kevin Blin",
"Fernando Gonzalez Adauto",
"Max Kleiman-Weiner",
"Mrinmaya Sachan",
"Bernhard Schölkopf"
] |
Conference
|
poster
|
2312.04350
|
[
"https://github.com/causalnlp/cladder"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
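The CLadder abstract above points to a public Hugging Face dataset. A minimal way to inspect it with the `datasets` library is sketched below; the dataset id is taken from the abstract, but the available configs, splits, and field names are assumptions that may differ from the actual repository.

```python
# Hedged sketch: peek at the CLadder benchmark via the Hugging Face datasets
# library. Config/split names are not verified against the repo.
from datasets import load_dataset

ds = load_dataset("causalNLP/cladder")        # dataset id from the abstract
print(ds)                                     # available splits and their sizes
first_split = list(ds.keys())[0]
example = next(iter(ds[first_split]))
print(example)                                # one natural-language causal query
```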
null |
https://openreview.net/forum?id=e2aCgjtjMR
|
@inproceedings{
bharti2023estimating,
title={Estimating and Controlling for Equalized Odds via Sensitive Attribute Predictors},
author={Beepul Bharti and Paul Yi and Jeremias Sulam},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=e2aCgjtjMR}
}
|
As the use of machine learning models in real world high-stakes decision settings continues to grow, it is highly important that we are able to audit and control for any potential fairness violations these models may exhibit towards certain groups. To do so, one naturally requires access to sensitive attributes, such as demographics, biological sex, or other potentially sensitive features that determine group membership. Unfortunately, in many settings, this information is often unavailable. In this work we study the well known equalized odds (EOD) definition of fairness. In a setting without sensitive attributes, we first provide tight and computable upper bounds for the EOD violation of a predictor. These bounds precisely reflect the worst possible EOD violation. Second, we demonstrate how one can provably control the worst-case EOD by a new post-processing correction method. Our results characterize when directly controlling for EOD with respect to the predicted sensitive attributes is -- and when is not -- optimal when it comes to controlling worst-case EOD. Our results hold under assumptions that are milder than previous works, and we illustrate these results with experiments on synthetic and real datasets.
|
Estimating and Controlling for Equalized Odds via Sensitive Attribute Predictors
|
[
"Beepul Bharti",
"Paul Yi",
"Jeremias Sulam"
] |
Conference
|
poster
|
2207.12497
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
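As background for the equalized-odds entry above: when the sensitive attribute is observed, the EOD violation of a binary predictor is the worst-case gap in group-conditional true-positive and false-positive rates. The small NumPy sketch below computes that observed-attribute baseline (the paper itself addresses the harder case where the attribute must be predicted); the variable names and synthetic data are purely illustrative.

```python
# Hedged sketch: equalized-odds violation with an observed sensitive attribute.
import numpy as np

def eod_violation(y_true, y_pred, group):
    """Worst-case gap between the two groups in TPR (y=1) and FPR (y=0)."""
    gaps = []
    for y in (0, 1):                     # condition on the true label
        rates = []
        for g in (0, 1):
            mask = (y_true == y) & (group == g)
            rates.append(y_pred[mask].mean())
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = (rng.random(1000) < 0.5 + 0.1 * group).astype(int)  # biased predictor
print(eod_violation(y_true, y_pred, group))
```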
null |
https://openreview.net/forum?id=e2MCL6hObn
|
@inproceedings{
gulrajani2023likelihoodbased,
title={Likelihood-Based Diffusion Language Models},
author={Ishaan Gulrajani and Tatsunori Hashimoto},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=e2MCL6hObn}
}
|
Despite a growing interest in diffusion-based language models, existing work has not shown that these models can attain nontrivial likelihoods on standard language modeling benchmarks. In this work, we take the first steps towards closing the likelihood gap between autoregressive and diffusion-based language models, with the goal of building and releasing a diffusion model which outperforms a small but widely-known autoregressive model. We pursue this goal through algorithmic improvements, scaling laws, and increased compute. On the algorithmic front, we introduce several methodological improvements for the maximum-likelihood training of diffusion language models. We then study scaling laws for our diffusion models and find compute-optimal training regimes which differ substantially from autoregressive models. Using our methods and scaling analysis, we train and release Plaid 1B, a large diffusion language model which outperforms GPT-2 124M in likelihood on benchmark datasets and generates fluent samples in unconditional and zero-shot control settings.
|
Likelihood-Based Diffusion Language Models
|
[
"Ishaan Gulrajani",
"Tatsunori Hashimoto"
] |
Conference
|
poster
|
2305.18619
|
[
"https://github.com/igul222/plaid"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=e1oe8F2tjV
|
@inproceedings{
tan2023multinomial,
title={Multinomial Logistic Regression: Asymptotic Normality on Null Covariates in High-Dimensions},
author={Kai Tan and Pierre C Bellec},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=e1oe8F2tjV}
}
|
This paper investigates the asymptotic distribution of the maximum-likelihood estimate (MLE) in multinomial logistic models in the high-dimensional regime where dimension and sample size are of the same order. While classical large-sample theory provides asymptotic normality of the MLE under certain conditions, such classical results are expected to fail in high-dimensions as documented for the binary logistic case in the seminal work of Sur and Candès [2019]. We address this issue in classification problems with 3 or more classes, by developing asymptotic normality and asymptotic chi-square results for the multinomial logistic MLE (also known as cross-entropy minimizer) on null covariates. Our theory leads to a new methodology to test the significance of a given feature. Extensive simulation studies on synthetic data corroborate these asymptotic results and confirm the validity of proposed p-values for testing the significance of a given feature.
|
Multinomial Logistic Regression: Asymptotic Normality on Null Covariates in High-Dimensions
|
[
"Kai Tan",
"Pierre C Bellec"
] |
Conference
|
poster
|
2305.17825
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=e1l4ZYprQH
|
@inproceedings{
qinsi2023mathnas,
title={Math{NAS}: If Blocks Have a Role in Mathematical Architecture Design},
author={Wang Qinsi and Jinghan Ke and Zhi Liang and Sihai Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=e1l4ZYprQH}
}
|
Neural Architecture Search (NAS) has emerged as a favoured method for unearthing effective neural architectures.
Recent development of large models has intensified the demand for faster search speeds and more accurate search results.
However, designing large models by NAS is challenging due to the dramatic increase of the search space and the associated huge performance evaluation cost.
Consider a typical modular search space widely used in NAS, in which a neural architecture consists of $m$ block nodes and a block node has $n$ alternative blocks.
Facing the space containing $n^m$ candidate networks, existing NAS methods attempt to find the best one by searching and evaluating candidate networks directly.
Different from the general strategy that takes architecture search as a whole problem, we propose a novel divide-and-conquer strategy by making use of the modular nature of the search space.
Here, we introduce MathNAS, a general NAS framework based on mathematical programming.
In MathNAS, the performances of all possible building blocks in the search space are calculated first, and then the performance of a network is directly predicted based on the performances of its building blocks.
Although estimating block performances involves network training, just as network performance evaluation does in existing NAS methods, predicting network performance is completely training-free and thus extremely fast. In contrast to the $n^m$ candidate networks that existing NAS methods must evaluate, which requires training and imposes a formidable computational burden, there are only $m \times n$ possible blocks to handle in MathNAS.
Therefore, our approach effectively reduces the complexity of network performance evaluation.
The superiority of MathNAS is validated on multiple large-scale CV and NLP benchmark datasets.
Notably on ImageNet-1k, MathNAS achieves 82.5\% top-1 accuracy, 1.2\% and 0.96\% higher than Swin-T and LeViT-256, respectively.
In addition, when deployed on mobile device, MathNAS achieves real-time search and dynamic network switching within 1s (0.4s on TX2 GPU), surpassing baseline dynamic networks in on-device performance.
|
MathNAS: If Blocks Have a Role in Mathematical Architecture Design
|
[
"Wang Qinsi",
"Jinghan Ke",
"Zhi Liang",
"Sihai Zhang"
] |
Conference
|
poster
|
2311.04943
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
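To make the divide-and-conquer idea in the MathNAS entry above concrete, the toy Python sketch below scores every candidate block once and then predicts a whole network's quality as the sum of its blocks' scores, picking the best combination by enumeration. The additive scoring model, the brute-force search (the paper uses mathematical programming), and the numbers are all illustrative assumptions.

```python
# Toy sketch: predict network performance from per-block scores, then search.
import itertools

m, n = 4, 3                                   # m block nodes, n choices per node
block_score = [[0.1 * (i + 1) * (j + 1) for j in range(n)] for i in range(m)]

def predicted_score(choice):
    """Training-free prediction: sum of the chosen blocks' scores."""
    return sum(block_score[i][j] for i, j in enumerate(choice))

best = max(itertools.product(range(n), repeat=m), key=predicted_score)
print(best, predicted_score(best))            # n**m candidates, only m*n measurements
```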
null |
https://openreview.net/forum?id=e1WgjvFGWp
|
@inproceedings{
dinh2023large,
title={Large Language Models of Code Fail at Completing Code with Potential Bugs},
author={Tuan Dinh and Jinman Zhao and Samson Tan and Renato Negrinho and Leonard Lausen and Sheng Zha and George Karypis},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=e1WgjvFGWp}
}
|
Large language models of code (Code-LLMs) have recently brought tremendous advances to code completion, a fundamental feature of programming assistance and code intelligence. However, most existing works ignore the possible presence of bugs in the code context for generation, which are inevitable in software development. Therefore, we introduce and study the buggy-code completion problem, inspired by the realistic scenario of real-time code suggestion where the code context contains potential bugs – anti-patterns that can become bugs in the completed program. To systematically study the task, we introduce two datasets: one with synthetic bugs derived from semantics-altering operator changes (buggy-HumanEval) and one with realistic bugs derived from user submissions to coding problems (buggy-FixEval). We find that the presence of potential bugs significantly degrades the generation performance of the high-performing Code-LLMs. For instance, the passing rates of CODEGEN-2B-MONO on test cases of buggy-HumanEval drop more than 50% given a single potential bug in the context. Finally, we investigate several post-hoc methods for mitigating the adverse effect of potential bugs and find that there remains a large gap in post-mitigation performance.
|
Large Language Models of Code Fail at Completing Code with Potential Bugs
|
[
"Tuan Dinh",
"Jinman Zhao",
"Samson Tan",
"Renato Negrinho",
"Leonard Lausen",
"Sheng Zha",
"George Karypis"
] |
Conference
|
poster
|
2306.03438
|
[
"https://github.com/amazon-science/buggy-code-completion"
] |
https://huggingface.co/papers/2306.03438
| 0 | 2 | 0 | 7 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=e0tt2G8hqf
|
@inproceedings{
wang2023aligning,
title={Aligning Gradient and Hessian for Neural Signed Distance Function},
author={Ruian Wang and Zixiong Wang and Yunxiao Zhang and Shuangmin Chen and Shiqing Xin and Changhe Tu and Wenping Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=e0tt2G8hqf}
}
|
The Signed Distance Function (SDF), as an implicit surface representation, provides a crucial method for reconstructing a watertight surface from unorganized point clouds. The SDF has a fundamental relationship with the principles of surface vector calculus. Given a smooth surface, there exists a thin-shell space in which the SDF is differentiable everywhere such that the gradient of the SDF is an eigenvector of its Hessian matrix, with a corresponding eigenvalue of zero. In this paper, we introduce a method to directly learn the SDF from point clouds in the absence of normals. Our motivation is grounded in a fundamental observation: aligning the gradient and the Hessian of the SDF provides a more efficient mechanism to govern gradient directions. This, in turn, ensures that gradient changes more accurately reflect the true underlying variations in shape. Extensive experimental results demonstrate its ability to accurately recover the underlying shape while effectively suppressing the presence of ghost geometry.
|
Aligning Gradient and Hessian for Neural Signed Distance Function
|
[
"Ruian Wang",
"Zixiong Wang",
"Yunxiao Zhang",
"Shuangmin Chen",
"Shiqing Xin",
"Changhe Tu",
"Wenping Wang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
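Regarding the neural SDF entry above: the stated property is that, for a true signed distance function, the gradient is an eigenvector of the Hessian with eigenvalue zero, i.e. $H(x)\nabla f(x) \approx 0$. The PyTorch sketch below penalizes that Hessian-gradient product via a double-backward Hessian-vector product; the particular penalty form and the toy network are our own illustrative choices, not the paper's exact loss.

```python
# Hedged sketch: gradient-Hessian alignment penalty for a neural SDF.
import torch

def hessian_grad_alignment(f, x):
    """Penalize ||H(x) @ grad f(x)|| using a Hessian-vector product."""
    x = x.clone().requires_grad_(True)
    y = f(x).sum()                                            # per-point SDF values
    grad = torch.autograd.grad(y, x, create_graph=True)[0]    # grad f(x), shape (N, 3)
    v = grad.detach()
    hvp = torch.autograd.grad((grad * v).sum(), x, create_graph=True)[0]  # H(x) @ v
    return hvp.norm(dim=-1).mean()

# toy usage with a smooth MLP standing in for the SDF network
sdf = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Softplus(beta=100),
                          torch.nn.Linear(64, 1))
pts = torch.randn(256, 3)
penalty = hessian_grad_alignment(lambda p: sdf(p).squeeze(-1), pts)
penalty.backward()   # gradients flow back into the SDF network's parameters
```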
null |
https://openreview.net/forum?id=e0pRF9tOtm
|
@inproceedings{
liu2023private,
title={Private (Stochastic) Non-Convex Optimization Revisited: Second-Order Stationary Points and Excess Risks},
author={Daogao Liu and Arun Ganesh and Sewoong Oh and Abhradeep Guha Thakurta},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=e0pRF9tOtm}
}
|
We reconsider the challenge of non-convex optimization under a differential privacy constraint. Building upon the previous variance-reduced algorithm SpiderBoost, we propose a novel framework that employs two types of gradient oracles: one that estimates the gradient at a single point and a more cost-effective option that calculates the gradient difference between two points. Our framework can ensure continuous accuracy of gradient estimations and subsequently enhances the rates of identifying second-order stationary points.
Additionally, we consider a more challenging task by attempting to locate the global minima of a non-convex objective via the exponential mechanism without almost any assumptions. Our preliminary results suggest that the regularized exponential mechanism can effectively emulate previous empirical and population risk bounds, negating the need for smoothness assumptions for algorithms with polynomial running time. Furthermore, with running time factors excluded, the exponential mechanism demonstrates promising population risk bound performance, and we provide a nearly matching lower bound.
|
Private (Stochastic) Non-Convex Optimization Revisited: Second-Order Stationary Points and Excess Risks
|
[
"Daogao Liu",
"Arun Ganesh",
"Sewoong Oh",
"Abhradeep Guha Thakurta"
] |
Conference
|
spotlight
|
2302.09699
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dzqKAM2sKa
|
@inproceedings{
cho2023hypernetworkbased,
title={Hypernetwork-based Meta-Learning for Low-Rank Physics-Informed Neural Networks},
author={Woojin Cho and Kookjin Lee and Donsub Rim and Noseong Park},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dzqKAM2sKa}
}
|
In various engineering and applied science applications, repetitive numerical simulations of partial differential equations (PDEs) for varying input parameters are often required (e.g., aircraft shape optimization over many design parameters), and solvers must execute rapidly. In this study, we suggest a path that potentially opens up the possibility for physics-informed neural networks (PINNs), emerging deep-learning-based solvers, to be considered as one such solver. Although PINNs have pioneered a proper integration of deep learning and scientific computing, they require repetitive, time-consuming training of neural networks, which is not suitable for many-query scenarios. To address this issue, we propose a lightweight low-rank PINN containing only hundreds of model parameters and an associated hypernetwork-based meta-learning algorithm, which allows efficient approximation of PDE solutions for varying ranges of PDE input parameters. Moreover, we show that the proposed method is effective in overcoming a challenging issue known as the "failure modes" of PINNs.
|
Hypernetwork-based Meta-Learning for Low-Rank Physics-Informed Neural Networks
|
[
"Woojin Cho",
"Kookjin Lee",
"Donsub Rim",
"Noseong Park"
] |
Conference
|
spotlight
|
2310.09528
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dz5X8hnfJc
|
@inproceedings{
lu2023characterizing,
title={Characterizing Out-of-Distribution Error via Optimal Transport},
author={Yuzhe Lu and Yilong Qin and Runtian Zhai and Andrew Shen and Ketong Chen and Zhenlin Wang and Soheil Kolouri and Simon Stepputtis and Joseph Campbell and Katia P. Sycara},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dz5X8hnfJc}
}
|
Out-of-distribution (OOD) data poses serious challenges in deployed machine learning models,
so methods of predicting a model's performance on OOD data without labels are important for machine learning safety.
While a number of methods have been proposed by prior work, they often underestimate the actual error, sometimes by a large margin, which greatly impacts their applicability to real tasks. In this work, we identify *pseudo-label shift*, or the difference between the predicted and true OOD label distributions, as a key indicator of this underestimation. Based on this observation, we introduce a novel method for estimating model performance by leveraging optimal transport theory, Confidence Optimal Transport (COT), and show that it provably provides more robust error estimates in the presence of pseudo-label shift. Additionally, we introduce an empirically-motivated variant of COT, Confidence Optimal Transport with Thresholding (COTT), which applies thresholding to the individual transport costs and further improves the accuracy of COT's error estimates. We evaluate COT and COTT on a variety of standard benchmarks that induce various types of distribution shift -- synthetic, novel subpopulation, and natural -- and show that our approaches significantly outperform existing state-of-the-art methods with up to 3x lower prediction errors.
|
Characterizing Out-of-Distribution Error via Optimal Transport
|
[
"Yuzhe Lu",
"Yilong Qin",
"Runtian Zhai",
"Andrew Shen",
"Ketong Chen",
"Zhenlin Wang",
"Soheil Kolouri",
"Simon Stepputtis",
"Joseph Campbell",
"Katia P. Sycara"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dybrsuNAB9
|
@inproceedings{
zhang2023gmsf,
title={{GMSF}: Global Matching Scene Flow},
author={Yushan Zhang and Johan Edstedt and Bastian Wandt and Per-Erik Forssen and Maria Magnusson and Michael Felsberg},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dybrsuNAB9}
}
|
We tackle the task of scene flow estimation from point clouds. Given a source and a target point cloud, the objective is to estimate a translation from each point in the source point cloud to the target, resulting in a 3D motion vector field. Previous dominant scene flow estimation methods require complicated coarse-to-fine or recurrent architectures as a multi-stage refinement. In contrast, we propose a significantly simpler single-scale one-shot global matching to address the problem. Our key finding is that reliable feature similarity between point pairs is essential and sufficient to estimate accurate scene flow. We thus propose to decompose the feature extraction step via a hybrid local-global-cross transformer architecture which is crucial to accurate and robust feature representations. Extensive experiments show that the proposed Global Matching Scene Flow (GMSF) sets a new state-of-the-art on multiple scene flow estimation benchmarks. On FlyingThings3D, with the presence of occlusion points, GMSF reduces the outlier percentage from the previous best performance of 27.4% to 5.6%. On KITTI Scene Flow, without any fine-tuning, our proposed method shows state-of-the-art performance. On the Waymo-Open dataset, the proposed method outperforms previous methods by a large margin. The code is available at https://github.com/ZhangYushan3/GMSF.
|
GMSF: Global Matching Scene Flow
|
[
"Yushan Zhang",
"Johan Edstedt",
"Bastian Wandt",
"Per-Erik Forssen",
"Maria Magnusson",
"Michael Felsberg"
] |
Conference
|
poster
|
2305.17432
|
[
"https://github.com/zhangyushan3/gmsf"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dyXNh5HLq3
|
@inproceedings{
ajay2023compositional,
title={Compositional Foundation Models for Hierarchical Planning},
author={Anurag Ajay and Seungwook Han and Yilun Du and Shuang Li and Abhi Gupta and Tommi S. Jaakkola and Joshua B. Tenenbaum and Leslie Pack Kaelbling and Akash Srivastava and Pulkit Agrawal},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dyXNh5HLq3}
}
|
To make effective decisions in novel environments with long-horizon goals, it is crucial to engage in hierarchical reasoning across spatial and temporal scales. This entails planning abstract subgoal sequences, visually reasoning about the underlying plans, and executing actions in accordance with the devised plan through visual-motor control. We propose Compositional Foundation Models for Hierarchical Planning (HiP), a foundation model which leverages multiple expert foundation models, each trained individually on language, vision, and action data, jointly to solve long-horizon tasks. We use a large language model to construct symbolic plans that are grounded in the environment through a large video diffusion model. Generated video plans are then grounded to visual-motor control through an inverse dynamics model that infers actions from generated videos. To enable effective reasoning within this hierarchy, we enforce consistency between the models via iterative refinement. We illustrate the efficacy and adaptability of our approach in three different long-horizon table-top manipulation tasks.
|
Compositional Foundation Models for Hierarchical Planning
|
[
"Anurag Ajay",
"Seungwook Han",
"Yilun Du",
"Shuang Li",
"Abhi Gupta",
"Tommi S. Jaakkola",
"Joshua B. Tenenbaum",
"Leslie Pack Kaelbling",
"Akash Srivastava",
"Pulkit Agrawal"
] |
Conference
|
poster
|
2309.08587
|
[
""
] |
https://huggingface.co/papers/2309.08587
| 3 | 9 | 1 | 10 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=dxVN2fZjx6
|
@inproceedings{
howell2023equivariant,
title={Equivariant Single View Pose Prediction Via Induced and Restriction Representations},
author={Owen Lewis Howell and David Klee and Ondrej Biza and Linfeng Zhao and Robin Walters},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dxVN2fZjx6}
}
|
Learning about the three-dimensional world from two-dimensional images is a fundamental problem in computer vision. An ideal neural network architecture for such tasks would leverage the fact that objects can be rotated and translated in three dimensions to make predictions about novel images. However, imposing $SO(3)$-equivariance on two-dimensional inputs is difficult because the group of three-dimensional rotations does not have a natural action on the two-dimensional plane. Specifically, it is possible that an element of $SO(3)$ will rotate an image out of plane. We show that an algorithm that learns a three-dimensional representation of the world from two dimensional images must satisfy certain consistency properties which we formulate as $SO(2)$-equivariance constraints. We use the induced representation of $SO(2)$ on $SO(3)$ to construct and classify architectures that have two-dimensional inputs and
which satisfy these consistency constraints. We prove that any architecture which respects said consistency constraints can be realized as an instance of our construction. We show that three previously proposed neural architectures for 3D pose prediction are special cases of our construction. We propose a new algorithm that is a learnable generalization of previously considered methods. We test our architecture on three pose prediction tasks and achieve SOTA results on both the PASCAL3D+ and SYMSOL pose estimation tasks.
|
Equivariant Single View Pose Prediction Via Induced and Restriction Representations
|
[
"Owen Lewis Howell",
"David Klee",
"Ondrej Biza",
"Linfeng Zhao",
"Robin Walters"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dxPcdEeQk9
|
@inproceedings{
duan2023fewshot,
title={Few-shot Generation via Recalling Brain-Inspired Episodic-Semantic Memory},
author={Zhibin Duan and Lv Zhiyi and Chaojie Wang and Bo Chen and Bo An and Mingyuan Zhou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dxPcdEeQk9}
}
|
Aimed at adapting a generative model to a novel generation task with only a few given data samples, the capability of few-shot generation is crucial for many real-world applications with limited data, \emph{e.g.}, artistic domains.
Instead of training from scratch, recent works tend to leverage the prior knowledge stored in previous datasets, which is quite similar to the memory mechanism of human intelligence, but few of these works directly imitate the memory-recall mechanism that humans make good use of in accomplishing creative tasks, \emph{e.g.}, painting and writing.
Inspired by the memory mechanism of the human brain, in this work, we carefully design a variational structured memory module (VSM), which can simultaneously store both episodic and semantic memories to assist existing generative models in efficiently recalling these memories during sample generation.
Meanwhile, we introduce a bionic memory updating strategy for the conversion between episodic and semantic memories, which can also model the uncertainty during conversion.
Then, we combine the developed VSM with various generative models under the Bayesian framework, and evaluate these memory-augmented generative models with few-shot generation tasks, demonstrating the effectiveness of our methods.
|
Few-shot Generation via Recalling Brain-Inspired Episodic-Semantic Memory
|
[
"Zhibin Duan",
"Lv Zhiyi",
"Chaojie Wang",
"Bo Chen",
"Bo An",
"Mingyuan Zhou"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dwfHbm8g66
|
@inproceedings{
teed2023deep,
title={Deep Patch Visual Odometry},
author={Zachary Teed and Lahav Lipson and Jia Deng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dwfHbm8g66}
}
|
We propose Deep Patch Visual Odometry (DPVO), a new deep learning system for monocular Visual Odometry (VO). DPVO uses a novel recurrent network architecture designed for tracking image patches across time. Recent approaches to VO have significantly improved the state-of-the-art accuracy by using deep networks to predict dense flow between video frames. However, using dense flow incurs a large computational cost, making these previous methods impractical for many use cases. Despite this, it has been assumed that dense flow is important as it provides additional redundancy against incorrect matches. DPVO disproves this assumption, showing that it is possible to get the best accuracy and efficiency by exploiting the advantages of sparse patch-based matching over dense flow. DPVO introduces a novel recurrent update operator for patch-based correspondence coupled with differentiable bundle adjustment. On standard benchmarks, DPVO outperforms all prior work, including the learning-based state-of-the-art VO system (DROID), using a third of the memory while running 3x faster on average. Code is available at https://github.com/princeton-vl/DPVO.
|
Deep Patch Visual Odometry
|
[
"Zachary Teed",
"Lahav Lipson",
"Jia Deng"
] |
Conference
|
poster
|
2208.04726
|
[
"https://github.com/princeton-vl/dpvo"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dwIeEhbaD0
|
@inproceedings{
torop2023smoothhess,
title={SmoothHess: Re{LU} Network Feature Interactions via Stein's Lemma},
author={Max Torop and Aria Masoomi and Davin Hill and Kivanc Kose and Stratis Ioannidis and Jennifer Dy},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dwIeEhbaD0}
}
|
Several recent methods for interpretability model feature interactions by looking at the Hessian of a neural network. This poses a challenge for ReLU networks, which are piecewise-linear and thus have a zero Hessian almost everywhere. We propose SmoothHess, a method of estimating second-order interactions through Stein's Lemma. In particular, we estimate the Hessian of the network convolved with a Gaussian through an efficient sampling algorithm, requiring only network gradient calls. SmoothHess is applied post-hoc, requires no modifications to the ReLU network architecture, and the extent of smoothing can be controlled explicitly. We provide a non-asymptotic bound on the sample complexity of our estimation procedure. We validate the superior ability of SmoothHess to capture interactions on benchmark datasets and a real-world medical spirometry dataset.
|
SmoothHess: ReLU Network Feature Interactions via Stein's Lemma
|
[
"Max Torop",
"Aria Masoomi",
"Davin Hill",
"Kivanc Kose",
"Stratis Ioannidis",
"Jennifer Dy"
] |
Conference
|
poster
|
2311.00858
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=du0hvEpgj8
|
@inproceedings{
yu2023actively,
title={Actively Testing Your Model While It Learns: Realizing Label-Efficient Learning in Practice},
author={Dayou Yu and Weishi Shi and Qi Yu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=du0hvEpgj8}
}
|
In active learning (AL), we focus on reducing the data annotation cost from the model training perspective. However, "testing'', which often refers to the model evaluation process of using empirical risk to estimate the intractable true generalization risk, also requires data annotations. The annotation cost for "testing'' (model evaluation) is under-explored. Even in works that study active model evaluation or active testing (AT), the learning and testing ends are disconnected. In this paper, we propose a novel active testing while learning (ATL) framework that integrates active learning with active testing. ATL provides an unbiased sample-efficient estimation of the model risk during active learning. It leverages test samples annotated from different periods of a dynamic active learning process to achieve fair model evaluations based on a theoretically guaranteed optimal integration of different test samples. Periodic testing also enables effective early-stopping to further save the total annotation cost. ATL further integrates an "active feedback'' mechanism, which is inspired by human learning, where the teacher (active tester) provides immediate guidance given by the prior performance of the student (active learner). Our theoretical result reveals that active feedback maintains the label complexity of the integrated learning-testing objective, while improving the model's generalization capability. We study the realistic setting where we maximize the performance gain from choosing "testing'' samples for feedback without sacrificing the risk estimation accuracy. An agnostic-style analysis and empirical evaluations on real-world datasets demonstrate that the ATL framework can effectively improve the annotation efficiency of both active learning and evaluation tasks.
|
Actively Testing Your Model While It Learns: Realizing Label-Efficient Learning in Practice
|
[
"Dayou Yu",
"Weishi Shi",
"Qi Yu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dsH244r9fA
|
@inproceedings{
tang2023counterfactualaugmented,
title={Counterfactual-Augmented Importance Sampling for Semi-Offline Policy Evaluation},
author={Shengpu Tang and Jenna Wiens},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dsH244r9fA}
}
|
In applying reinforcement learning (RL) to high-stakes domains, quantitative and qualitative evaluation using observational data can help practitioners understand the generalization performance of new policies. However, this type of off-policy evaluation (OPE) is inherently limited since offline data may not reflect the distribution shifts resulting from the application of new policies. On the other hand, online evaluation by collecting rollouts according to the new policy is often infeasible, as deploying new policies in these domains can be unsafe. In this work, we propose a semi-offline evaluation framework as an intermediate step between offline and online evaluation, where human users provide annotations of unobserved counterfactual trajectories. While tempting to simply augment existing data with such annotations, we show that this naive approach can lead to biased results. Instead, we design a new family of OPE estimators based on importance sampling (IS) and a novel weighting scheme that incorporate counterfactual annotations without introducing additional bias. We analyze the theoretical properties of our approach, showing its potential to reduce both bias and variance compared to standard IS estimators. Our analyses reveal important practical considerations for handling biased, noisy, or missing annotations. In a series of proof-of-concept experiments involving bandits and a healthcare-inspired simulator, we demonstrate that our approach outperforms purely offline IS estimators and is robust to imperfect annotations. Our framework, combined with principled human-centered design of annotation solicitation, can enable the application of RL in high-stakes domains.
|
Counterfactual-Augmented Importance Sampling for Semi-Offline Policy Evaluation
|
[
"Shengpu Tang",
"Jenna Wiens"
] |
Conference
|
poster
|
2310.17146
|
[
"https://github.com/mld3/counterfactualannot-semiope"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dqS1GuoG2V
|
@inproceedings{
nickl2023the,
title={The Memory-Perturbation Equation: Understanding Model's Sensitivity to Data},
author={Peter Nickl and Lu Xu and Dharmesh Tailor and Thomas M{\"o}llenhoff and Mohammad Emtiyaz Khan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dqS1GuoG2V}
}
|
Understanding a model's sensitivity to its training data is crucial but can also be challenging and costly, especially during training. To simplify such issues, we present the Memory-Perturbation Equation (MPE), which relates a model's sensitivity to perturbations in its training data. Derived using Bayesian principles, the MPE unifies existing sensitivity measures, generalizes them to a wide variety of models and algorithms, and unravels useful properties regarding sensitivities. Our empirical results show that sensitivity estimates obtained during training can be used to faithfully predict generalization on unseen test data. The proposed equation is expected to be useful for future research on robust and adaptive learning.
|
The Memory-Perturbation Equation: Understanding Model's Sensitivity to Data
|
[
"Peter Nickl",
"Lu Xu",
"Dharmesh Tailor",
"Thomas Möllenhoff",
"Mohammad Emtiyaz Khan"
] |
Conference
|
poster
|
[
"https://github.com/team-approx-bayes/memory-perturbation"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dpdbbN7AKr
|
@inproceedings{
rabbani2023largescale,
title={Large-Scale Distributed Learning via Private On-Device {LSH}},
author={Tahseen Rabbani and Marco Bornstein and Furong Huang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dpdbbN7AKr}
}
|
Locality-sensitive hashing (LSH) based frameworks have been used efficiently to select weight vectors in a dense hidden layer with high cosine similarity to an input, enabling dynamic pruning.
While this type of scheme has been shown to improve computational training efficiency, existing algorithms require repeated randomized projection of the full layer weight, which is impractical for computational- and memory-constrained devices.
In a distributed setting, deferring LSH analysis to a centralized host is (i) slow if the device cluster is large and (ii) requires access to input data which is forbidden in a federated context.
Using a new family of hash functions, we develop the first private, personalized, and memory-efficient on-device LSH framework.
Our framework enables privacy and personalization by allowing each device to generate hash tables, without the help of a central host, using device-specific hashing hyper-parameters (e.g., number of hash tables or hash length).
Hash tables are generated with a compressed set of the full weights, and can be serially generated and discarded if the process is memory-intensive.
This allows devices to avoid maintaining (i) the fully-sized model and (ii) large amounts of hash tables in local memory for LSH analysis. We prove several statistical and sensitivity properties of our hash functions, and experimentally demonstrate that our framework is competitive in training large scale recommender networks compared to other LSH frameworks which assume unrestricted on-device capacity.
|
Large-Scale Distributed Learning via Private On-Device LSH
|
[
"Tahseen Rabbani",
"Marco Bornstein",
"Furong Huang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=doWqIXcRlq
|
@inproceedings{
fan2023revisit,
title={Revisit Weakly-Supervised Audio-Visual Video Parsing from the Language Perspective},
author={Yingying Fan and Yu Wu and Bo Du and Yutian Lin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=doWqIXcRlq}
}
|
We focus on the weakly-supervised audio-visual video parsing task (AVVP), which aims to identify and locate all the events in audio/visual modalities. Previous works only concentrate on video-level overall label denoising across modalities, but overlook the segment-level label noise, where adjacent video segments (i.e., 1-second video clips) may contain different events. However, recognizing events on the segment is challenging because its label could be any combination of events that occur in the video. To address this issue, we consider tackling AVVP from the language perspective, since language could freely describe how various events appear in each segment beyond fixed labels. Specifically, we design language prompts to describe all cases of event appearance for each video. Then, the similarity between language prompts and segments is calculated, where the event of the most similar prompt is regarded as the segment-level label. In addition, to deal with the mislabeled segments, we propose to perform dynamic re-weighting on the unreliable segments to adjust their labels. Experiments show that our simple yet effective approach outperforms state-of-the-art methods by a large margin.
|
Revisit Weakly-Supervised Audio-Visual Video Parsing from the Language Perspective
|
[
"Yingying Fan",
"Yu Wu",
"Bo Du",
"Yutian Lin"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dnGEPkmnzO
|
@inproceedings{
bhattacharya2023fully,
title={Fully Dynamic $k$-Clustering in $\tilde O(k)$ Update Time},
author={Sayan Bhattacharya and Martin Costa and Silvio Lattanzi and Nikos Parotsidis},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dnGEPkmnzO}
}
|
We present a $O(1)$-approximate fully dynamic algorithm for the $k$-median and $k$-means problems on metric spaces with amortized update time $\tilde O(k)$ and worst-case query time $\tilde O(k^2)$. We complement our theoretical analysis with the first in-depth experimental study for the dynamic $k$-median problem on general metrics, focusing on comparing our dynamic algorithm to the current state-of-the-art by Henzinger and Kale [ESA'20]. Finally, we also provide a lower bound for dynamic $k$-median which shows that any $O(1)$-approximate algorithm with $\tilde O(\text{poly}(k))$ query time must have $\tilde \Omega(k)$ amortized update time, even in the incremental setting.
|
Fully Dynamic k-Clustering in Õ(k) Update Time
|
[
"Sayan Bhattacharya",
"Martin Costa",
"Silvio Lattanzi",
"Nikos Parotsidis"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dnB71DMyDD
|
@inproceedings{
schmidt2023the,
title={The Rank-Reduced Kalman Filter: Approximate Dynamical-Low-Rank Filtering In High Dimensions},
author={Jonathan Schmidt and Philipp Hennig and J{\"o}rg Nick and Filip Tronarp},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dnB71DMyDD}
}
|
Inference and simulation in the context of high-dimensional dynamical systems remain computationally challenging problems.
Some form of dimensionality reduction is required to make the problem tractable in general.
In this paper, we propose a novel approximate Gaussian filtering and smoothing method
which propagates low-rank approximations of the covariance matrices.
This is accomplished by projecting the Lyapunov equations associated with the prediction step to a manifold of low-rank matrices,
which are then solved by a recently developed, numerically stable, dynamical low-rank integrator.
Meanwhile, the update steps are made tractable by noting that the covariance update only transforms the column space of the covariance matrix, which is low-rank by construction.
The algorithm differentiates itself from existing ensemble-based approaches in that
the low-rank approximations of the covariance matrices are deterministic, rather than stochastic.
Crucially, this enables the method to reproduce the exact Kalman filter as the low-rank dimension approaches the true dimensionality of the problem.
Our method reduces computational complexity from cubic (for the Kalman filter) to quadratic in the state-space size in the worst-case, and can achieve linear complexity if the state-space model satisfies certain criteria.
Through a set of experiments in classical data-assimilation and spatio-temporal regression, we show that the proposed method consistently outperforms the ensemble-based methods in terms of error in the mean and covariance with respect to the exact Kalman filter. This comes at no additional cost in terms of asymptotic computational complexity.
|
The Rank-Reduced Kalman Filter: Approximate Dynamical-Low-Rank Filtering In High Dimensions
|
[
"Jonathan Schmidt",
"Philipp Hennig",
"Jörg Nick",
"Filip Tronarp"
] |
Conference
|
poster
|
2306.07774
|
[
"https://github.com/schmidtjonathan/rrkf.jl"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dmD63sv0TZ
|
@inproceedings{
olko2023trust,
title={Trust Your $\nabla$: Gradient-based Intervention Targeting for Causal Discovery},
author={Mateusz Olko and Micha{\l} Zaj{\k{a}}c and Aleksandra Nowak and Nino Scherrer and Yashas Annadani and Stefan Bauer and {\L}ukasz Kuci{\'n}ski and Piotr Mi{\l}o{\'s}},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dmD63sv0TZ}
}
|
Inferring causal structure from data is a challenging task of fundamental importance in science. Often, observational data alone is not enough to uniquely identify a system’s causal structure. The use of interventional data can address this issue, however, acquiring these samples typically demands a considerable investment of time and physical or financial resources. In this work, we are concerned with the acquisition of interventional data in a targeted manner to minimize the number of required experiments. We propose a novel Gradient-based Intervention Targeting method, abbreviated GIT, that ’trusts’ the gradient estimator of a gradient-based causal discovery framework to provide signals for the intervention targeting function. We provide extensive experiments in simulated and real-world datasets and demonstrate that GIT performs on par with competitive baselines, surpassing them in the low-data regime.
|
Trust Your ∇: Gradient-based Intervention Targeting for Causal Discovery
|
[
"Mateusz Olko",
"Michał Zając",
"Aleksandra Nowak",
"Nino Scherrer",
"Yashas Annadani",
"Stefan Bauer",
"Łukasz Kuciński",
"Piotr Miłoś"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dlDFakG6kJ
|
@inproceedings{
lin2023sample,
title={Sample Complexity of Forecast Aggregation},
author={Tao Lin and Yiling Chen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dlDFakG6kJ}
}
|
We consider a Bayesian forecast aggregation model where $n$ experts, after observing private signals about an unknown binary event, report their posterior beliefs about the event to a principal, who then aggregates the reports into a single prediction for the event. The signals of the experts and the outcome of the event follow a joint distribution that is unknown to the principal, but the principal has access to i.i.d. "samples" from the distribution, where each sample is a tuple of the experts' reports (not signals) and the realization of the event. Using these samples, the principal aims to find an $\varepsilon$-approximately optimal aggregator, where optimality is measured in terms of the expected squared distance between the aggregated prediction and the realization of the event. We show that the sample complexity of this problem is at least $\tilde \Omega(m^{n-2} / \varepsilon)$ for arbitrary discrete distributions, where $m$ is the size of each expert's signal space. This sample complexity grows exponentially in the number of experts $n$. But, if the experts' signals are independent conditioned on the realization of the event, then the sample complexity is significantly reduced, to $\tilde O(1 / \varepsilon^2)$, which does not depend on $n$. Our results can be generalized to non-binary events. The proof of our results uses a reduction from the distribution learning problem and reveals the fact that forecast aggregation is almost as difficult as distribution learning.
|
Sample Complexity of Forecast Aggregation
|
[
"Tao Lin",
"Yiling Chen"
] |
Conference
|
spotlight
|
2207.13126
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=djyn8Q0anK
|
@inproceedings{
li2023scalable,
title={Scalable Transformer for {PDE} Surrogate Modeling},
author={Zijie Li and Dule Shu and Amir Barati Farimani},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=djyn8Q0anK}
}
|
Transformer has shown state-of-the-art performance on various applications and has recently emerged as a promising tool for surrogate modeling of partial differential equations (PDEs). Despite the introduction of linear-complexity attention, applying Transformer to problems with a large number of grid points can be numerically unstable and computationally expensive. In this work, we propose Factorized Transformer (FactFormer), which is based on an axial factorized kernel integral. Concretely, we introduce a learnable projection operator that decomposes the input function into multiple sub-functions with one-dimensional domain. These sub-functions are then evaluated and used to compute the instance-based kernel with an axial factorized scheme. We showcase that the proposed model is able to simulate 2D Kolmogorov flow on a $256\times 256$ grid and 3D smoke buoyancy on a $64\times64\times64$ grid with good accuracy and efficiency. The proposed factorized scheme can serve as a computationally efficient low-rank surrogate for the full attention scheme when dealing with multi-dimensional problems.
|
Scalable Transformer for PDE Surrogate Modeling
|
[
"Zijie Li",
"Dule Shu",
"Amir Barati Farimani"
] |
Conference
|
poster
|
2305.17560
|
[
"https://github.com/BaratiLab/FactFormer"
] |
https://huggingface.co/papers/2305.17560
| 0 | 0 | 0 | 3 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=dikH9tdPi2
|
@inproceedings{
li2023improving,
title={Improving Adversarial Transferability via Intermediate-level Perturbation Decay},
author={Qizhang Li and Yiwen Guo and Wangmeng Zuo and Hao Chen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dikH9tdPi2}
}
|
Intermediate-level attacks that attempt to perturb feature representations following an adversarial direction drastically have shown favorable performance in crafting transferable adversarial examples. Existing methods in this category are normally formulated with two separate stages, where a directional guide is required to be determined at first and the scalar projection of the intermediate-level perturbation onto the directional guide is enlarged thereafter. The obtained perturbation deviates from the guide inevitably in the feature space, and it is revealed in this paper that such a deviation may lead to sub-optimal attack. To address this issue, we develop a novel intermediate-level method that crafts adversarial examples within a single stage of optimization. In particular, the proposed method, named intermediate-level perturbation decay (ILPD), encourages the intermediate-level perturbation to be in an effective adversarial direction and to possess a great magnitude simultaneously. In-depth discussion verifies the effectiveness of our method. Experimental results show that it outperforms state-of-the-arts by large margins in attacking various victim models on ImageNet (+10.07% on average) and CIFAR-10 (+3.88% on average). Our code is at https://github.com/qizhangli/ILPD-attack.
|
Improving Adversarial Transferability via Intermediate-level Perturbation Decay
|
[
"Qizhang Li",
"Yiwen Guo",
"Wangmeng Zuo",
"Hao Chen"
] |
Conference
|
poster
|
2304.13410
|
[
"https://github.com/qizhangli/ilpd-attack"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=deaHiTb6Cu
|
@inproceedings{
bharadwaj2023fast,
title={Fast Exact Leverage Score Sampling from Khatri-Rao Products with Applications to Tensor Decomposition},
author={Vivek Bharadwaj and Osman Asif Malik and Riley Murray and Laura Grigori and Aydin Buluc and James Demmel},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=deaHiTb6Cu}
}
|
We present a data structure to randomly sample rows from the Khatri-Rao product of several matrices according to the exact distribution of its leverage scores. Our proposed sampler draws each row in time logarithmic in the height of the Khatri-Rao product and quadratic in its column count, with persistent space overhead at most the size of the input matrices. As a result, it tractably draws samples even when the matrices forming the Khatri-Rao product have tens of millions of rows each. When used to sketch the linear least-squares problems arising in Candecomp / PARAFAC decomposition, our method achieves lower asymptotic complexity per solve than recent state-of-the-art methods. Experiments on billion-scale sparse tensors and synthetic data validate our theoretical claims, with our algorithm achieving higher accuracy than competing methods as the decomposition rank grows.
|
Fast Exact Leverage Score Sampling from Khatri-Rao Products with Applications to Tensor Decomposition
|
[
"Vivek Bharadwaj",
"Osman Asif Malik",
"Riley Murray",
"Laura Grigori",
"Aydin Buluc",
"James Demmel"
] |
Conference
|
poster
|
2301.12584
|
[
"https://github.com/vbharadwaj-bk/fast_tensor_leverage"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=ddKCg3OhGw
|
@inproceedings{
farrugia-roberts2023functional,
title={Functional Equivalence and Path Connectivity of Reducible Hyperbolic Tangent Networks},
author={Matthew Farrugia-Roberts},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ddKCg3OhGw}
}
|
Understanding the learning process of artificial neural networks requires clarifying the structure of the parameter space within which learning takes place. A neural network parameter's functional equivalence class is the set of parameters implementing the same input--output function. For many architectures, almost all parameters have a simple and well-documented functional equivalence class. However, there is also a vanishing minority of reducible parameters, with richer functional equivalence classes caused by redundancies among the network's units.
In this paper, we give an algorithmic characterisation of unit redundancies and reducible functional equivalence classes for a single-hidden-layer hyperbolic tangent architecture. We show that such functional equivalence classes are piecewise-linear path-connected sets, and that for parameters with a majority of redundant units, the sets have a diameter of at most 7 linear segments.
|
Functional Equivalence and Path Connectivity of Reducible Hyperbolic Tangent Networks
|
[
"Matthew Farrugia-Roberts"
] |
Conference
|
poster
|
2305.05089
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dd3KNayGFz
|
@inproceedings{
chien2023differentially,
title={Differentially Private Decoupled Graph Convolutions for Multigranular Topology Protection},
author={Eli Chien and Wei-Ning Chen and Chao Pan and Pan Li and Ayfer Ozgur and Olgica Milenkovic},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dd3KNayGFz}
}
|
Graph Neural Networks (GNNs) have proven to be highly effective in solving real-world learning problems that involve graph-structured data. However, GNNs can also inadvertently expose sensitive user information and interactions through their model predictions. To address these privacy concerns, Differential Privacy (DP) protocols are employed to control the trade-off between provable privacy protection and model utility. Applying standard DP approaches to GNNs directly is not advisable due to two main reasons. First, the prediction of node labels, which relies on neighboring node attributes through graph convolutions, can lead to privacy leakage. Second, in practical applications, the privacy requirements for node attributes and graph topology may differ. In the latter setting, existing DP-GNN models fail to provide multigranular trade-offs between graph topology privacy, node attribute privacy, and GNN utility. To address both limitations, we propose a new framework termed Graph Differential Privacy (GDP), specifically tailored to graph learning. GDP ensures both provably private model parameters as well as private predictions. Additionally, we describe a novel unified notion of graph dataset adjacency to analyze the properties of GDP for different levels of graph topology privacy. Our findings reveal that DP-GNNs, which rely on graph convolutions, not only fail to meet the requirements for multigranular graph topology privacy but also necessitate the injection of DP noise that scales at least linearly with the maximum node degree. In contrast, our proposed Differentially Private Decoupled Graph Convolutions (DPDGCs) represent a more flexible and efficient alternative to graph convolutions that still provides the necessary guarantees of GDP. To validate our approach, we conducted extensive experiments on seven node classification benchmarking and illustrative synthetic datasets. The results demonstrate that DPDGCs significantly outperform existing DP-GNNs in terms of privacy-utility trade-offs.
|
Differentially Private Decoupled Graph Convolutions for Multigranular Topology Protection
|
[
"Eli Chien",
"Wei-Ning Chen",
"Chao Pan",
"Pan Li",
"Ayfer Ozgur",
"Olgica Milenkovic"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dcw7qRUuD8
|
@inproceedings{
lippert2023deep,
title={Deep Gaussian Markov Random Fields for Graph-Structured Dynamical Systems},
author={Fiona Lippert and Bart Kranstauber and E. Emiel van Loon and Patrick Forr{\'e}},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dcw7qRUuD8}
}
|
Probabilistic inference in high-dimensional state-space models is computationally challenging. For many spatiotemporal systems, however, prior knowledge about the dependency structure of state variables is available. We leverage this structure to develop a computationally efficient approach to state estimation and learning in graph-structured state-space models with (partially) unknown dynamics and limited historical data. Building on recent methods that combine ideas from deep learning with principled inference in Gaussian Markov random fields (GMRF), we reformulate graph-structured state-space models as Deep GMRFs defined by simple spatial and temporal graph layers. This results in a flexible spatiotemporal prior that can be learned efficiently from a single time sequence via variational inference. Under linear Gaussian assumptions, we retain a closed-form posterior, which can be sampled efficiently using the conjugate gradient method, scaling favourably compared to classical Kalman filter based approaches.
|
Deep Gaussian Markov Random Fields for Graph-Structured Dynamical Systems
|
[
"Fiona Lippert",
"Bart Kranstauber",
"E. Emiel van Loon",
"Patrick Forré"
] |
Conference
|
poster
|
2306.08445
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dbVRDk2wt7
|
@inproceedings{
demirel2023finding,
title={Finding Order in Chaos: A Novel Data Augmentation Method for Time Series in Contrastive Learning},
author={Berken Utku Demirel and Christian Holz},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dbVRDk2wt7}
}
|
The success of contrastive learning is well known to be dependent on data augmentation.
Although the degree of data augmentations has been well controlled by utilizing pre-defined techniques in some domains like vision, time-series data augmentation is less explored and remains a challenging problem due to the complexity of the data generation mechanism, such as the intricate mechanism involved in the cardiovascular system.
Moreover, there is no widely recognized and general time-series augmentation method that can be applied across different tasks.
In this paper, we propose a novel data augmentation method for time-series tasks that aims to connect intra-class samples together, and thereby find order in the latent space.
Our method builds upon the well-known data augmentation technique of mixup by incorporating a novel approach that accounts for the non-stationary nature of time-series data.
Also, by controlling the degree of chaos created by data augmentation, our method leads to improved feature representations and performance on downstream tasks.
We evaluate our proposed method on three time-series tasks, including heart rate estimation, human activity recognition, and cardiovascular disease detection.
Extensive experiments against the state-of-the-art methods show that the proposed method outperforms prior works on optimal data generation and known data augmentation techniques in three tasks, reflecting the effectiveness of the presented method.
The source code is available at https://github.com/eth-siplab/Finding_Order_in_Chaos.
|
Finding Order in Chaos: A Novel Data Augmentation Method for Time Series in Contrastive Learning
|
[
"Berken Utku Demirel",
"Christian Holz"
] |
Conference
|
poster
|
2309.13439
|
[
"https://github.com/eth-siplab/Finding_Order_in_Chaos"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dZqcC1qCmB
|
@inproceedings{
osband2023epistemic,
title={Epistemic Neural Networks},
author={Ian Osband and Zheng Wen and Seyed Mohammad Asghari and Vikranth Dwaracherla and Morteza Ibrahimi and Xiuyuan Lu and Benjamin Van Roy},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dZqcC1qCmB}
}
|
Intelligence relies on an agent's knowledge of what it does not know.
This capability can be assessed based on the quality of joint predictions of labels across multiple inputs.
In principle, ensemble-based approaches can produce effective joint predictions, but the computational costs of large ensembles become prohibitive.
We introduce the epinet: an architecture that can supplement any conventional neural network, including large pretrained models, and can be trained with modest incremental computation to estimate uncertainty.
With an epinet, conventional neural networks outperform very large ensembles, consisting of hundreds or more particles, with orders of magnitude less computation.
The epinet does not fit the traditional framework of Bayesian neural networks.
To accommodate development of approaches beyond BNNs, such as the epinet, we introduce the epistemic neural network (ENN) as a general interface for models that produce joint predictions.
|
Epistemic Neural Networks
|
[
"Ian Osband",
"Zheng Wen",
"Seyed Mohammad Asghari",
"Vikranth Dwaracherla",
"Morteza Ibrahimi",
"Xiuyuan Lu",
"Benjamin Van Roy"
] |
Conference
|
spotlight
|
[
"https://github.com/deepmind/neural_testbed"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dYeUvLUxBQ
|
@inproceedings{
gao2023causal,
title={Causal Discovery in Semi-Stationary Time Series},
author={Shanyun Gao and Raghavendra Addanki and Tong Yu and Ryan A. Rossi and Murat Kocaoglu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dYeUvLUxBQ}
}
|
Discovering causal relations from observational time series without making the stationary assumption is a significant challenge. In practice, this challenge is common in many areas, such as retail sales, transportation systems, and medical science. Here, we consider this problem for a class of non-stationary time series. The structural causal model (SCM) of this type of time series, called the semi-stationary time series, exhibits that a finite number of different causal mechanisms occur sequentially and periodically across time. This model holds considerable practical utility because it can represent periodicity, including common occurrences such as seasonality and diurnal variation. We propose a constraint-based, non-parametric algorithm for discovering causal relations in this setting. The resulting algorithm, PCMCI$_{\Omega}$, can capture the alternating and recurring changes in the causal mechanisms and then identify the underlying causal graph with conditional independence (CI) tests. We show that this algorithm is sound in identifying causal relations on discrete time series. We validate the algorithm with extensive experiments on continuous and discrete simulated data. We also apply our algorithm to a real-world climate dataset.
|
Causal Discovery in Semi-Stationary Time Series
|
[
"Shanyun Gao",
"Raghavendra Addanki",
"Tong Yu",
"Ryan A. Rossi",
"Murat Kocaoglu"
] |
Conference
|
poster
|
2407.07291
|
[
"https://github.com/causalml-lab/pcmci-omega"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dX9MjUtP1A
|
@inproceedings{
hvarfner2023selfcorrecting,
title={Self-Correcting Bayesian Optimization through Bayesian Active Learning},
author={Carl Hvarfner and Erik Orm Hellsten and Frank Hutter and Luigi Nardi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dX9MjUtP1A}
}
|
Gaussian processes are the model of choice in Bayesian optimization and active learning. Yet, they are highly dependent on cleverly chosen hyperparameters to reach their full potential, and little effort is devoted to finding good hyperparameters in the literature. We demonstrate the impact of selecting good hyperparameters for GPs and present two acquisition functions that explicitly prioritize hyperparameter learning. Statistical distance-based Active Learning (SAL) considers the average disagreement between samples from the posterior, as measured by a statistical distance. SAL outperforms the state-of-the-art in Bayesian active learning on several test functions. We then introduce Self-Correcting Bayesian Optimization (SCoreBO), which extends SAL to perform Bayesian optimization and active learning simultaneously. SCoreBO learns the model hyperparameters at improved rates compared to vanilla BO, while outperforming the latest Bayesian optimization methods on traditional benchmarks. Moreover, we demonstrate the importance of self-correction on atypical Bayesian optimization tasks.
|
Self-Correcting Bayesian Optimization through Bayesian Active Learning
|
[
"Carl Hvarfner",
"Erik Orm Hellsten",
"Frank Hutter",
"Luigi Nardi"
] |
Conference
|
poster
|
2304.11005
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dWDEBW2raJ
|
@inproceedings{
shi2023train,
title={Train Faster, Perform Better: Modular Adaptive Training in Over-Parameterized Models},
author={Yubin Shi and Yixuan Chen and Mingzhi Dong and Xiaochen Yang and Dongsheng Li and Yujiang Wang and Robert P. Dick and Qin Lv and Yingying Zhao and Fan Yang and Tun Lu and Ning Gu and Li Shang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dWDEBW2raJ}
}
|
Despite their prevalence in deep-learning communities, over-parameterized models impose high computational costs for proper training. This work studies the fine-grained, modular-level learning dynamics of over-parameterized models to attain a more efficient and fruitful training strategy. Empirical evidence reveals that when scaling down into network modules, such as heads in self-attention models, we can observe varying learning patterns implicitly associated with each module's trainability. To describe such modular-level learning capabilities, we introduce a novel concept dubbed modular neural tangent kernel (mNTK), and we demonstrate that the quality of a module's learning is tightly associated with its mNTK's principal eigenvalue $\lambda_{\max}$. A large $\lambda_{\max}$ indicates that the module learns features with better convergence, while a small one may impact generalization negatively. Inspired by this discovery, we propose a novel training strategy termed Modular Adaptive Training (MAT) that selectively updates those modules whose $\lambda_{\max}$ exceeds a dynamic threshold, concentrating the model on learning common features and ignoring inconsistent ones. Unlike most existing training schemes with a complete BP cycle across all network modules, MAT can significantly save computation through its partial-update strategy and can further improve performance. Experiments show that MAT nearly halves the computational cost of model training and outperforms baselines in accuracy.
|
Train Faster, Perform Better: Modular Adaptive Training in Over-Parameterized Models
|
[
"Yubin Shi",
"Yixuan Chen",
"Mingzhi Dong",
"Xiaochen Yang",
"Dongsheng Li",
"Yujiang Wang",
"Robert P. Dick",
"Qin Lv",
"Yingying Zhao",
"Fan Yang",
"Tun Lu",
"Ning Gu",
"Li Shang"
] |
Conference
|
poster
|
2405.07527
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dVnhdm9MIg
|
@inproceedings{
ellis2023humanlike,
title={Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language},
author={Kevin Ellis},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dVnhdm9MIg}
}
|
A core tension in models of concept learning is that the model must carefully balance the tractability of inference against the expressivity of the hypothesis class. Humans, however, can efficiently learn a broad range of concepts.
We introduce a model of inductive learning that seeks to be human-like in that sense.
It implements a Bayesian reasoning process where a language model first proposes candidate hypotheses expressed in natural language, which are then re-weighed by a prior and a likelihood.
By estimating the prior from human data, we can predict human judgments on learning problems involving numbers and sets, spanning concepts that are generative, discriminative, propositional, and higher-order.
|
Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language
|
[
"Kevin Ellis"
] |
Conference
|
oral
|
2306.02797
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dUAcAtCuKk
|
@inproceedings{
chen2023reckoning,
title={{RECKONING}: Reasoning through Dynamic Knowledge Encoding},
author={Zeming Chen and Gail Weiss and Eric Mitchell and Asli Celikyilmaz and Antoine Bosselut},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dUAcAtCuKk}
}
|
Recent studies on transformer-based language models show that they can answer questions by reasoning over knowledge provided as part of the context (i.e., in-context reasoning). However, since the available knowledge is often not filtered for a particular question, in-context reasoning can be sensitive to distractor facts, additional content that is irrelevant to a question but that may be relevant for a different question (i.e., not necessarily random noise). In these situations, the model fails to
distinguish the necessary knowledge to answer the question, leading to spurious reasoning and degraded performance. This reasoning failure contrasts with the model’s apparent ability to distinguish its contextual knowledge from all the knowledge it has memorized during pre-training. Following this observation, we propose teaching the model to reason more robustly by folding the provided contextual knowledge into the model’s parameters before presenting it with a question. Our method, RECKONING, is a bi-level learning algorithm that teaches language models to reason by updating their parametric knowledge through back-propagation, allowing them to answer questions using the updated parameters. During training, the inner loop rapidly adapts a copy of the model weights to encode contextual knowledge into its parameters. In the outer loop, the model learns to use the updated weights to reproduce and answer reasoning questions about the memorized knowledge. Our experiments on three diverse multi-hop reasoning datasets show that RECKONING’s performance improves over the in-context reasoning baseline (by up to 4.5%). We also find that compared to in-context reasoning, RECKONING generalizes better to longer reasoning chains unseen during training, is more robust to distractors in the context, and is computationally more efficient when multiple questions are asked about the same knowledge.
|
RECKONING: Reasoning through Dynamic Knowledge Encoding
|
[
"Zeming Chen",
"Gail Weiss",
"Eric Mitchell",
"Asli Celikyilmaz",
"Antoine Bosselut"
] |
Conference
|
poster
|
2305.06349
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dTj5tH94xv
|
@inproceedings{
le2023does,
title={Does a sparse Re{LU} network training problem always admit an optimum ?},
author={TUNG QUOC LE and R{\'e}mi Gribonval and Elisa Riccietti},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dTj5tH94xv}
}
|
Given a training set, a loss function, and a neural network architecture, it is often taken for granted that optimal network parameters exist, and a common practice is to apply available optimization algorithms to search for them. In this work, we show that the existence of an optimal solution is not always guaranteed, especially in the context of sparse ReLU neural networks.
In particular, we first show that optimization problems involving deep networks with certain sparsity patterns do not always have optimal parameters, and that optimization algorithms may then diverge. Via a new topological relation between sparse ReLU neural networks and their linear counterparts, we derive --using existing tools from real algebraic geometry-- an algorithm to verify that a given sparsity pattern suffers from this issue. Then, the existence of a global optimum is proved for every concrete optimization problem involving
a shallow sparse ReLU neural network of output dimension one. Overall, the analysis is based on the investigation of two topological properties of the space of functions implementable as sparse ReLU neural networks: a best approximation property, and a closedness property, both in the uniform norm. This is studied both for (finite) domains corresponding to practical training on finite training sets, and for more general domains such as the unit cube. This allows us to provide conditions for the guaranteed existence of an optimum given a sparsity pattern. The results apply not only to several sparsity patterns proposed in recent works on network pruning/sparsification, but also to classical dense neural networks, including architectures not covered by existing results.
|
Does a sparse ReLU network training problem always admit an optimum ?
|
[
"TUNG QUOC LE",
"Rémi Gribonval",
"Elisa Riccietti"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dSRyKIYRnP
|
@inproceedings{
lin2023slow,
title={Slow and Weak Attractor Computation Embedded in Fast and Strong E-I Balanced Neural Dynamics},
author={Xiaohan Lin and Liyuan Li and Boxin Shi and Tiejun Huang and Yuanyuan Mi and Si Wu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dSRyKIYRnP}
}
|
Attractor networks require neuronal connections to be highly structured in order to maintain attractor states that represent information, while excitation and inhibition balanced networks (E-INNs) require neuronal connections to be random and sparse to generate irregular neuronal firings. Despite being regarded as canonical models of neural circuits, both types of networks are usually studied in isolation, and it remains unclear how they coexist in the brain, given their very different structural demands. In this study, we investigate the compatibility of continuous attractor neural networks (CANNs) and E-INNs. In line with recent experimental data, we find that a neural circuit can exhibit both the traits of CANNs and E-INNs if the neuronal synapses consist of two sets: one set is strong and fast for irregular firing, and the other set is weak and slow for attractor dynamics. Our results from simulations and theoretical analysis reveal that the network also exhibits enhanced performance compared to the case of using only one set of synapses, with accelerated convergence of attractor states and retained E-I balanced condition for localized input. We also apply the network model to solve a real-world tracking problem and demonstrate that it can track fast-moving objects well. We hope that this study provides insight into how structured neural computations are realized by irregular firings of neurons.
|
Slow and Weak Attractor Computation Embedded in Fast and Strong E-I Balanced Neural Dynamics
|
[
"Xiaohan Lin",
"Liyuan Li",
"Boxin Shi",
"Tiejun Huang",
"Yuanyuan Mi",
"Si Wu"
] |
Conference
|
spotlight
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dR6p49RYLq
|
@inproceedings{
li2023neuralgf,
title={Neural{GF}: Unsupervised Point Normal Estimation by Learning Neural Gradient Function},
author={Qing Li and Huifang Feng and Kanle Shi and Yue Gao and Yi Fang and Yu-Shen Liu and Zhizhong Han},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dR6p49RYLq}
}
|
Normal estimation for 3D point clouds is a fundamental task in 3D geometry processing. The state-of-the-art methods rely on priors of fitting local surfaces learned from normal supervision. However, normal supervision in benchmarks comes from synthetic shapes and is usually not available from real scans, thereby limiting the learned priors of these methods. In addition, normal orientation consistency across shapes remains difficult to achieve without a separate post-processing procedure. To resolve these issues, we propose a novel method for estimating oriented normals directly from point clouds without using ground truth normals as supervision. We achieve this by introducing a new paradigm for learning neural gradient functions, which encourages the neural network to fit the input point clouds and yield unit-norm gradients at the points. Specifically, we introduce loss functions to facilitate query points to iteratively reach the moving targets and aggregate onto the approximated surface, thereby learning a global surface representation of the data. Meanwhile, we incorporate gradients into the surface approximation to measure the minimum signed deviation of queries, resulting in a consistent gradient field associated with the surface. These techniques lead to our deep unsupervised oriented normal estimator that is robust to noise, outliers and density variations. Our excellent results on widely used benchmarks demonstrate that our method can learn more accurate normals for both unoriented and oriented normal estimation tasks than the latest methods. The source code and pre-trained model are publicly available.
|
NeuralGF: Unsupervised Point Normal Estimation by Learning Neural Gradient Function
|
[
"Qing Li",
"Huifang Feng",
"Kanle Shi",
"Yue Gao",
"Yi Fang",
"Yu-Shen Liu",
"Zhizhong Han"
] |
Conference
|
poster
|
2311.00389
|
[
"https://github.com/leoqli/neuralgf"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dQLsvKNwZC
|
@inproceedings{
wachi2023safe,
title={Safe Exploration in Reinforcement Learning: A Generalized Formulation and Algorithms},
author={Akifumi Wachi and Wataru Hashimoto and Xun Shen and Kazumune Hashimoto},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dQLsvKNwZC}
}
|
Safe exploration is essential for the practical use of reinforcement learning (RL) in many real-world scenarios. In this paper, we present a generalized safe exploration (GSE) problem as a unified formulation of common safe exploration problems. We then propose a solution of the GSE problem in the form of a meta-algorithm for safe exploration, MASE, which combines an unconstrained RL algorithm with an uncertainty quantifier to guarantee safety in the current episode while properly penalizing unsafe explorations before actual safety violation to discourage them in future episodes. The advantage of MASE is that we can optimize a policy while guaranteeing with a high probability that no safety constraint will be violated under proper assumptions. Specifically, we present two variants of MASE with different constructions of the uncertainty quantifier: one based on generalized linear models with theoretical guarantees of safety and near-optimality, and another that combines a Gaussian process to ensure safety with a deep RL algorithm to maximize the reward. Finally, we demonstrate that our proposed algorithm achieves better performance than state-of-the-art algorithms on grid-world and Safety Gym benchmarks without violating any safety constraints, even during training.
|
Safe Exploration in Reinforcement Learning: A Generalized Formulation and Algorithms
|
[
"Akifumi Wachi",
"Wataru Hashimoto",
"Xun Shen",
"Kazumune Hashimoto"
] |
Conference
|
poster
|
2310.03225
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dOxm4FnMFu
|
@inproceedings{
dhurandhar2023locally,
title={Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning},
author={Amit Dhurandhar and Karthikeyan Natesan Ramamurthy and Kartik Ahuja and Vijay Arya},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dOxm4FnMFu}
}
|
The locally interpretable model agnostic explanations (LIME) method is one of the most popular methods used to explain black-box models at a per-example level. Although many variants have been proposed, few provide a simple way to produce high fidelity explanations that are also stable and intuitive. In this work, we provide a novel perspective by proposing a model agnostic local explanation method inspired by the invariant risk minimization (IRM) principle -- originally proposed for (global) out-of-distribution generalization -- to provide such high fidelity explanations that are also stable and unidirectional across nearby examples. Our method is based on a game theoretic formulation where we theoretically show that our approach has a strong tendency to eliminate features where the gradient of the black-box function abruptly changes sign in the locality of the example we want to explain, while in other cases it is more careful and will choose a more conservative (feature) attribution, a behavior which can be highly desirable for recourse. Empirically, we show on tabular, image and text data that the quality of our explanations with neighborhoods formed using random perturbations is much better than LIME and in some cases even comparable to other methods that use realistic neighbors sampled from the data manifold. This is desirable given that learning a manifold to either create realistic neighbors or to project explanations is typically expensive or may even be impossible. Moreover, our algorithm is simple and efficient to train, and can ascertain stable input features for local decisions of a black-box without access to side information such as a (partial) causal graph as has been seen in some recent works.
|
Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning
|
[
"Amit Dhurandhar",
"Karthikeyan Natesan Ramamurthy",
"Kartik Ahuja",
"Vijay Arya"
] |
Conference
|
poster
|
2201.12143
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dOanKg3jKS
|
@inproceedings{
roman2023from,
title={From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion},
author={Robin San Roman and Yossi Adi and Antoine Deleforge and Romain Serizel and Gabriel Synnaeve and Alexandre D{\'e}fossez},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dOanKg3jKS}
}
|
Deep generative models can generate high-fidelity audio conditioned on various types of representations (e.g., mel-spectrograms, Mel-frequency Cepstral Coefficients (MFCC)). Recently, such models have been used to synthesize audio waveforms conditioned on highly compressed representations. Although such methods produce impressive results, they are prone to generate audible artifacts when the conditioning is flawed or imperfect. An alternative modeling approach is to use diffusion models. However, these have mainly been used as speech vocoders (i.e., conditioned on mel-spectrograms) or generating relatively low sampling rate signals. In this work, we propose a high-fidelity multi-band diffusion-based framework that generates any type of audio modality (e.g., speech, music, environmental sounds) from low-bitrate discrete representations. At equal bit rate, the proposed approach outperforms state-of-the-art generative techniques in terms of perceptual quality. Training and evaluation code are available on the facebookresearch/audiocraft github project. Samples are available on the following link (https://ai.honu.io/papers/mbd/).
|
From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion
|
[
"Robin San Roman",
"Yossi Adi",
"Antoine Deleforge",
"Romain Serizel",
"Gabriel Synnaeve",
"Alexandre Défossez"
] |
Conference
|
poster
|
2308.02560
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dLmDPVv19z
|
@inproceedings{
zhang2023constrained,
title={Constrained Policy Optimization with Explicit Behavior Density For Offline Reinforcement Learning},
author={Jing Zhang and Chi Zhang and Wenjia Wang and Bingyi Jing},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dLmDPVv19z}
}
|
Due to the inability to interact with the environment, offline reinforcement learning (RL) methods face the challenge of estimating the Out-of-Distribution (OOD) points. Existing methods for addressing this issue either control policy to exclude the OOD action or make the $Q$ function pessimistic. However, these methods can be overly conservative or fail to identify OOD areas accurately. To overcome this problem, we propose a Constrained Policy optimization with Explicit Behavior density (CPED) method that utilizes a flow-GAN model to explicitly estimate the density of behavior policy. By estimating the explicit density, CPED can accurately identify the safe region and enable exploration within the region, resulting in less conservative learning policies. We further provide theoretical results for both the flow-GAN estimator and performance guarantee for CPED by showing that CPED can find the optimal $Q$-function value. Empirically, CPED outperforms existing alternatives on various standard offline reinforcement learning tasks, yielding higher expected returns.
|
Constrained Policy Optimization with Explicit Behavior Density For Offline Reinforcement Learning
|
[
"Jing Zhang",
"Chi Zhang",
"Wenjia Wang",
"Bingyi Jing"
] |
Conference
|
poster
|
2301.12130
|
[
"https://github.com/evalarzj/cped"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dL0GM9Wwtq
|
@inproceedings{
spiess2023double,
title={Double and Single Descent in Causal Inference with an Application to High-Dimensional Synthetic Control},
author={Jann Spiess and Guido Imbens and Amar Venugopal},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dL0GM9Wwtq}
}
|
Motivated by a recent literature on the double-descent phenomenon in machine learning, we consider highly over-parameterized models in causal inference, including synthetic control with many control units. In such models, there may be so many free parameters that the model fits the training data perfectly. We first investigate high-dimensional linear regression for imputing wage data and estimating average treatment effects, where we find that models with many more covariates than sample size can outperform simple ones. We then document the performance of high-dimensional synthetic control estimators with many control units. We find that adding control units can help improve imputation performance even beyond the point where the pre-treatment fit is perfect. We provide a unified theoretical perspective on the performance of these high-dimensional models. Specifically, we show that more complex models can be interpreted as model-averaging estimators over simpler ones, which we link to an improvement in average performance. This perspective yields concrete insights into the use of synthetic control when control units are many relative to the number of pre-treatment periods.
|
Double and Single Descent in Causal Inference with an Application to High-Dimensional Synthetic Control
|
[
"Jann Spiess",
"Guido Imbens",
"Amar Venugopal"
] |
Conference
|
poster
|
2305.00700
|
[
"https://github.com/amarvenu/causal-descent"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dKeWh6EzBB
|
@inproceedings{
kim2023swift,
title={Swi{FT}: Swin 4D f{MRI} Transformer},
author={Peter Yongho Kim and Junbeom Kwon and Sunghwan Joo and Sangyoon Bae and Donggyu Lee and Yoonho Jung and Shinjae Yoo and Jiook Cha and Taesup Moon},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dKeWh6EzBB}
}
|
Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process dimensional spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI.
|
SwiFT: Swin 4D fMRI Transformer
|
[
"Peter Yongho Kim",
"Junbeom Kwon",
"Sunghwan Joo",
"Sangyoon Bae",
"Donggyu Lee",
"Yoonho Jung",
"Shinjae Yoo",
"Jiook Cha",
"Taesup Moon"
] |
Conference
|
poster
|
2307.05916
|
[
"https://github.com/transconnectome/swift"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dK0Ew3kkVf
|
@inproceedings{
yang2023keypointaugmented,
title={Keypoint-Augmented Self-Supervised Learning for Medical Image Segmentation with Limited Annotation},
author={Zhangsihao Yang and Mengwei Ren and Kaize Ding and Guido Gerig and Yalin Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dK0Ew3kkVf}
}
|
Pretraining CNN models (i.e., UNet) through self-supervision has become a powerful approach to facilitate medical image segmentation under low annotation regimes. Recent contrastive learning methods encourage similar global representations when the same image undergoes different transformations, or enforce invariance across different image/patch features that are intrinsically correlated. However, CNN-extracted global and local features are limited in capturing long-range spatial dependencies that are essential in biological anatomy. To this end, we present a keypoint-augmented fusion layer that extracts representations preserving both short- and long-range self-attention. In particular, we augment the CNN feature map at multiple scales by incorporating an additional input that learns long-range spatial self-attention among localized keypoint features. Further, we introduce both global and local self-supervised pretraining for the framework. At the global scale, we obtain global representations from both the bottleneck of the UNet, and by aggregating multiscale keypoint features. These global features are subsequently regularized through image-level contrastive objectives. At the local scale, we define a distance-based criterion to first establish correspondences among keypoints and encourage similarity between their features. Through extensive experiments on both MRI and CT segmentation tasks, we demonstrate the architectural advantages of our proposed method in comparison to both CNN and Transformer-based UNets, when all architectures are trained with randomly initialized weights. With our proposed pretraining strategy, our method further outperforms existing SSL methods by producing more robust self-attention and achieving state-of-the-art segmentation results. The code is available at https://github.com/zshyang/kaf.git.
|
Keypoint-Augmented Self-Supervised Learning for Medical Image Segmentation with Limited Annotation
|
[
"Zhangsihao Yang",
"Mengwei Ren",
"Kaize Ding",
"Guido Gerig",
"Yalin Wang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dJZ3MvDw86
|
@inproceedings{
feder2023causalstructure,
title={Causal-structure Driven Augmentations for Text {OOD} Generalization},
author={Amir Feder and Yoav Wald and Claudia Shi and Suchi Saria and David Blei},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dJZ3MvDw86}
}
|
The reliance of text classifiers on spurious correlations can lead to poor generalization at deployment, raising concerns about their use in safety-critical domains such as healthcare. In this work, we propose to use counterfactual data augmentation, guided by knowledge of the causal structure of the data, to simulate interventions on spurious features and to learn more robust text classifiers. We show that this strategy is appropriate in prediction problems where the label is spuriously correlated with an attribute. Under the assumptions of such problems, we discuss the favorable sample complexity of counterfactual data augmentation, compared to importance re-weighting. Pragmatically, we match examples using auxiliary data, based on diff-in-diff methodology, and use a large language model (LLM) to represent a conditional probability of text. Through extensive experimentation on learning caregiver-invariant predictors of clinical diagnoses from medical narratives and on semi-synthetic data, we demonstrate that our method for simulating interventions improves out-of-distribution (OOD) accuracy compared to baseline invariant learning algorithms.
|
Data Augmentations for Improved (Large) Language Model Generalization
|
[
"Amir Feder",
"Yoav Wald",
"Claudia Shi",
"Suchi Saria",
"David Blei"
] |
Conference
|
poster
|
2310.12803
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dHQ2av9NzO
|
@inproceedings{
kim2023on,
title={On the Convergence of Black-Box Variational Inference},
author={Kyurae Kim and Jisu Oh and Kaiwen Wu and Yian Ma and Jacob R. Gardner},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dHQ2av9NzO}
}
|
We provide the first convergence guarantee for black-box variational inference (BBVI) with the reparameterization gradient.
While preliminary investigations worked on simplified versions of BBVI (e.g., bounded domain, bounded support, only optimizing for the scale, and such), our setup does not need any such algorithmic modifications.
Our results hold for log-smooth posterior densities with and without strong log-concavity and the location-scale variational family.
Notably, our analysis reveals that certain algorithm design choices commonly employed in practice, such as nonlinear parameterizations of the scale matrix, can result in suboptimal convergence rates.
Fortunately, running BBVI with proximal stochastic gradient descent fixes these limitations and thus achieves the strongest known convergence guarantees.
We evaluate this theoretical insight by comparing proximal SGD against other standard implementations of BBVI on large-scale Bayesian inference problems.
|
On the Convergence of Black-Box Variational Inference
|
[
"Kyurae Kim",
"Jisu Oh",
"Kaiwen Wu",
"Yian Ma",
"Jacob R. Gardner"
] |
Conference
|
poster
|
2305.15349
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dHF3Im8Aic
|
@inproceedings{
qu2023lmc,
title={{LMC}: Large Model Collaboration with Cross-assessment for Training-Free Open-Set Object Recognition},
author={Haoxuan Qu and Xiaofei Hui and Yujun Cai and Jun Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dHF3Im8Aic}
}
|
Open-set object recognition aims to identify whether an object is from a class that has been encountered during training. To perform open-set object recognition accurately, a key challenge is how to reduce the reliance on spurious-discriminative features. In this paper, motivated by the observation that different large models pre-trained through different paradigms can possess rich yet distinct implicit knowledge, we propose a novel framework named Large Model Collaboration (LMC) to tackle the above challenge by collaborating different off-the-shelf large models in a training-free manner. Moreover, we also equip the proposed framework with several novel designs to effectively extract implicit knowledge from large models. Extensive experiments demonstrate the efficacy of our proposed framework. Code is available \href{https://github.com/Harryqu123/LMC}{here}.
|
LMC: Large Model Collaboration with Cross-assessment for Training-Free Open-Set Object Recognition
|
[
"Haoxuan Qu",
"Xiaofei Hui",
"Yujun Cai",
"Jun Liu"
] |
Conference
|
poster
|
2309.12780
|
[
"https://github.com/harryqu123/lmc"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dFtpRphNb3
|
@inproceedings{
miehling2023cookie,
title={Cookie Consent Has Disparate Impact on Estimation Accuracy},
author={Erik Miehling and Rahul Nair and Elizabeth M. Daly and Karthikeyan Natesan Ramamurthy and Robert Nelson Redmond},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dFtpRphNb3}
}
|
Cookies are designed to enable more accurate identification and tracking of user behavior, in turn allowing for more personalized ads and better performing ad campaigns. Given the additional information that is recorded, questions related to privacy and fairness naturally arise. How does a user's consent decision influence how much the system can learn about their demographic and tastes? Is the impact of a user's consent decision on the recommender system's ability to learn about their latent attributes uniform across demographics? We investigate these questions in the context of an engagement-driven recommender system using simulation. We empirically demonstrate that when consent rates exhibit demographic-dependence, user consent has a disparate impact on the recommender agent's ability to estimate users' latent attributes. In particular, we find that when consent rates are demographic-dependent, a user disagreeing to share their cookie may counter-intuitively cause the recommender agent to know more about the user than if the user agreed to share their cookie. Furthermore, the gap in base consent rates across demographics serves as an amplifier: users from the lower consent rate demographic who agree to cookie sharing generally experience higher estimation errors than the same users from the higher consent rate demographic, and conversely for users who choose to disagree to cookie sharing, with these differences increasing in consent rate gap. We discuss the need for new notions of fairness that encourage consistency between a user's privacy decisions and the system's ability to estimate their latent attributes.
|
Cookie Consent Has Disparate Impact on Estimation Accuracy
|
[
"Erik Miehling",
"Rahul Nair",
"Elizabeth M. Daly",
"Karthikeyan Natesan Ramamurthy",
"Robert Nelson Redmond"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dFSeZm6dTC
|
@inproceedings{
hu2023cpslam,
title={{CP}-{SLAM}: Collaborative Neural Point-based {SLAM} System},
author={Jiarui Hu and Mao Mao and Hujun Bao and Guofeng Zhang and Zhaopeng Cui},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dFSeZm6dTC}
}
|
This paper presents a collaborative implicit neural simultaneous localization and mapping (SLAM) system with RGB-D image sequences, which consists of complete front-end and back-end modules including odometry, loop detection, sub-map fusion, and global refinement. In order to enable all these modules in a unified framework, we propose a novel neural point based 3D scene representation in which each point maintains a learnable neural feature for scene encoding and is associated with a certain keyframe. Moreover, a distributed-to-centralized learning strategy is proposed for the collaborative implicit SLAM to improve consistency and cooperation. A novel global optimization framework is also proposed to improve the system accuracy like traditional bundle adjustment. Experiments on various datasets demonstrate the superiority of the proposed method in both camera tracking and mapping.
|
CP-SLAM: Collaborative Neural Point-based SLAM System
|
[
"Jiarui Hu",
"Mao Mao",
"Hujun Bao",
"Guofeng Zhang",
"Zhaopeng Cui"
] |
Conference
|
poster
|
2311.08013
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dEySGIcDnI
|
@inproceedings{
cho2023separable,
title={Separable Physics-Informed Neural Networks},
author={Junwoo Cho and Seungtae Nam and Hyunmo Yang and Seok-Bae Yun and Youngjoon Hong and Eunbyung Park},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dEySGIcDnI}
}
|
Physics-informed neural networks (PINNs) have recently emerged as promising data-driven PDE solvers showing encouraging results on various PDEs.
However, there is a fundamental limitation of training PINNs to solve multi-dimensional PDEs and approximate very complex solution functions.
The number of training points (collocation points) required on these challenging PDEs grows substantially, and it is severely limited due to the expensive computational costs and heavy memory overhead.
To overcome this limit, we propose a network architecture and training algorithm for PINNs.
The proposed method, separable PINN (SPINN), operates on a per-axis basis to decrease the number of network propagations in multi-dimensional PDEs instead of point-wise processing in conventional PINNs.
We also propose using forward-mode automatic differentiation to reduce the computational cost of computing PDE residuals, enabling a large number of collocation points ($>10^7$) on a single commodity GPU.
The experimental results show significantly reduced computational costs ($62\times$ in wall-clock time, $1,394\times$ in FLOPs given the same number of collocation points) in multi-dimensional PDEs while achieving better accuracy.
Furthermore, we present that SPINN can solve a chaotic (2+1)-d Navier-Stokes equation much faster than the best-performing prior method (9 minutes vs. 10 hours in a single GPU), maintaining accuracy.
Finally, we showcase that SPINN can accurately obtain the solution of a highly nonlinear and multi-dimensional PDE, a (3+1)-d Navier-Stokes equation.
For visualized results and code, please see https://jwcho5576.github.io/spinn.github.io/.
|
Separable Physics-Informed Neural Networks
|
[
"Junwoo Cho",
"Seungtae Nam",
"Hyunmo Yang",
"Seok-Bae Yun",
"Youngjoon Hong",
"Eunbyung Park"
] |
Conference
|
spotlight
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dEDdRWunxU
|
@inproceedings{
cai2023fedco,
title={Fed-{CO}\$\_\{2\}\$: Cooperation of Online and Offline Models for Severe Data Heterogeneity in Federated Learning},
author={Zhongyi Cai and Ye Shi and Wei Huang and Jingya Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dEDdRWunxU}
}
|
Federated Learning (FL) has emerged as a promising distributed learning paradigm that enables multiple clients to learn a global model collaboratively without sharing their private data. However, the effectiveness of FL is highly dependent on the quality of the data that is being used for training. In particular, data heterogeneity issues, such as label distribution skew and feature skew, can significantly impact the performance of FL. Previous studies in FL have primarily focused on addressing label distribution skew data heterogeneity, while only a few recent works have made initial progress in tackling feature skew issues. Notably, these two forms of data heterogeneity have been studied separately and have not been well explored within a unified FL framework. To address this gap, we propose Fed-CO$_2$, a universal FL framework that handles both label distribution skew and feature skew within a Cooperation mechanism between the Online and Offline models. Specifically, the online model learns general knowledge that is shared among all clients, while the offline model is trained locally to learn the specialized knowledge of each individual client. To further enhance model cooperation in the presence of feature shifts, we design an intra-client knowledge transfer mechanism that reinforces mutual learning between the online and offline models, and an inter-client knowledge transfer mechanism to increase the models’ domain generalization ability. Extensive experiments show that our Fed-CO$_2$ outperforms a wide range of existing personalized federated learning algorithms in terms of handling label distribution skew and feature skew, both individually and collectively. The empirical results are supported by our convergence analyses in a simplified setting.
|
Fed-CO_2: Cooperation of Online and Offline Models for Severe Data Heterogeneity in Federated Learning
|
[
"Zhongyi Cai",
"Ye Shi",
"Wei Huang",
"Jingya Wang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dDk6URGRXP
|
@inproceedings{
hihat2023online,
title={Online Inventory Problems: Beyond the i.i.d. Setting with Online Convex Optimization},
author={Massil HIHAT and St{\'e}phane Ga{\"\i}ffas and Guillaume Garrigos and Simon Bussy},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dDk6URGRXP}
}
|
We study multi-product inventory control problems where a manager makes sequential replenishment decisions based on partial historical information in order to minimize its cumulative losses. Our motivation is to consider general demands, losses and dynamics to go beyond standard models which usually rely on newsvendor-type losses, fixed dynamics, and unrealistic i.i.d. demand assumptions. We propose MaxCOSD, an online algorithm that has provable guarantees even for problems with non-i.i.d. demands and stateful dynamics, including for instance perishability. We consider what we call non-degeneracy assumptions on the demand process, and argue that they are necessary to allow learning.
|
Online Inventory Problems: Beyond the i.i.d. Setting with Online Convex Optimization
|
[
"Massil HIHAT",
"Stéphane Gaïffas",
"Guillaume Garrigos",
"Simon Bussy"
] |
Conference
|
poster
|
2307.06048
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dCYBAGQXLo
|
@inproceedings{
lee2023supervised,
title={Supervised Pretraining Can Learn In-Context Reinforcement Learning},
author={Jonathan Lee and Annie Xie and Aldo Pacchiano and Yash Chandak and Chelsea Finn and Ofir Nachum and Emma Brunskill},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dCYBAGQXLo}
}
|
Large transformer models trained on diverse datasets have shown a remarkable ability to learn in-context, achieving high few-shot performance on tasks they were not explicitly trained to solve. In this paper, we study the in-context learning capabilities of transformers in decision-making problems, i.e., reinforcement learning (RL) for bandits and Markov decision processes. To do so, we introduce and study the Decision-Pretrained Transformer (DPT), a supervised pretraining method where a transformer predicts an optimal action given a query state and an in-context dataset of interactions from a diverse set of tasks. While simple, this procedure produces a model with several surprising capabilities. We find that the trained transformer can solve a range of RL problems in-context, exhibiting both exploration online and conservatism offline, despite not being explicitly trained to do so. The model also generalizes beyond the pretraining distribution to new tasks and automatically adapts its decision-making strategies to unknown structure. Theoretically, we show DPT can be viewed as an efficient implementation of Bayesian posterior sampling, a provably sample-efficient RL algorithm. We further leverage this connection to provide guarantees on the regret of the in-context algorithm yielded by DPT, and prove that it can learn faster than algorithms used to generate the pretraining data. These results suggest a promising yet simple path towards instilling strong in-context decision-making abilities in transformers.
|
Supervised Pretraining Can Learn In-Context Reinforcement Learning
|
[
"Jonathan Lee",
"Annie Xie",
"Aldo Pacchiano",
"Yash Chandak",
"Chelsea Finn",
"Ofir Nachum",
"Emma Brunskill"
] |
Conference
|
spotlight
|
2306.14892
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dCAk9VlegR
|
@inproceedings{
ma2023this,
title={This Looks Like Those: Illuminating Prototypical Concepts Using Multiple Visualizations},
author={Chiyu Ma and Brandon Zhao and Chaofan Chen and Cynthia Rudin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dCAk9VlegR}
}
|
We present ProtoConcepts, a method for interpretable image classification combining deep learning and case-based reasoning using prototypical parts. Existing work in prototype-based image classification uses a "this looks like that'' reasoning process, which dissects a test image by finding prototypical parts and combining evidence from these prototypes to make a final classification. However, all of the existing prototypical part-based image classifiers provide only one-to-one comparisons, where a single training image patch serves as a prototype to compare with a part of our test image. With these single-image comparisons, it can often be difficult to identify the underlying concept being compared (e.g., "is it comparing the color or the shape?''). Our proposed method modifies the architecture of prototype-based networks to instead learn prototypical concepts which are visualized using multiple image patches. Having multiple visualizations of the same prototype allows us to more easily identify the concept captured by that prototype (e.g., "the test image and the related training patches are all the same shade of blue''), and allows our model to create richer, more interpretable visual explanations. Our experiments show that our ``this looks like those'' reasoning process can be applied as a modification to a wide range of existing prototypical image classification networks while achieving comparable accuracy on benchmark datasets.
|
This Looks Like Those: Illuminating Prototypical Concepts Using Multiple Visualizations
|
[
"Chiyu Ma",
"Brandon Zhao",
"Chaofan Chen",
"Cynthia Rudin"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dB4lvScPIj
|
@inproceedings{
lan2023smooseg,
title={SmooSeg: Smoothness Prior for Unsupervised Semantic Segmentation},
author={Mengcheng Lan and Xinjiang Wang and Yiping Ke and Jiaxing Xu and Litong Feng and Wayne Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dB4lvScPIj}
}
|
Unsupervised semantic segmentation is a challenging task that segments images into semantic groups without manual annotation. Prior works have primarily focused on leveraging prior knowledge of semantic consistency or priori concepts from self-supervised learning methods, which often overlook the coherence property of image segments. In this paper, we demonstrate that the smoothness prior, asserting that close features in a metric space share the same semantics, can significantly simplify segmentation by casting unsupervised semantic segmentation as an energy minimization problem. Under this paradigm, we propose a novel approach called SmooSeg that harnesses self-supervised learning methods to model the closeness relationships among observations as smoothness signals. To effectively discover coherent semantic segments, we introduce a novel smoothness loss that promotes piecewise smoothness within segments while preserving discontinuities across different segments. Additionally, to further enhance segmentation quality, we design an asymmetric teacher-student style predictor that generates smoothly updated pseudo labels, facilitating an optimal fit between observations and labeling outputs. Thanks to the rich supervision cues of the smoothness prior, our SmooSeg significantly outperforms STEGO in terms of pixel accuracy on three datasets: COCOStuff (+14.9\%), Cityscapes (+13.0\%), and Potsdam-3 (+5.7\%).
|
SmooSeg: Smoothness Prior for Unsupervised Semantic Segmentation
|
[
"Mengcheng Lan",
"Xinjiang Wang",
"Yiping Ke",
"Jiaxing Xu",
"Litong Feng",
"Wayne Zhang"
] |
Conference
|
poster
|
2310.17874
|
[
"https://github.com/mc-lan/smooseg"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=dAbGv5Jz5U
|
@inproceedings{
zhang2023contrastive,
title={Contrastive Sampling Chains in Diffusion Models},
author={Junyu Zhang and Daochang Liu and Shichao Zhang and Chang Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dAbGv5Jz5U}
}
|
The past few years have witnessed great success in the use of diffusion models (DMs) to generate high-fidelity images with the help of stochastic differential equations (SDEs). However, discretization error is an inevitable limitation when utilizing numerical solvers to solve SDEs. To address this limitation, we provide a theoretical analysis demonstrating that an appropriate combination of the contrastive loss and score matching serves as an upper bound of the KL divergence between the true data distribution and the model distribution. To obtain this bound, we utilize a contrastive loss to construct a contrastive sampling chain to fine-tune the pre-trained DM. In this manner, our method reduces the discretization error and thus yields a smaller gap between the true data distribution and our model distribution. Moreover, the presented method can be applied to fine-tune various pre-trained DMs, with or without fast sampling algorithms, contributing to better sample quality or slightly faster sampling speeds. To validate the efficacy of our method, we conduct comprehensive experiments. For example, on CIFAR10, when applied to a pre-trained EDM, our method improves the FID from 2.04 to 1.88 with 35 neural function evaluations (NFEs), and reduces NFEs from 35 to 25 to achieve the same 2.04 FID.
|
Contrastive Sampling Chains in Diffusion Models
|
[
"Junyu Zhang",
"Daochang Liu",
"Shichao Zhang",
"Chang Xu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=dAJrxQz1lk
|
@inproceedings{
zhu2023anet,
title={A*Net: A Scalable Path-based Reasoning Approach for Knowledge Graphs},
author={Zhaocheng Zhu and Xinyu Yuan and Mikhail Galkin and Sophie Xhonneux and Ming Zhang and Maxime Gazeau and Jian Tang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=dAJrxQz1lk}
}
|
Reasoning on large-scale knowledge graphs has been long dominated by embedding methods. While path-based methods possess the inductive capacity that embeddings lack, their scalability is limited by the exponential number of paths. Here we present A\*Net, a scalable path-based method for knowledge graph reasoning. Inspired by the A\* algorithm for shortest path problems, our A\*Net learns a priority function to select important nodes and edges at each iteration, to reduce time and memory footprint for both training and inference. The ratio of selected nodes and edges can be specified to trade off between performance and efficiency. Experiments on both transductive and inductive knowledge graph reasoning benchmarks show that A\*Net achieves competitive performance with existing state-of-the-art path-based methods, while merely visiting 10% nodes and 10% edges at each iteration. On a million-scale dataset ogbl-wikikg2, A\*Net not only achieves a new state-of-the-art result, but also converges faster than embedding methods. A\*Net is the first path-based method for knowledge graph reasoning at such scale.
|
A*Net: A Scalable Path-based Reasoning Approach for Knowledge Graphs
|
[
"Zhaocheng Zhu",
"Xinyu Yuan",
"Mikhail Galkin",
"Sophie Xhonneux",
"Ming Zhang",
"Maxime Gazeau",
"Jian Tang"
] |
Conference
|
poster
|
2206.04798
|
[
"https://github.com/deepgraphlearning/astarnet"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=d8j3lsBWpV
|
@inproceedings{
kurtic2023ziplm,
title={Zip{LM}: Inference-Aware Structured Pruning of Language Models},
author={Eldar Kurtic and Elias Frantar and Dan Alistarh},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=d8j3lsBWpV}
}
|
The breakthrough performance of large language models (LLMs) comes with major computational footprints and high deployment costs. In this paper, we progress towards resolving this problem by proposing a novel structured compression approach for LLMs, called ZipLM. ZipLM achieves state-of-the-art accuracy-vs-speedup, while matching a set of desired target runtime speedups in any given inference environment. Specifically, given a model, a dataset, an inference environment, as well as a set of speedup targets, ZipLM iteratively identifies and removes components with the worst loss-runtime trade-off. Unlike prior methods that specialize in either the *post-training/one-shot* or the *gradual compression* setting, and only for specific families of models such as BERT (*encoder*) or GPT (*decoder*), ZipLM produces state-of-the-art compressed models across all these settings. Furthermore, ZipLM achieves superior results for a fraction of the computational cost relative to prior distillation and pruning techniques, making it a cost-effective approach for generating an entire family of smaller, faster, and highly accurate models, guaranteed to meet the desired inference specifications. In particular, ZipLM outperforms all prior BERT-base distillation and pruning techniques, such as CoFi, MiniLM, and TinyBERT. Moreover, it matches the performance of the heavily optimized MobileBERT model, obtained via extensive architecture search, by simply pruning the baseline BERT-large model. When compressing GPT2, ZipLM outperforms DistilGPT2 while being 60\% smaller and 30\% faster. Our code is available at: https://github.com/IST-DASLab/ZipLM.
|
ZipLM: Inference-Aware Structured Pruning of Language Models
|
[
"Eldar Kurtic",
"Elias Frantar",
"Dan Alistarh"
] |
Conference
|
poster
|
2302.04089
|
[
"https://github.com/ist-daslab/ziplm"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=d86B6Mdweq
|
@inproceedings{
ge2023d,
title={3D Copy-Paste: Physically Plausible Object Insertion for Monocular 3D Detection},
author={Yunhao Ge and Hong-Xing Yu and Cheng Zhao and Yuliang Guo and Xinyu Huang and Liu Ren and Laurent Itti and Jiajun Wu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=d86B6Mdweq}
}
|
A major challenge in monocular 3D object detection is the limited diversity and quantity of objects in real datasets. While augmenting real scenes with virtual objects holds promise to improve both the diversity and quantity of the objects, it remains elusive due to the lack of an effective 3D object insertion method in complex real captured scenes. In this work, we study augmenting complex real indoor scenes with virtual objects for monocular 3D object detection. The main challenge is to automatically identify plausible physical properties for virtual assets (e.g., locations, appearances, sizes, etc.) in cluttered real scenes. To address this challenge, we propose a physically plausible indoor 3D object insertion approach to automatically copy virtual objects and paste them into real scenes. The resulting objects in scenes have 3D bounding boxes with plausible physical locations and appearances. In particular, our method first identifies physically feasible locations and poses for the inserted objects to prevent collisions with the existing room layout. Subsequently, it estimates spatially-varying illumination for the insertion location, enabling the immersive blending of the virtual objects into the original scene with plausible appearances and cast shadows. We show that our augmentation method significantly improves existing monocular 3D object models and achieves state-of-the-art performance. For the first time, we demonstrate that a physically plausible 3D object insertion, serving as a generative data augmentation technique, can lead to significant improvements for discriminative downstream tasks such as monocular 3D object detection. Project website: https://gyhandy.github.io/3D-Copy-Paste/.
|
3D Copy-Paste: Physically Plausible Object Insertion for Monocular 3D Detection
|
[
"Yunhao Ge",
"Hong-Xing Yu",
"Cheng Zhao",
"Yuliang Guo",
"Xinyu Huang",
"Liu Ren",
"Laurent Itti",
"Jiajun Wu"
] |
Conference
|
poster
|
2312.05277
|
[
"https://github.com/gyhandy/3d-copy-paste"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=d85pPNBHLt
|
@inproceedings{
sun2023metaadam,
title={Meta-AdaM: An Meta-Learned Adaptive Optimizer with Momentum for Few-Shot Learning},
author={Siyuan Sun and Hongyang Gao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=d85pPNBHLt}
}
|
We introduce Meta-AdaM, a meta-learned adaptive optimizer with momentum, designed for few-shot learning tasks that pose significant challenges to deep learning models due to the limited number of labeled examples. Meta-learning has been successfully employed to address these challenges by transferring meta-learned prior knowledge to new tasks. Most existing works focus on meta-learning an optimal model initialization or an adaptive learning rate learner for rapid convergence. However, these approaches either neglect to consider weight-update history for the adaptive learning rate learner or fail to effectively integrate momentum for fast convergence, as seen in many-shot learning settings. To tackle these limitations, we propose a meta-learned learning rate learner that utilizes weight-update history as input to predict more appropriate learning rates for rapid convergence. Furthermore, for the first time, our approach incorporates momentum into the optimization process of few-shot learning via a double look-ahead mechanism, enabling rapid convergence similar to many-shot settings. Extensive experimental results on benchmark datasets demonstrate the effectiveness of the proposed Meta-AdaM.
|
Meta-AdaM: An Meta-Learned Adaptive Optimizer with Momentum for Few-Shot Learning
|
[
"Siyuan Sun",
"Hongyang Gao"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=d7a5TpePV7
|
@inproceedings{
zhang2023how,
title={How to Fine-tune the Model: Unified Model Shift and Model Bias Policy Optimization},
author={Hai Zhang and Hang Yu and Junqiao Zhao and Di Zhang and Chang Huang and Hongtu Zhou and Xiao Zhang and Chen Ye},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=d7a5TpePV7}
}
|
Designing and deriving effective model-based reinforcement learning (MBRL) algorithms with a performance improvement guarantee is challenging, mainly attributed to the high coupling between model learning and policy optimization. Many prior methods that rely on return discrepancy to guide model learning ignore the impacts of model shift, which can lead to performance deterioration due to excessive model updates. Other methods use performance difference bound to explicitly consider model shift. However, these methods rely on a fixed threshold to constrain model shift, resulting in a heavy dependence on the threshold and a lack of adaptability during the training process. In this paper, we theoretically derive an optimization objective that can unify model shift and model bias and then formulate a fine-tuning process. This process adaptively adjusts the model updates to get a performance improvement guarantee while avoiding model overfitting. Based on these, we develop a straightforward algorithm USB-PO (Unified model Shift and model Bias Policy Optimization). Empirical results show that USB-PO achieves state-of-the-art performance on several challenging benchmark tasks.
|
How to Fine-tune the Model: Unified Model Shift and Model Bias Policy Optimization
|
[
"Hai Zhang",
"Hang Yu",
"Junqiao Zhao",
"Di Zhang",
"Chang Huang",
"Hongtu Zhou",
"Xiao Zhang",
"Chen Ye"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=d6LShzSTOP
|
@inproceedings{
xie2023unsupervised,
title={Unsupervised Image Denoising with Score Function},
author={Yutong Xie and Mingze Yuan and Bin Dong and Quanzheng Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=d6LShzSTOP}
}
|
Though achieving excellent performance in some cases, current unsupervised learning methods for single image denoising usually have constraints in applications. In this paper, we propose a new approach which is more general and applicable to complicated noise models. Utilizing the property of score function, the gradient of logarithmic probability, we define a solving system for denoising. Once the score function of noisy images has been estimated, the denoised result can be obtained through the solving system. Our approach can be applied to multiple noise models, such as the mixture of multiplicative and additive noise combined with structured correlation. Experimental results show that our method is comparable when the noise model is simple, and has good performance in complicated cases where other methods are not applicable or perform poorly.
|
Unsupervised Image Denoising with Score Function
|
[
"Yutong Xie",
"Mingze Yuan",
"Bin Dong",
"Quanzheng Li"
] |
Conference
|
poster
|
2304.08384
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=d4f40zJJIS
|
@inproceedings{
fang2023structural,
title={Structural Pruning for Diffusion Models},
author={Gongfan Fang and Xinyin Ma and Xinchao Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=d4f40zJJIS}
}
|
Generative modeling has recently undergone remarkable advancements, primarily propelled by the transformative implications of Diffusion Probabilistic Models (DPMs). The impressive capability of these models, however, often entails significant computational overhead during both training and inference. To tackle this challenge, we present Diff-Pruning, an efficient compression method tailored for learning lightweight diffusion models from pre-existing ones, without the need for extensive re-training. The essence of Diff-Pruning is encapsulated in a Taylor expansion over pruned timesteps, a process that disregards non-contributory diffusion steps and ensembles informative gradients to identify important weights. Our empirical assessment, undertaken across several datasets highlights two primary benefits of our proposed method: 1) Efficiency: it enables approximately a 50\% reduction in FLOPs at a mere 10% to 20% of the original training expenditure; 2) Consistency: the pruned diffusion models inherently preserve generative behavior congruent with their pre-trained models.
|
Structural Pruning for Diffusion Models
|
[
"Gongfan Fang",
"Xinyin Ma",
"Xinchao Wang"
] |
Conference
|
poster
|
2305.10924
|
[
"https://github.com/vainf/diff-pruning"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=d4X0QWS2Ln
|
@inproceedings{
dong2023towards,
title={Towards Test-Time Refusals via Concept Negation},
author={Peiran Dong and Song Guo and Junxiao Wang and Bingjie WANG and Jiewei Zhang and Ziming Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=d4X0QWS2Ln}
}
|
Generative models produce unbounded outputs, necessitating the use of refusal techniques to confine their output space. Employing generative refusals is crucial in upholding the ethical and copyright integrity of synthesized content, particularly when working with widely adopted diffusion models. "Concept negation'' presents a promising paradigm to achieve generative refusals, as it effectively defines and governs the model's output space based on concepts, utilizing natural language interfaces that are readily comprehensible to humans. However, despite the valuable contributions of prior research to the field of concept negation, it still suffers from significant limitations. The existing concept negation methods, which operate based on the composition of score or noise predictions from the diffusion process, are limited to independent concepts (e.g., ``a blonde girl`` without ``glasses``) and fail to consider the interconnected nature of concepts in reality (e.g., ``Mickey mouse eats ice cream`` without ``Disney characters``). Keeping the limitations in mind, we propose a novel framework, called $ProtoRe$, to improve the flexibility of concept negation via test-time negative concept identification along with purification in the feature space. $ProtoRe$ works by incorporating CLIP's language-contrastive knowledge to identify the prototype of negative concepts, extract the negative features from outputs using the prototype as a prompt, and further refine the attention maps by retrieving negative features. Our evaluation on multiple benchmarks shows that $ProtoRe$ outperforms state-of-the-art methods under various settings, in terms of the effectiveness of purification and the fidelity of generative images.
|
Towards Test-Time Refusals via Concept Negation
|
[
"Peiran Dong",
"Song Guo",
"Junxiao Wang",
"Bingjie WANG",
"Jiewei Zhang",
"Ziming Liu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=d47iuwOt3j
|
@inproceedings{
xie2023on,
title={On the Gini-impurity Preservation For Privacy Random Forests},
author={XinRan Xie and Man-Jie Yuan and Xuetong Bai and Wei Gao and Zhi-Hua Zhou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=d47iuwOt3j}
}
|
Random forests have been one of the most successful ensemble algorithms in machine learning. Various techniques have been utilized to preserve the privacy of random forests, from anonymization and differential privacy to homomorphic encryption, whereas these rarely take into account some crucial ingredients of the learning algorithm. This work presents a new encryption scheme to preserve the data's Gini impurity, which plays a crucial role during the construction of random forests. Our basic idea is to modify the structure of the binary search tree to store several examples in each node, and encrypt data features by incorporating label and order information. Theoretically, we prove that our scheme preserves the minimum Gini impurity in ciphertexts without decrypting, and present the security guarantee for encryption. For random forests, we encrypt data features based on our Gini-impurity-preserving scheme, and use the homomorphic encryption scheme CKKS to encrypt data labels due to their importance and privacy. We conduct extensive experiments to show the effectiveness, efficiency and security of our proposed method.
|
On the Gini-impurity Preservation For Privacy Random Forests
|
[
"XinRan Xie",
"Man-Jie Yuan",
"Xuetong Bai",
"Wei Gao",
"Zhi-Hua Zhou"
] |
Conference
|
spotlight
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=d2WsCmoITF
|
@inproceedings{
scetbon2023unbalanced,
title={Unbalanced Low-rank Optimal Transport Solvers},
author={Meyer Scetbon and Michal Klein and Giovanni Palla and marco cuturi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=d2WsCmoITF}
}
|
The relevance of optimal transport methods to machine learning has long been hindered by two salient limitations.
First, the $O(n^3)$ computational cost of standard sample-based solvers (when used on batches of $n$ samples) is prohibitive.
Second, the mass conservation constraint makes OT solvers too rigid in practice: because they must match \textit{all} points from both measures, their output can be heavily influenced by outliers.
A flurry of recent works in OT has addressed these computational and modelling limitations, but has resulted in two separate strains of methods:
While the computational outlook was much improved by entropic regularization, more recent $O(n)$ linear-time \textit{low-rank} solvers hold the promise to scale up OT further.
On the other hand, modelling rigidities have been eased owing to unbalanced variants of OT, that rely on penalization terms to promote, rather than impose, mass conservation.
The goal of this paper is to merge these two strains, to achieve the promise of \textit{both} versatile/scalable unbalanced/low-rank OT solvers.
We propose custom algorithms to implement these extensions for the linear OT problem and its Fused-Gromov-Wasserstein generalization, and demonstrate their practical relevance to challenging spatial transcriptomics matching problems.
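For orientation, a hedged editorial sketch (not taken verbatim from the paper): a standard unbalanced relaxation of the linear OT problem replaces the hard marginal constraints with KL penalties,

$$\min_{P \in \mathbb{R}_{+}^{n \times m}} \; \langle C, P \rangle \;+\; \tau_1\, \mathrm{KL}\!\left(P \mathbf{1}_m \,\Vert\, a\right) \;+\; \tau_2\, \mathrm{KL}\!\left(P^{\top} \mathbf{1}_n \,\Vert\, b\right),$$

so that mass conservation is promoted rather than imposed; the low-rank solvers additionally restrict $P$ to a factorization of the form $P = Q\, \mathrm{diag}(1/g)\, R^{\top}$ with rank $r \ll n$, which is what brings the cost down to linear time.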
|
Unbalanced Low-rank Optimal Transport Solvers
|
[
"Meyer Scetbon",
"Michal Klein",
"Giovanni Palla",
"marco cuturi"
] |
Conference
|
poster
|
2305.19727
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=d1wjMBYbP1
|
@inproceedings{
li2023zeroshot,
title={Zero-Shot Anomaly Detection via Batch Normalization},
author={Aodong Li and Chen Qiu and Marius Kloft and Padhraic Smyth and Maja Rudolph and Stephan Mandt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=d1wjMBYbP1}
}
|
Anomaly detection (AD) plays a crucial role in many safety-critical application domains. The challenge of adapting an anomaly detector to drift in the normal data distribution, especially when no training data is available for the "new normal," has led to the development of zero-shot AD techniques. In this paper, we propose a simple yet effective method called Adaptive Centered Representations (ACR) for zero-shot batch-level AD. Our approach trains off-the-shelf deep anomaly detectors (such as deep SVDD) to adapt to a set of inter-related training data distributions in combination with batch normalization, enabling automatic zero-shot generalization for unseen AD tasks. This simple recipe, batch normalization plus meta-training, is a highly effective and versatile tool. Our results demonstrate the first zero-shot AD results for tabular data and outperform existing methods in zero-shot anomaly detection and segmentation on image data from specialized domains.
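A minimal editorial sketch (not the authors' code) of the "batch normalization plus meta-training" recipe, assuming a deep SVDD-style objective; layer sizes and the training loop are illustrative only:

```python
import torch
import torch.nn as nn

class BNDeepSVDD(nn.Module):
    """Deep SVDD-style detector whose BatchNorm layers re-center each batch,
    the mechanism exploited for zero-shot adaptation to a shifted 'new normal'."""
    def __init__(self, in_dim: int, rep_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Linear(128, rep_dim), nn.BatchNorm1d(rep_dim),
        )
        self.center = nn.Parameter(torch.zeros(rep_dim), requires_grad=False)

    def score(self, x: torch.Tensor) -> torch.Tensor:
        # Anomaly score: squared distance of the batch-normalized embedding
        # to a fixed center.
        return ((self.net(x) - self.center) ** 2).sum(dim=1)

def meta_train_step(model, optimizer, batch):
    # Each step draws a batch from one of several related "normal"
    # distributions, so the detector learns to stay centered for whatever
    # distribution the current batch comes from.
    optimizer.zero_grad()
    loss = model.score(batch).mean()   # pull normal samples toward the center
    loss.backward()
    optimizer.step()
    return loss.item()
```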
|
Zero-Shot Anomaly Detection via Batch Normalization
|
[
"Aodong Li",
"Chen Qiu",
"Marius Kloft",
"Padhraic Smyth",
"Maja Rudolph",
"Stephan Mandt"
] |
Conference
|
poster
|
2302.07849
|
[
"https://github.com/aodongli/zero-shot-ad-via-batch-norm"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=d1knqWjmNt
|
@inproceedings{
baranwal2023optimality,
title={Optimality of Message-Passing Architectures for Sparse Graphs},
author={Aseem Baranwal and Kimon Fountoulakis and Aukosh Jagannath},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=d1knqWjmNt}
}
|
We study the node classification problem on feature-decorated graphs in the sparse setting, i.e., when the expected degree of a node is $O(1)$ in the number of nodes, in the fixed-dimensional asymptotic regime, i.e., the dimension of the feature data is fixed while the number of nodes is large. Such graphs are typically known to be locally tree-like. We introduce a notion of Bayes optimality for node classification tasks, called asymptotic local Bayes optimality, and compute the optimal classifier according to this criterion for a fairly general statistical data model with arbitrary distributions of the node features and edge connectivity. The optimal classifier is implementable using a message-passing graph neural network architecture. We then compute the generalization error of this classifier and compare its performance against existing learning methods theoretically on a well-studied statistical model with naturally identifiable signal-to-noise ratios (SNRs) in the data. We find that the optimal message-passing architecture interpolates between a standard MLP in the regime of low graph signal and a typical convolution in the regime of high graph signal. Furthermore, we prove a corresponding non-asymptotic result.
|
Optimality of Message-Passing Architectures for Sparse Graphs
|
[
"Aseem Baranwal",
"Kimon Fountoulakis",
"Aukosh Jagannath"
] |
Conference
|
poster
|
2305.10391
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=d0VItRE2ZH
|
@inproceedings{
xu2023vra,
title={{VRA}: Variational Rectified Activation for Out-of-distribution Detection},
author={Mingyu Xu and Zheng Lian and Bin Liu and Jianhua Tao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=d0VItRE2ZH}
}
|
Out-of-distribution (OOD) detection is critical to building reliable machine learning systems in the open world. Researchers have proposed various strategies to reduce model overconfidence on OOD data. Among them, ReAct is a typical and effective technique to deal with model overconfidence, which truncates high activations to increase the gap between in-distribution and OOD. Despite its promising results, is this technique the best choice? To answer this question, we leverage the variational method to find the optimal operation and verify the necessity of suppressing abnormally low and high activations and amplifying intermediate activations in OOD detection, rather than focusing only on high activations like ReAct. This motivates us to propose a novel technique called ``Variational Rectified Activation (VRA)'', which simulates these suppression and amplification operations using piecewise functions. Experimental results on multiple benchmark datasets demonstrate that our method outperforms existing post-hoc strategies. Meanwhile, VRA is compatible with different scoring functions and network architectures. Our code is available at https://github.com/zeroQiaoba/VRA.
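A hedged editorial sketch of one simple piecewise instantiation of the idea (thresholds and gain are illustrative placeholders, not the paper's fitted values), contrasted with the ReAct baseline that only truncates high activations:

```python
import torch

def vra_rectify(z: torch.Tensor, low: float = 0.1, high: float = 1.0,
                gain: float = 1.2) -> torch.Tensor:
    """Suppress abnormally low and high activations, amplify the middle range."""
    out = torch.zeros_like(z)              # activations below `low` are zeroed
    mid = (z >= low) & (z <= high)
    out[mid] = gain * z[mid]               # amplify intermediate activations
    out[z > high] = gain * high            # cap abnormally high activations
    return out

def react_rectify(z: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """ReAct-style baseline: truncate high activations only."""
    return torch.clamp(z, max=c)
```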
|
VRA: Variational Rectified Activation for Out-of-distribution Detection
|
[
"Mingyu Xu",
"Zheng Lian",
"Bin Liu",
"Jianhua Tao"
] |
Conference
|
poster
|
2302.11716
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=d0IEd3VgBh
|
@inproceedings{
heredia2023on,
title={On the Role of Randomization in Adversarially Robust Classification},
author={Lucas Gnecco Heredia and Muni Sreenivas Pydi and Laurent Meunier and benjamin negrevergne and Yann Chevaleyre},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=d0IEd3VgBh}
}
|
Deep neural networks are known to be vulnerable to small adversarial perturbations in test data. To defend against adversarial attacks, probabilistic classifiers have been proposed as an alternative to deterministic ones. However, literature has conflicting findings on the effectiveness of probabilistic classifiers in comparison to deterministic ones. In this paper, we clarify the role of randomization in building adversarially robust classifiers.
Given a base hypothesis set of deterministic classifiers, we show the conditions under which a randomized ensemble outperforms the hypothesis set in adversarial risk, extending previous results.
Additionally, we show that for any probabilistic binary classifier (including randomized ensembles), there exists a deterministic classifier that outperforms it. Finally, we give an explicit description of the deterministic hypothesis set that contains such a deterministic classifier for many types of commonly used probabilistic classifiers, *i.e.* randomized ensembles and parametric/input noise injection.
|
On the Role of Randomization in Adversarially Robust Classification
|
[
"Lucas Gnecco Heredia",
"Muni Sreenivas Pydi",
"Laurent Meunier",
"benjamin negrevergne",
"Yann Chevaleyre"
] |
Conference
|
spotlight
|
2302.07221
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=czwZnNf60r
|
@inproceedings{
yang2023exploring,
title={Exploring Diverse In-Context Configurations for Image Captioning},
author={Xu Yang and Yongliang Wu and Mingzhuo Yang and Haokun Chen and Xin Geng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=czwZnNf60r}
}
|
After discovering that Language Models (LMs) can be good in-context few-shot learners, numerous strategies have been proposed to optimize in-context sequence configurations. Recently, researchers in Vision-Language (VL) domains have also developed their own few-shot learners, but they only use the simplest strategy, i.e., random sampling, to configure in-context image-text pairs. To explore the effects of varying configurations on VL in-context learning, we devised four strategies for image selection and four for caption assignment to configure in-context image-text pairs for image captioning. Image captioning is used as the case study here since it can be seen as a visually-conditioned LM. Our comprehensive experiments yield two counter-intuitive but valuable insights, highlighting the distinct characteristics of VL in-context learning due to multi-modal synergy, as compared to the NLP case. Furthermore, in our exploration of optimal combination strategies, we observed an average performance enhancement of 20.9 in CIDEr score compared to the baseline. The code is given at https://github.com/yongliang-wu/ExploreCfg.
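An editorial sketch of one plausible similarity-based image-selection strategy versus the random-sampling default (the paper's four selection and four assignment strategies are not reproduced here); `image_embeds` and `query_embed` are assumed to be precomputed, L2-normalized embeddings, e.g., from a CLIP image encoder:

```python
import numpy as np

def select_in_context_images(query_embed: np.ndarray,
                             image_embeds: np.ndarray,
                             k: int = 4) -> np.ndarray:
    """Return indices of the k most similar candidate images."""
    sims = image_embeds @ query_embed      # cosine similarity (normalized inputs)
    return np.argsort(-sims)[:k]

def random_baseline(num_images: int, k: int = 4, seed: int = 0) -> np.ndarray:
    """The random-sampling configuration used in prior VL few-shot work."""
    rng = np.random.default_rng(seed)
    return rng.choice(num_images, size=k, replace=False)
```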
|
Exploring Diverse In-Context Configurations for Image Captioning
|
[
"Xu Yang",
"Yongliang Wu",
"Mingzhuo Yang",
"Haokun Chen",
"Xin Geng"
] |
Conference
|
poster
|
2305.14800
|
[
"https://github.com/yongliang-wu/explorecfg"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=cxazQGSsQa
|
@inproceedings{
lam2023efficient,
title={Efficient Neural Music Generation},
author={Max W. Y. Lam and Qiao Tian and Tang Li and Zongyu Yin and Siyuan Feng and Ming Tu and Yuliang Ji and Rui Xia and Mingbo Ma and Xuchen Song and Jitong Chen and Yuping Wang and Yuxuan Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=cxazQGSsQa}
}
|
Recent progress in music generation has been remarkably advanced by the state-of-the-art MusicLM, which comprises a hierarchy of three LMs for semantic, coarse acoustic, and fine acoustic modeling, respectively. Yet, sampling with MusicLM requires processing through these LMs one by one to obtain the fine-grained acoustic tokens, making it computationally expensive and prohibitive for real-time generation. Efficient music generation with a quality on par with MusicLM remains a significant challenge.
In this paper, we present **M**e**L**o**D**y (**M** for music; **L** for LM; **D** for diffusion), an LM-guided diffusion model that generates music audio of state-of-the-art quality while reducing the forward passes of MusicLM by 95.7\% to 99.6\%, respectively, for sampling 10s to 30s of music. MeLoDy inherits the highest-level LM from MusicLM for semantic modeling, and applies a novel dual-path diffusion (DPD) model and an audio VAE-GAN to efficiently decode the conditioning semantic tokens into waveforms. DPD is proposed to simultaneously model the coarse and fine acoustics by effectively incorporating the semantic information into segments of latents via cross-attention at each denoising step. Our experimental results suggest the superiority of MeLoDy, not only in its practical advantages in sampling speed and infinitely continuable generation, but also in its state-of-the-art musicality, audio quality, and text correlation.
Our samples are available at https://Efficient-MeLoDy.github.io/.
|
Efficient Neural Music Generation
|
[
"Max W. Y. Lam",
"Qiao Tian",
"Tang Li",
"Zongyu Yin",
"Siyuan Feng",
"Ming Tu",
"Yuliang Ji",
"Rui Xia",
"Mingbo Ma",
"Xuchen Song",
"Jitong Chen",
"Yuping Wang",
"Yuxuan Wang"
] |
Conference
|
poster
|
2305.15719
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=cx9a4Xvb3l
|
@inproceedings{
sun2023privacy,
title={Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception?},
author={Xiaoxiao Sun and Nidham Gazagnadou and Vivek Sharma and Lingjuan Lyu and Hongdong Li and Liang Zheng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=cx9a4Xvb3l}
}
|
Hand-crafted image quality metrics, such as PSNR and SSIM, are commonly used to evaluate model privacy risk under reconstruction attacks. Under these metrics, reconstructed images that are determined to resemble the original one generally indicate more privacy leakage. Images determined to be overall dissimilar, on the other hand, indicate higher robustness against attack. However, there is no guarantee that these metrics reflect human opinion well, which is what offers a trustworthy judgement of model privacy leakage. In this paper, we comprehensively study the faithfulness of these hand-crafted metrics to human perception of privacy information in the reconstructed images. On 5 datasets ranging from natural images and faces to fine-grained classes, we use 4 existing attack methods to reconstruct images from many different classification models and, for each reconstructed image, we ask multiple human annotators to assess whether this image is recognizable. Our studies reveal that the hand-crafted metrics only have a weak correlation with the human evaluation of privacy leakage and that even these metrics themselves often contradict each other. These observations suggest risks in the metrics currently used by the community. To address this potential risk, we propose a learning-based measure called SemSim to evaluate the Semantic Similarity between the original and reconstructed images. SemSim is trained with a standard triplet loss, using an original image as an anchor, one of its recognizable reconstructed images as a positive sample, and an unrecognizable one as a negative. By training on human annotations, SemSim better reflects privacy leakage at the semantic level. We show that SemSim has a significantly higher correlation with human judgment compared with existing metrics. Moreover, this strong correlation generalizes to unseen datasets, models and attack methods. We envision this work as a milestone for image quality evaluation closer to the human level. The project webpage can be accessed at https://sites.google.com/view/semsim.
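A minimal editorial sketch (not the authors' released code) of the triplet setup described above: the anchor is an original image, the positive a reconstruction annotators found recognizable, the negative an unrecognizable one; `Encoder` is a placeholder embedding network.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Placeholder embedding network used to illustrate the triplet training."""
    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):
        return self.backbone(x)

encoder = Encoder()
triplet = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def train_step(original, recognizable_recon, unrecognizable_recon):
    optimizer.zero_grad()
    loss = triplet(encoder(original),
                   encoder(recognizable_recon),
                   encoder(unrecognizable_recon))
    loss.backward()
    optimizer.step()
    return loss.item()

def semsim_score(original, reconstruction):
    # Lower embedding distance = more semantic similarity = more privacy leakage.
    with torch.no_grad():
        return torch.norm(encoder(original) - encoder(reconstruction), dim=1)
```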
|
Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception?
|
[
"Xiaoxiao Sun",
"Nidham Gazagnadou",
"Vivek Sharma",
"Lingjuan Lyu",
"Hongdong Li",
"Liang Zheng"
] |
Conference
|
spotlight
|
2309.13038
|
[
""
] |
https://huggingface.co/papers/2309.13038
| 0 | 0 | 0 | 6 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=cwjh8lqmOL
|
@inproceedings{
yang2023gpttools,
title={{GPT}4Tools: Teaching Large Language Model to Use Tools via Self-instruction},
author={Rui Yang and Lin Song and Yanwei Li and Sijie Zhao and Yixiao Ge and Xiu Li and Ying Shan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=cwjh8lqmOL}
}
|
This paper aims to efficiently enable Large Language Models (LLMs) to use multi-modal tools.
The advanced proprietary LLMs, such as ChatGPT and GPT-4, have shown great potential for tool usage through sophisticated prompt engineering.
Nevertheless, these models typically incur prohibitive computational costs and rely on publicly inaccessible data.
To address these challenges, we propose GPT4Tools, based on self-instruction, to enable open-source LLMs, such as LLaMA and OPT, to use tools.
It generates an instruction-following dataset by prompting an advanced teacher with various multi-modal contexts.
By using the Low-Rank Adaptation (LoRA) optimization, our approach enables the open-source LLMs to solve a range of visual problems, including visual comprehension and image generation.
Moreover, we provide a benchmark to evaluate the ability of LLMs to use tools, covering both zero-shot and fine-tuning settings.
Extensive experiments demonstrate the effectiveness of our method on various language models, which not only significantly improves the accuracy of invoking seen tools, but also enables the zero-shot capacity for unseen tools.
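A hedged editorial sketch of the LoRA step (assumed, not the authors' exact recipe): the checkpoint name and hyperparameters below are placeholders for illustration only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "huggyllama/llama-7b"             # placeholder open-source checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],       # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)     # only the LoRA adapters are trainable
model.print_trainable_parameters()
# The wrapped model can then be fine-tuned (e.g., with transformers.Trainer)
# on self-instructed tool-use instruction-following examples.
```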
|
GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction
|
[
"Rui Yang",
"Lin Song",
"Yanwei Li",
"Sijie Zhao",
"Yixiao Ge",
"Xiu Li",
"Ying Shan"
] |
Conference
|
poster
|
2305.18752
|
[
"https://github.com/stevengrove/gpt4tools"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=cwBeRBe9hq
|
@inproceedings{
raman2023on,
title={On the Learnability of Multilabel Ranking},
author={Vinod Raman and UNIQUE SUBEDI and Ambuj Tewari},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=cwBeRBe9hq}
}
|
Multilabel ranking is a central task in machine learning. However, the most fundamental question of learnability in a multilabel ranking setting with relevance-score feedback remains unanswered. In this work, we characterize the learnability of multilabel ranking problems in both batch and online settings for a large family of ranking losses. Along the way, we give two equivalence classes of ranking losses based on learnability that capture most losses used in practice.
|
On the Learnability of Multilabel Ranking
|
[
"Vinod Raman",
"UNIQUE SUBEDI",
"Ambuj Tewari"
] |
Conference
|
spotlight
|
2304.03337
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=cto6jIIbMZ
|
@inproceedings{
nguyen2023demystifying,
title={Demystifying Softmax Gating Function in Gaussian Mixture of Experts},
author={Huy Nguyen and TrungTin Nguyen and Nhat Ho},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=cto6jIIbMZ}
}
|
Understanding the parameter estimation of softmax gating Gaussian mixture of experts has remained a long-standing open problem in the literature. It is mainly due to three fundamental theoretical challenges associated with the softmax gating function: (i) the identifiability only up to the translation of parameters; (ii) the intrinsic interaction via partial differential equations between the softmax gating and the expert functions in the Gaussian density; (iii) the complex dependence between the numerator and denominator of the conditional density of softmax gating Gaussian mixture of experts. We resolve these challenges by proposing novel Voronoi loss functions among parameters and establishing the convergence rates of maximum likelihood estimator (MLE) for solving parameter estimation in these models. When the true number of experts is unknown and over-specified, our findings show a connection between the convergence rate of the MLE and a solvability problem of a system of polynomial equations.
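As an editorial aside for readers unfamiliar with the model class (the exact parameterization in the paper may differ): a softmax-gating Gaussian mixture of experts with, e.g., linear-mean experts has conditional density

$$p(y \mid x) \;=\; \sum_{k=1}^{K} \frac{\exp\!\left(\beta_k^{\top} x + \beta_{0k}\right)}{\sum_{j=1}^{K} \exp\!\left(\beta_j^{\top} x + \beta_{0j}\right)}\; \mathcal{N}\!\left(y \mid a_k^{\top} x + b_k,\; \sigma_k^{2}\right),$$

which makes the first challenge concrete: adding a constant to every gating intercept $\beta_{0k}$ leaves the density unchanged, so the gating parameters are identifiable only up to translation.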
|
Demystifying Softmax Gating Function in Gaussian Mixture of Experts
|
[
"Huy Nguyen",
"TrungTin Nguyen",
"Nhat Ho"
] |
Conference
|
spotlight
|
2305.03288
|
[
""
] |
https://huggingface.co/papers/2305.03288
| 0 | 1 | 0 | 3 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=cslnCXE9XA
|
@inproceedings{
yan2023counterfactual,
title={Counterfactual Generation with Identifiability Guarantees},
author={Hanqi Yan and Lingjing Kong and Lin Gui and Yuejie Chi and Eric Xing and Yulan He and Kun Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=cslnCXE9XA}
}
|
Counterfactual generation lies at the core of various machine learning tasks, including image translation and controllable text generation. This generation process usually requires the identification of the disentangled latent representations, such as content and style, that underlie the observed data. However, it becomes more challenging when faced with a scarcity of paired data and labelling information. Existing disentangled methods crucially rely on oversimplified assumptions, such as assuming independent content and style variables, to identify the latent variables, even though such assumptions may not hold for complex data distributions. For instance, food reviews tend to involve words like “tasty”, whereas movie reviews commonly contain words such as “thrilling” for the same positive sentiment. This problem is exacerbated when data are sampled from multiple domains since the dependence between content and style may vary significantly over domains. In this work, we tackle the domain-varying dependence between the content and the style variables inherent in the counterfactual generation task. We provide identification guarantees for such latent-variable models by leveraging the relative sparsity of the influences from different latent variables. Our theoretical insights enable the development of a doMain AdapTive counTerfactual gEneration model, called (MATTE). Our theoretically grounded framework achieves state-of-the-art performance in unsupervised style transfer tasks, where neither paired data nor style labels are utilized, across four large-scale datasets.
|
Counterfactual Generation with Identifiability Guarantees
|
[
"Hanqi Yan",
"Lingjing Kong",
"Lin Gui",
"Yuejie Chi",
"Eric Xing",
"Yulan He",
"Kun Zhang"
] |
Conference
|
poster
|
2402.15309
|
[
"https://github.com/hanqi-qi/matte"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |