bibtex_url | proceedings | bibtext | abstract | title | authors | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | paper_page_exists_pre_conf | Models | Datasets | Spaces
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
null |
https://openreview.net/forum?id=htM8yp2EwX
|
@inproceedings{
ding2023amdp,
title={{AMDP}: An Adaptive Detection Procedure for False Discovery Rate Control in High-Dimensional Mediation Analysis},
author={Jiarong Ding and Xuehu Zhu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=htM8yp2EwX}
}
|
High-dimensional mediation analysis is often associated with a multiple testing problem for detecting significant mediators. Assessing the uncertainty of this detection process via the false discovery rate (FDR) has garnered great interest. To control the FDR in multiple testing, two essential steps are involved: ranking and selection. Existing approaches either construct p-values without calibration or disregard the joint information across tests, leading to conservative FDR control or non-optimal ranking rules for multiple hypotheses. In this paper, we develop an adaptive mediation detection procedure (referred to as "AMDP") to identify relevant mediators while asymptotically controlling the FDR in high-dimensional mediation analysis. AMDP produces the optimal rule for ranking hypotheses and proposes a data-driven strategy to determine the threshold for mediator selection. This novel method captures information from the proportions of composite null hypotheses and the distribution of p-values, which turns the high dimensionality into an advantage instead of a limitation. Numerical studies on synthetic and real data sets illustrate the performance of AMDP compared with existing approaches.
|
AMDP: An Adaptive Detection Procedure for False Discovery Rate Control in High-Dimensional Mediation Analysis
|
[
"Jiarong Ding",
"Xuehu Zhu"
] |
Conference
|
spotlight
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
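The AMDP record above frames FDR control as a ranking step followed by a selection step. As a point of reference only, here is the classic Benjamini-Hochberg instance of that two-step template in Python; AMDP itself replaces the plain p-value ordering with an optimal ranking rule and a data-driven threshold, neither of which is reproduced here.

```python
import numpy as np

def bh_select(pvals, q=0.1):
    """Rank p-values and pick a data-driven threshold (Benjamini-Hochberg).

    This is NOT the AMDP procedure from the record above, just the
    standard ranking-and-selection template that AMDP refines using
    composite-null proportions and the p-value distribution.
    """
    m = len(pvals)
    order = np.argsort(pvals)                 # ranking step
    ranked = pvals[order]
    # selection step: largest k with p_(k) <= k*q/m sets the threshold
    below = np.nonzero(ranked <= (np.arange(1, m + 1) / m) * q)[0]
    if below.size == 0:
        return np.array([], dtype=int)
    return order[: below[-1] + 1]             # indices of selected hypotheses

# toy example: 950 nulls, 50 signals with small p-values
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(size=950), rng.beta(1, 50, size=50)])
print(len(bh_select(p, q=0.1)), "discoveries at FDR level 0.1")
```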
null |
https://openreview.net/forum?id=hrkmlPhp1u
|
@inproceedings{
zhao2023unipc,
title={Uni{PC}: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models},
author={Wenliang Zhao and Lujia Bai and Yongming Rao and Jie Zhou and Jiwen Lu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hrkmlPhp1u}
}
|
Diffusion probabilistic models (DPMs) have demonstrated a very promising ability in high-resolution image synthesis. However, sampling from a pre-trained DPM is time-consuming due to the multiple evaluations of the denoising network, making it more and more important to accelerate the sampling of DPMs. Despite recent progress in designing fast samplers, existing methods still cannot generate satisfying images in many applications where fewer steps (e.g., $<$10) are favored. In this paper, we develop a unified corrector (UniC) that can be applied after any existing DPM sampler to increase the order of accuracy without extra model evaluations, and derive a unified predictor (UniP) that supports arbitrary order as a byproduct. Combining UniP and UniC, we propose a unified predictor-corrector framework called UniPC for the fast sampling of DPMs, which has a unified analytical form for any order and can significantly improve the sampling quality over previous methods, especially in extremely few steps. We evaluate our methods through extensive experiments including both unconditional and conditional sampling using pixel-space and latent-space DPMs. Our UniPC can achieve 3.87 FID on CIFAR10 (unconditional) and 7.51 FID on ImageNet 256$\times$256 (conditional) with only 10 function evaluations. Code is available at https://github.com/wl-zhao/UniPC.
|
UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models
|
[
"Wenliang Zhao",
"Lujia Bai",
"Yongming Rao",
"Jie Zhou",
"Jiwen Lu"
] |
Conference
|
poster
|
2302.04867
|
[
"https://github.com/wl-zhao/unipc"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
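The UniPC record above describes a predictor-corrector scheme for sampling diffusion ODEs. A minimal textbook sketch of that structure follows; `ode_fn` is a hypothetical stand-in for the probability-flow ODE drift derived from a pretrained denoiser, and these Heun-style updates are NOT the actual UniPC formulas, which attain arbitrary order in a unified analytical form.

```python
def predictor_corrector_sample(x, ts, ode_fn):
    """Generic predictor-corrector ODE sampling (Heun-style sketch).

    `ode_fn(x, t)` is an assumed placeholder for the drift computed
    from a pretrained denoising network. In UniC, the evaluation at
    t_next is reused by the next predictor step, so the correction
    adds no extra model calls; the same reuse happens below.
    """
    for t, t_next in zip(ts[:-1], ts[1:]):
        h = t_next - t
        d = ode_fn(x, t)
        x_pred = x + h * d                  # predictor: explicit Euler step
        d_next = ode_fn(x_pred, t_next)     # evaluation shared with the corrector
        x = x + h * 0.5 * (d + d_next)      # corrector: trapezoidal update
    return x
```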
null |
https://openreview.net/forum?id=hpYb4eUinX
|
@inproceedings{
tian2023boosting,
title={Boosting Verification of Deep Reinforcement Learning via Piece-Wise Linear Decision Neural Networks},
author={Jiaxu Tian and Dapeng Zhi and Si Liu and Peixin Wang and Cheng Chen and Min Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hpYb4eUinX}
}
|
Formally verifying deep reinforcement learning (DRL) systems suffers from both inaccurate verification results and limited scalability. The major obstacle lies in the large overestimation introduced inherently when training the inexplicable decision-making models, i.e., deep neural networks (DNNs), and then transforming them into easy-to-verify models. In this paper, we propose an inverse transform-then-train approach, which first encodes a DNN into an equivalent set of efficiently and tightly verifiable linear control policies and then optimizes them via reinforcement learning. We accompany our inverse approach with a novel neural network model called piece-wise linear decision neural networks (PLDNNs), which are compatible with most existing DRL training algorithms and achieve comparable performance against conventional DNNs. Our extensive experiments show that, compared to DNN-based DRL systems, PLDNN-based systems can be more efficiently and tightly verified with up to $438$ times speedup and a significant reduction in overestimation. In particular, even a complex $12$-dimensional DRL system can be efficiently verified with up to 7 times deeper computation steps.
|
Boosting Verification of Deep Reinforcement Learning via Piece-Wise Linear Decision Neural Networks
|
[
"Jiaxu Tian",
"Dapeng Zhi",
"Si Liu",
"Peixin Wang",
"Cheng Chen",
"Min Zhang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=hoyL1Ypjoo
|
@inproceedings{
shi2023macro,
title={Macro Placement by Wire-Mask-Guided Black-Box Optimization},
author={Yunqi Shi and Ke Xue and Lei Song and Chao Qian},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hoyL1Ypjoo}
}
|
The development of very large-scale integration (VLSI) technology has posed new challenges for electronic design automation (EDA) techniques in chip floorplanning. During this process, macro placement is an important subproblem, which tries to determine the positions of all macros with the aim of minimizing half-perimeter wirelength (HPWL) and avoiding overlapping. Previous methods include packing-based, analytical and reinforcement learning methods. In this paper, we propose a new black-box optimization (BBO) framework (called WireMask-BBO) for macro placement, by using a wire-mask-guided greedy procedure for objective evaluation. Equipped with different BBO algorithms, WireMask-BBO empirically achieves significant improvements over previous methods, i.e., achieves significantly shorter HPWL by using much less time. Furthermore, it can fine-tune existing placements by treating them as initial solutions, which can bring up to 50% improvement in HPWL. WireMask-BBO has the potential to significantly improve the quality and efficiency of chip floorplanning, which makes it appealing to researchers and practitioners in EDA and will also promote the application of BBO. Our code is available at https://github.com/lamda-bbo/WireMask-BBO.
|
Macro Placement by Wire-Mask-Guided Black-Box Optimization
|
[
"Yunqi Shi",
"Ke Xue",
"Lei Song",
"Chao Qian"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
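The WireMask-BBO record above optimizes half-perimeter wirelength (HPWL), the sum over nets of the half-perimeter of each net's bounding box. A small sketch of that objective, with illustrative variable names that are not from the paper's code:

```python
def hpwl(nets, positions):
    """Half-perimeter wirelength, the objective WireMask-BBO minimizes.

    `nets` maps a net name to the macros/pins it connects; `positions`
    maps each macro/pin to its (x, y) placement. Both names are
    assumptions for this sketch.
    """
    total = 0.0
    for pins in nets.values():
        xs = [positions[p][0] for p in pins]
        ys = [positions[p][1] for p in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# one net spanning three macros: half-perimeter = (4-0) + (5-0) = 9
print(hpwl({"n1": ["a", "b", "c"]}, {"a": (0, 0), "b": (4, 1), "c": (2, 5)}))
```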
null |
https://openreview.net/forum?id=hn1oJO7lg6
|
@inproceedings{
padmanabhan2023computing,
title={Computing Approximate $\ell_p$ Sensitivities},
author={Swati Padmanabhan and David Woodruff and Qiuyi Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hn1oJO7lg6}
}
|
Recent works in dimensionality reduction for regression tasks have introduced the notion of sensitivity, an estimate of the importance of a specific datapoint in a dataset, offering provable guarantees on the quality of the approximation after removing low-sensitivity datapoints via subsampling. However, fast algorithms for approximating sensitivities, which we show is equivalent to approximate regression, are known for only the $\ell_2$ setting, in which they are popularly termed leverage scores. In this work, we provide the first efficient algorithms for approximating $\ell_p$ sensitivities and other summary statistics of a given matrix. In particular, for a given $n \times d$ matrix, we compute an $\alpha$-approximation to its $\ell_1$ sensitivities at the cost of $n/\alpha$ sensitivity computations. For estimating the total $\ell_p$ sensitivity (i.e., the sum of $\ell_p$ sensitivities), we provide an algorithm based on importance sampling of $\ell_p$ Lewis weights, which computes a constant factor approximation at the cost of roughly $\sqrt{d}$ sensitivity computations, with no polynomial dependence on $n$. Furthermore, we estimate the maximum $\ell_1$ sensitivity up to a $\sqrt{d}$ factor in $O(d)$ sensitivity computations. We also generalize these results to $\ell_p$ norms. Lastly, we experimentally show that for a wide class of structured matrices in real-world datasets, the total sensitivity can be quickly approximated and is significantly smaller than the theoretical prediction, demonstrating that real-world datasets have on average low intrinsic effective dimensionality.
|
Computing Approximate ℓ_p Sensitivities
|
[
"Swati Padmanabhan",
"David Woodruff",
"Qiuyi Zhang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
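For the record above: in the $\ell_2$ case, sensitivities are exactly leverage scores, $s_i = a_i^\top (A^\top A)^{+} a_i$, and their sum equals the rank of $A$. A short exact computation to pin down the definition; the paper's contribution is fast *approximation* of the $\ell_p$ analogue, which this sketch does not attempt.

```python
import numpy as np

def leverage_scores(A):
    """Exact l2 sensitivities (leverage scores) of the rows of A:
    s_i = max_x (a_i^T x)^2 / ||Ax||_2^2 = a_i^T (A^T A)^+ a_i."""
    G_pinv = np.linalg.pinv(A.T @ A)
    return np.einsum("ij,jk,ik->i", A, G_pinv, A)

A = np.random.default_rng(1).normal(size=(100, 5))
print(leverage_scores(A).sum())  # total l2 sensitivity == rank(A) = 5
```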
null |
https://openreview.net/forum?id=hlkhPdhuAO
|
@inproceedings{
zhang2023globalcorrelated,
title={Global-correlated 3D-decoupling Transformer for Clothed Avatar Reconstruction},
author={Zechuan Zhang and Li Sun and Zongxin Yang and Ling Chen and Yi Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hlkhPdhuAO}
}
|
Reconstructing 3D clothed human avatars from single images is a challenging task, especially when encountering complex poses and loose clothing. Current methods exhibit limitations in performance, largely attributable to their dependence on insufficient 2D image features and inconsistent query methods. Owing to this, we present the Global-correlated 3D-decoupling Transformer for clothed Avatar reconstruction (GTA), a novel transformer-based architecture that reconstructs clothed human avatars from monocular images. Our approach leverages transformer architectures by utilizing a Vision Transformer model as an encoder for capturing global-correlated image features. Subsequently, our innovative 3D-decoupling decoder employs cross-attention to decouple tri-plane features, using learnable embeddings as queries for cross-plane generation. To effectively enhance feature fusion with the tri-plane 3D feature and human body prior, we propose a hybrid prior fusion strategy combining spatial and prior-enhanced queries, leveraging the benefits of spatial localization and human body prior knowledge. Comprehensive experiments on CAPE and THuman2.0 datasets illustrate that our method outperforms state-of-the-art approaches in both geometry and texture reconstruction, exhibiting high robustness to challenging poses and loose clothing, and producing higher-resolution textures. Codes are available at https://github.com/River-Zhang/GTA.
|
Global-correlated 3D-decoupling Transformer for Clothed Avatar Reconstruction
|
[
"Zechuan Zhang",
"Li Sun",
"Zongxin Yang",
"Ling Chen",
"Yi Yang"
] |
Conference
|
poster
|
2309.13524
|
[
"https://github.com/river-zhang/gta"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=hkPn7M9k1W
|
@inproceedings{
nowak2023fantastic,
title={Fantastic Weights and How to Find Them: Where to Prune in Dynamic Sparse Training},
author={Aleksandra Nowak and Bram Grooten and Decebal Constantin Mocanu and Jacek Tabor},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hkPn7M9k1W}
}
|
Dynamic Sparse Training (DST) is a rapidly evolving area of research that seeks to optimize the sparse initialization of a neural network by adapting its topology during training. It has been shown that under specific conditions, DST is able to outperform dense models. The key components of this framework are the pruning and growing criteria, which are repeatedly applied during the training process to adjust the network’s sparse connectivity. While the growing criterion's impact on DST performance is relatively well studied, the influence of the pruning criterion remains overlooked. To address this issue, we design and perform an extensive empirical analysis of various pruning criteria to better understand their impact on the dynamics of DST solutions. Surprisingly, we find that most of the studied methods yield similar results. The differences become more significant in the low-density regime, where the best performance is predominantly given by the simplest technique: magnitude-based pruning.
|
Fantastic Weights and How to Find Them: Where to Prune in Dynamic Sparse Training
|
[
"Aleksandra Nowak",
"Bram Grooten",
"Decebal Constantin Mocanu",
"Jacek Tabor"
] |
Conference
|
poster
|
2306.12230
|
[
"https://github.com/alooow/fantastic_weights_paper"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
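The DST record above studies prune-and-grow updates to a sparse topology. A minimal sketch of one such update on a flattened weight vector, using the magnitude-based pruning criterion the paper finds hardest to beat; random regrowth below is an arbitrary stand-in for whichever growing criterion is paired with it.

```python
import numpy as np

def dst_prune_and_grow(w, mask, frac=0.1, rng=None):
    """One topology update of Dynamic Sparse Training (sketch only)."""
    rng = np.random.default_rng() if rng is None else rng
    active = np.flatnonzero(mask)
    k = int(frac * active.size)
    # prune: drop the k smallest-magnitude active weights
    drop = active[np.argsort(np.abs(w[active]))[:k]]
    mask[drop] = 0
    w[drop] = 0.0
    # grow: reactivate k random inactive connections, initialized at zero
    grow = rng.choice(np.flatnonzero(mask == 0), size=k, replace=False)
    mask[grow] = 1
    return w, mask
```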
null |
https://openreview.net/forum?id=hiwF7aG1dt
|
@inproceedings{
fu2023iteratively,
title={Iteratively Learn Diverse Strategies with State Distance Information},
author={Wei Fu and Weihua Du and Jingwei Li and Sunli Chen and Jingzhao Zhang and Yi Wu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hiwF7aG1dt}
}
|
In complex reinforcement learning (RL) problems, policies with similar rewards may have substantially different behaviors. It remains a fundamental challenge to optimize rewards while also discovering as many *diverse* strategies as possible, which can be crucial in many practical applications. Our study examines two design choices for tackling this challenge, i.e., *diversity measure* and *computation framework*. First, we find that with existing diversity measures, visually indistinguishable policies can still yield high diversity scores. To accurately capture the behavioral difference, we propose to incorporate the state-space distance information into the diversity measure. In addition, we examine two common computation frameworks for this problem, i.e., population-based training (PBT) and iterative learning (ITR). We show that although PBT is the precise problem formulation, ITR can achieve comparable diversity scores with higher computation efficiency, leading to improved solution quality in practice. Based on our analysis, we further combine ITR with two tractable realizations of the state-distance-based diversity measures and develop a novel diversity-driven RL algorithm, *State-based Intrinsic-reward Policy Optimization* (SIPO), with provable convergence properties. We empirically examine SIPO across three domains from robot locomotion to multi-agent games. In all of our testing environments, SIPO consistently produces strategically diverse and human-interpretable policies that cannot be discovered by existing baselines.
|
Iteratively Learn Diverse Strategies with State Distance Information
|
[
"Wei Fu",
"Weihua Du",
"Jingwei Li",
"Sunli Chen",
"Jingzhao Zhang",
"Yi Wu"
] |
Conference
|
poster
|
2310.14509
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
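The SIPO record above proposes diversity measures built on state-space distance. One plausible reading of the core idea, as an intrinsic-reward sketch: reward states that are far from anything previously learned policies have visited. The paper's tractable realizations differ; this is only for orientation.

```python
import numpy as np

def state_distance_bonus(s, archive_states,
                         d=lambda a, b: np.linalg.norm(a - b)):
    """Intrinsic reward sketch for iterative diversity-seeking RL.

    `archive_states` is a list of state trajectories from earlier
    policies; the bonus is the distance to the closest archived state.
    """
    if not archive_states:
        return 0.0
    return min(min(d(s, x) for x in traj) for traj in archive_states)
```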
null |
https://openreview.net/forum?id=hiQG8qGxso
|
@inproceedings{
jain2023testtime,
title={Test-Time Amendment with a Coarse Classifier for Fine-Grained Classification},
author={Kanishk Jain and Shyamgopal Karthik and Vineet Gandhi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hiQG8qGxso}
}
|
We investigate the problem of reducing mistake severity for fine-grained classification. Fine-grained classification can be challenging, mainly due to the requirement of knowledge or domain expertise for accurate annotation. However, humans are particularly adept at performing coarse classification as it requires relatively low levels of expertise. To this end, we present a novel approach for Post-Hoc Correction called Hierarchical Ensembles (HiE) that utilizes label hierarchy to improve the performance of fine-grained classification at test-time using the coarse-grained predictions. By only requiring the parents of leaf nodes, our method significantly reduces avg. mistake severity while improving top-1 accuracy on the iNaturalist-19 and tieredImageNet-H datasets, achieving a new state-of-the-art on both benchmarks. We also investigate the efficacy of our approach in the semi-supervised setting. Our approach brings notable gains in top-1 accuracy while significantly decreasing the severity of mistakes as training data decreases for the fine-grained classes. The simplicity and post-hoc nature of HiE renders it practical to be used with any off-the-shelf trained model to improve its predictions further.
|
Test-Time Amendment with a Coarse Classifier for Fine-Grained Classification
|
[
"Kanishk Jain",
"Shyamgopal Karthik",
"Vineet Gandhi"
] |
Conference
|
poster
|
2302.00368
|
[
"https://github.com/kanji95/Hierarchical-Ensembles"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
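The HiE record above amends fine-grained predictions with a coarse classifier through the label hierarchy. A plausible minimal form of that amendment, reweighting each fine class by the coarse posterior of its parent; this is a sketch of the idea, not the paper's exact combination rule.

```python
import numpy as np

def hierarchical_ensemble(p_fine, p_coarse, parent):
    """Post-hoc amendment of fine predictions with a coarse head (sketch).

    `parent[i]` is the coarse class of fine class i; both arrays are
    illustrative names, not from the paper's code.
    """
    amended = p_fine * p_coarse[parent]   # broadcast coarse mass onto children
    return amended / amended.sum()

p_fine = np.array([0.5, 0.3, 0.2])        # fine classes 0, 1, 2
p_coarse = np.array([0.2, 0.8])           # coarse classes A, B
parent = np.array([0, 1, 1])              # 0 -> A; 1, 2 -> B
# the confident coarse head flips the argmax from class 0 to class 1
print(hierarchical_ensemble(p_fine, p_coarse, parent))
```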
null |
https://openreview.net/forum?id=hiOUySN0ub
|
@inproceedings{
yi2023learning,
title={Learning Topology-Agnostic {EEG} Representations with Geometry-Aware Modeling},
author={Ke Yi and Yansen Wang and Kan Ren and Dongsheng Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hiOUySN0ub}
}
|
Large-scale pre-training has shown great potential to enhance models on downstream tasks in vision and language. Developing similar techniques for scalp electroencephalogram (EEG) data is appealing since unlabelled data is plentiful. Meanwhile, various sampling channel selections and inherent structural and spatial information bring challenges and avenues to improve existing pre-training strategies further. In order to break boundaries between different EEG resources and facilitate cross-dataset EEG pre-training, we propose to map all kinds of channel selections to a unified topology. We further introduce MMM, a pre-training framework with Multi-dimensional position encoding, Multi-level channel hierarchy, and Multi-stage pre-training strategy built on the unified topology to obtain topology-agnostic representations. Experiments demonstrate that our approach yields impressive improvements over previous state-of-the-art techniques on emotion recognition benchmark datasets.
|
Learning Topology-Agnostic EEG Representations with Geometry-Aware Modeling
|
[
"Ke Yi",
"Yansen Wang",
"Kan Ren",
"Dongsheng Li"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=hgLMht2Z3L
|
@inproceedings{
zhu2023path,
title={Path following algorithms for $\ell_2$-regularized $M$-estimation with approximation guarantee},
author={Yunzhang Zhu and Renxiong Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hgLMht2Z3L}
}
|
Many modern machine learning algorithms are formulated as regularized M-estimation problems, in which a regularization (tuning) parameter controls a trade-off between model fit to the training data and model complexity. To select the ``best'' tuning parameter value that achieves a good trade-off, an approximated solution path needs to be computed. In practice, this is often done through selecting a grid of tuning parameter values and solving the regularized problem at the selected grid points. However, given any desired level of accuracy, it is often not clear how to choose the grid points and also how accurately one should solve the regularized problems at the selected grid points, both of which can greatly impact the overall amount of computation. In the context of the $\ell_2$-regularized $M$-estimation problem, we propose a novel grid point selection scheme and an adaptive stopping criterion for any given optimization algorithm that produces an approximated solution path with an approximation error guarantee. Theoretically, we prove that the proposed solution path can approximate the exact solution path to an arbitrary level of accuracy, while saving the overall computation as much as possible. Numerical results also corroborate our theoretical analysis.
|
Path following algorithms for ℓ_2-regularized M-estimation with approximation guarantee
|
[
"Yunzhang Zhu",
"Renxiong Liu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=hcXDbbzgoh
|
@inproceedings{
garibbo2023taylor,
title={Taylor {TD}-learning},
author={Michele Garibbo and Maxime Robeyns and Laurence Aitchison},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hcXDbbzgoh}
}
|
Many reinforcement learning approaches rely on temporal-difference (TD) learning to learn a critic.
However, TD-learning updates can be high variance.
Here, we introduce a model-based RL framework, Taylor TD, which reduces this variance in continuous state-action settings.
Taylor TD uses a first-order Taylor series expansion of TD updates.
This expansion allows Taylor TD to analytically integrate over stochasticity in the action-choice, and some stochasticity in the state distribution for the initial state and action of each TD update.
We include theoretical and empirical evidence that Taylor TD updates are indeed lower variance than standard TD updates.
Additionally, we show Taylor TD has the same stable learning guarantees as standard TD-learning with linear function approximation under a reasonable assumption.
Next, we combine Taylor TD with the TD3 algorithm, forming TaTD3.
We show TaTD3 performs as well, if not better, than several state-of-the art model-free and model-based baseline algorithms on a set of standard benchmark tasks.
|
Taylor TD-learning
|
[
"Michele Garibbo",
"Maxime Robeyns",
"Laurence Aitchison"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=haniyY7zm1
|
@inproceedings{
azizian2023exact,
title={Exact Generalization Guarantees for (Regularized) Wasserstein Distributionally Robust Models},
author={Wa{\"\i}ss Azizian and Franck Iutzeler and J{\'e}r{\^o}me Malick},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=haniyY7zm1}
}
|
Wasserstein distributionally robust estimators have emerged as powerful models for prediction and decision-making under uncertainty. These estimators provide attractive generalization guarantees: the robust objective obtained from the training distribution is an exact upper bound on the true risk with high probability. However, existing guarantees either suffer from the curse of dimensionality, are restricted to specific settings, or lead to spurious error terms. In this paper, we show that these generalization guarantees actually hold on general classes of models, do not suffer from the curse of dimensionality, and can even cover distribution shifts at testing. We also prove that these results carry over to the newly-introduced regularized versions of Wasserstein distributionally robust problems.
|
Exact Generalization Guarantees for (Regularized) Wasserstein Distributionally Robust Models
|
[
"Waïss Azizian",
"Franck Iutzeler",
"Jérôme Malick"
] |
Conference
|
poster
|
2305.17076
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=haHIji0yFt
|
@inproceedings{
xu2023se,
title={$SE(3)$ Equivariant Convolution and Transformer in Ray Space},
author={Yinshuang Xu and Jiahui Lei and Kostas Daniilidis},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=haHIji0yFt}
}
|
3D reconstruction and novel view rendering can greatly benefit from geometric priors when the input views are not sufficient in terms of coverage and inter-view baselines. Deep learning of geometric priors from 2D images requires each image to be represented in a $2D$ canonical frame and the prior to be learned in a given or learned $3D$ canonical frame. In this paper, given only the relative poses of the cameras, we show how to learn priors from multiple views equivariant to coordinate frame transformations by proposing an $SE(3)$-equivariant convolution and transformer in the space of rays in 3D. We model the ray space as a homogeneous space of $SE(3)$ and introduce the $SE(3)$-equivariant convolution in ray space. Depending on the output domain of the convolution, we present convolution-based $SE(3)$-equivariant maps from ray space to ray space and to $\mathbb{R}^3$. Our mathematical framework allows us to go beyond convolution to $SE(3)$-equivariant attention in the ray space. We showcase how to tailor and adapt the equivariant convolution and transformer in the tasks of equivariant $3D$ reconstruction and equivariant neural rendering from multiple views. We demonstrate $SE(3)$-equivariance by obtaining robust results in roto-translated datasets without performing transformation augmentation.
|
SE(3) Equivariant Convolution and Transformer in Ray Space
|
[
"Yinshuang Xu",
"Jiahui Lei",
"Kostas Daniilidis"
] |
Conference
|
spotlight
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=hXevuspQnX
|
@inproceedings{
ren2023insactor,
title={InsActor: Instruction-driven Physics-based Characters},
author={Jiawei Ren and Mingyuan Zhang and Cunjun Yu and Xiao Ma and Liang Pan and Ziwei Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hXevuspQnX}
}
|
Generating animation of physics-based characters with intuitive control has long been a desirable task with numerous applications. However, generating physically simulated animations that reflect high-level human instructions remains a difficult problem due to the complexity of physical environments and the richness of human language.
In this paper, we present $\textbf{InsActor}$, a principled generative framework that leverages recent advancements in diffusion-based human motion models to produce instruction-driven animations of physics-based characters.
Our framework empowers InsActor to capture complex relationships between high-level human instructions and character motions by employing diffusion policies for flexibly conditioned motion planning.
To overcome invalid states and infeasible state transitions in planned motions, InsActor discovers low-level skills and maps plans to latent skill sequences in a compact latent space.
Extensive experiments demonstrate that InsActor achieves state-of-the-art results on various tasks, including instruction-driven motion generation and instruction-driven waypoint heading. Notably, the ability of InsActor to generate physically simulated animations using high-level human instructions makes it a valuable tool, particularly in executing long-horizon tasks with a rich set of instructions. Our project page is available at [jiawei-ren.github.io/projects/insactor/index.html](https://jiawei-ren.github.io/projects/insactor/index.html)
|
InsActor: Instruction-driven Physics-based Characters
|
[
"Jiawei Ren",
"Mingyuan Zhang",
"Cunjun Yu",
"Xiao Ma",
"Liang Pan",
"Ziwei Liu"
] |
Conference
|
poster
|
2312.17135
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=hWPNYWkYPN
|
@inproceedings{
du2023a,
title={A new perspective on building efficient and expressive 3D equivariant graph neural networks},
author={weitao Du and Yuanqi Du and Limei Wang and Dieqiao Feng and Guifeng Wang and Shuiwang Ji and Carla P Gomes and Zhi-Ming Ma},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hWPNYWkYPN}
}
|
Geometric deep learning enables the encoding of physical symmetries in modeling 3D objects. Despite rapid progress in encoding 3D symmetries into Graph Neural Networks (GNNs), a comprehensive evaluation of the expressiveness of these network architectures through a local-to-global analysis is still lacking. In this paper, we propose a local hierarchy of 3D isomorphism to evaluate the expressive power of equivariant GNNs and investigate the process of representing global geometric information from local patches. Our work leads to two crucial modules for designing expressive and efficient geometric GNNs; namely local substructure encoding (\textbf{LSE}) and frame transition encoding (\textbf{FTE}). To demonstrate the applicability of our theory, we propose LEFTNet which effectively implements these modules and achieves state-of-the-art performance on both scalar-valued and vector-valued molecular property prediction tasks. We further point out future design space for 3D equivariant graph neural networks. Our codes are available at \url{https://github.com/yuanqidu/LeftNet}.
|
A new perspective on building efficient and expressive 3D equivariant graph neural networks
|
[
"weitao Du",
"Yuanqi Du",
"Limei Wang",
"Dieqiao Feng",
"Guifeng Wang",
"Shuiwang Ji",
"Carla P Gomes",
"Zhi-Ming Ma"
] |
Conference
|
poster
|
2304.04757
|
[
"https://github.com/yuanqidu/leftnet"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=hVVp8TXIPs
|
@inproceedings{
wasim2023hardware,
title={Hardware Resilience Properties of Text-Guided Image Classifiers},
author={Syed Talal Wasim and Kabila Haile Soboka and Abdulrahman Mahmoud and Salman Khan and David Brooks and Gu-Yeon Wei},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hVVp8TXIPs}
}
|
This paper presents a novel method to enhance the reliability of image classification models during deployment in the face of transient hardware errors. By utilizing enriched text embeddings derived from GPT-3 with question prompts per class and CLIP pretrained text encoder, we investigate their impact as an initialization for the classification layer. Our approach achieves a remarkable $5.5\times$ average increase in hardware reliability (and up to $14\times$) across various architectures in the most critical layer, with minimal accuracy drop ($0.3\%$ on average) compared to baseline PyTorch models. Furthermore, our method seamlessly integrates with any image classification backbone, showcases results across various network architectures, decreases parameter and FLOPs overhead, and follows a consistent training recipe. This research offers a practical and efficient solution to bolster the robustness of image classification models against hardware failures, with potential implications for future studies in this domain. Our code and models are released at https://github.com/TalalWasim/TextGuidedResilience.
|
Hardware Resilience Properties of Text-Guided Image Classifiers
|
[
"Syed Talal Wasim",
"Kabila Haile Soboka",
"Abdulrahman Mahmoud",
"Salman Khan",
"David Brooks",
"Gu-Yeon Wei"
] |
Conference
|
poster
|
2311.14062
|
[
"https://github.com/talalwasim/textguidedresilience"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
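The record above initializes a classification head from CLIP text embeddings of the class names. A minimal sketch of that initialization using the public `clip` package; the paper enriches prompts with GPT-3 generated questions per class, and the plain prompts and class names below are simplifications assumed for illustration.

```python
import clip  # https://github.com/openai/CLIP
import torch

model, _ = clip.load("ViT-B/32")
classes = ["golden retriever", "tabby cat", "red fox"]  # illustrative classes
with torch.no_grad():
    tokens = clip.tokenize([f"a photo of a {c}" for c in classes])
    text_emb = model.encode_text(tokens).float()
    text_emb /= text_emb.norm(dim=-1, keepdim=True)

# text-guided initialization of the final classification layer
head = torch.nn.Linear(text_emb.shape[1], len(classes), bias=False)
head.weight.data.copy_(text_emb)
```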
null |
https://openreview.net/forum?id=hVAla2O73O
|
@inproceedings{
ahmed2023a,
title={A Pseudo-Semantic Loss for Autoregressive Models with Logical Constraints},
author={Kareem Ahmed and Kai-Wei Chang and Guy Van den Broeck},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hVAla2O73O}
}
|
Neuro-symbolic AI bridges the gap between purely symbolic and neural approaches to learning. This often requires maximizing the likelihood of a symbolic constraint w.r.t. the neural network's output distribution. Such output distributions are typically assumed to be fully-factorized. This limits the applicability of neuro-symbolic learning to the more expressive auto-regressive distributions, e.g., transformers. Under such distributions, computing the likelihood of even simple constraints is #P-hard. Instead of attempting to enforce the constraint on the entire likelihood distribution, we propose to do so on a random, local approximation thereof. More precisely, we approximate the likelihood of the constraint with the pseudolikelihood of the constraint centered around a model sample. Our approach is factorizable, allowing us to reuse solutions to sub-problems---a main tenet for the efficient computation of neuro-symbolic losses. It also provides a local, high fidelity approximation of the likelihood: it exhibits low entropy and KL-divergence around the model sample. We tested our approach on Sudoku and shortest-path prediction cast as auto-regressive generation, and observe that we greatly improve upon the base model's ability to predict logically-consistent outputs. We also tested our approach on the task of detoxifying large language models. We observe that using a simple constraint disallowing a list of toxic words, we are able to steer the model's outputs away from toxic generations, achieving SoTA compared to previous approaches.
|
A Pseudo-Semantic Loss for Autoregressive Models with Logical Constraints
|
[
"Kareem Ahmed",
"Kai-Wei Chang",
"Guy Van den Broeck"
] |
Conference
|
poster
|
2312.03905
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
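For orientation on the record above: the standard pseudolikelihood replaces a joint likelihood with a product of per-position conditionals, which is what makes the loss factorizable; the paper applies this to the constraint likelihood, centered around a model sample. The definitional statement:

```latex
% Pseudo-log-likelihood of y = (y_1, ..., y_n) under a model p:
% each conditional fixes all other positions, so the sum factorizes
% even when the joint likelihood does not.
\log \hat{p}(y) \;=\; \sum_{i=1}^{n}
  \log p\!\left(y_i \mid y_1, \ldots, y_{i-1}, y_{i+1}, \ldots, y_n\right)
```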
null |
https://openreview.net/forum?id=hV52oj0Sik
|
@inproceedings{
wu2023a,
title={A Hierarchical Training Paradigm for Antibody Structure-sequence Co-design},
author={Fang Wu and Stan Z. Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hV52oj0Sik}
}
|
Therapeutic antibodies are an essential and rapidly flourishing drug modality. The binding specificity between antibodies and antigens is decided by complementarity-determining regions (CDRs) at the tips of these Y-shaped proteins. In this paper, we propose a \textbf{h}ierarchical \textbf{t}raining \textbf{p}aradigm (HTP) for the antibody sequence-structure co-design. HTP consists of four levels of training stages, each corresponding to a specific protein modality within a particular protein domain. Through carefully crafted tasks in different stages, HTP seamlessly and effectively integrates geometric graph neural networks (GNNs) with large-scale protein language models to excavate evolutionary information from not only geometric structures but also vast antibody and non-antibody sequence databases, which determines ligand binding pose and strength. Empirical experiments show HTP sets the new state-of-the-art performance in the co-design problem as well as the fix-backbone design. Our research offers a hopeful path to unleash the potential of deep generative architectures and seeks to illuminate the way forward for the antibody sequence and structure co-design challenge.
|
A Hierarchical Training Paradigm for Antibody Structure-sequence Co-design
|
[
"Fang Wu",
"Stan Z. Li"
] |
Conference
|
poster
|
2311.16126
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=hSkEcIFi3o
|
@inproceedings{
li2023adversarial,
title={Adversarial Examples Are Not Real Features},
author={Ang Li and Yifei Wang and Yiwen Guo and Yisen Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hSkEcIFi3o}
}
|
The existence of adversarial examples has been a mystery for years and attracted much interest. A well-known theory by \citet{ilyas2019adversarial} explains adversarial vulnerability from a data perspective by showing that one can extract non-robust features from adversarial examples and these features alone are useful for classification. However, the explanation remains quite counter-intuitive since non-robust features are mostly noise features to humans. In this paper, we re-examine the theory from a larger context by incorporating multiple learning paradigms. Notably, we find that contrary to their good usefulness under supervised learning, non-robust features attain poor usefulness when transferred to other self-supervised learning paradigms, such as contrastive learning, masked image modeling, and diffusion models. It reveals that non-robust features are not really as useful as robust or natural features that enjoy good transferability between these paradigms. Meanwhile, for robustness, we also show that naturally trained encoders from robust features are largely non-robust under AutoAttack. Our cross-paradigm examination suggests that the non-robust features are not really useful but more like paradigm-wise shortcuts, and robust features alone might be insufficient to attain reliable model robustness. Code is available at \url{https://github.com/PKU-ML/AdvNotRealFeatures}.
|
Adversarial Examples Are Not Real Features
|
[
"Ang Li",
"Yifei Wang",
"Yiwen Guo",
"Yisen Wang"
] |
Conference
|
poster
|
2310.18936
|
[
"https://github.com/pku-ml/advnotrealfeatures"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=hSTaTBIUCj
|
@inproceedings{
wu2023imagine,
title={Imagine That! Abstract-to-Intricate Text-to-Image Synthesis with Scene Graph Hallucination Diffusion},
author={Shengqiong Wu and Hao Fei and Hanwang Zhang and Tat-Seng Chua},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hSTaTBIUCj}
}
|
In this work, we investigate the task of text-to-image (T2I) synthesis under the abstract-to-intricate setting, i.e., generating intricate visual content from simple abstract text prompts. Inspired by human imagination intuition, we propose a novel scene-graph hallucination (SGH) mechanism for effective abstract-to-intricate T2I synthesis. SGH carries out scene hallucination by expanding the initial scene graph (SG) of the input prompt with more feasible specific scene structures, in which the structured semantic representation of SG ensures high controllability of the intrinsic scene imagination. To approach the T2I synthesis, we deliberately build an SG-based hallucination diffusion system. First, we implement the SGH module based on the discrete diffusion technique, which evolves the SG structure by iteratively adding new scene elements. Then, we utilize another continuous-state diffusion model as the T2I synthesizer, where the overt image-generating process is navigated by the underlying semantic scene structure induced from the SGH module. On the benchmark COCO dataset, our system outperforms the existing best-performing T2I model by a significant margin, especially improving on the abstract-to-intricate T2I generation. Further in-depth analyses reveal how our methods advance.
|
Imagine That! Abstract-to-Intricate Text-to-Image Synthesis with Scene Graph Hallucination Diffusion
|
[
"Shengqiong Wu",
"Hao Fei",
"Hanwang Zhang",
"Tat-Seng Chua"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=hOOOvOMok5
|
@inproceedings{
zheng2023rubiks,
title={Rubik's Cube: High-Order Channel Interactions with a Hierarchical Receptive Field},
author={Naishan Zheng and Man Zhou and Chong Zhou and Chen Change Loy},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hOOOvOMok5}
}
|
Image restoration techniques, spanning from the convolution to the transformer paradigm, have demonstrated robust spatial representation capabilities to deliver high-quality performance. Yet, many of these methods, such as convolution and the Feed Forward Network (FFN) structure of transformers, primarily leverage the basic first-order channel interactions and have not maximized the potential benefits of higher-order modeling. To address this limitation, our research dives into understanding relationships within the channel dimension and introduces a simple yet efficient, high-order channel-wise operator tailored for image restoration. Instead of merely mimicking high-order spatial interaction, our approach offers several added benefits: Efficiency: It adheres to the zero-FLOP and zero-parameter principle, using a spatial-shifting mechanism across channel-wise groups. Simplicity: It turns the favorable channel interaction and aggregation capabilities into element-wise multiplications and convolution units with a $1 \times 1$ kernel. Our new formulation expands the first-order channel-wise interactions seen in previous works to arbitrary high orders, generating a hierarchical receptive field akin to a Rubik's cube through the combined action of shifting and interactions. Furthermore, our proposed Rubik's cube convolution is a flexible operator that can be incorporated into existing image restoration networks, serving as a drop-in replacement for the standard convolution unit with less parameter overhead. We conducted experiments across various low-level vision tasks, including image denoising, low-light image enhancement, guided image super-resolution, and image de-blurring. The results consistently demonstrate that our Rubik's cube operator enhances performance across all tasks. Code is publicly available at https://github.com/zheng980629/RubikCube.
|
Rubik's Cube: High-Order Channel Interactions with a Hierarchical Receptive Field
|
[
"Naishan Zheng",
"Man Zhou",
"Chong Zhou",
"Chen Change Loy"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
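The record above builds on a zero-FLOP, zero-parameter spatial shift applied per channel group. A minimal PyTorch sketch of that shifting mechanism; the group count and shift directions below are illustrative choices, not the paper's configuration.

```python
import torch

def channel_group_shift(x, groups=4):
    """Split channels into groups and roll each group one pixel in a
    different direction: no FLOPs, no parameters (sketch only)."""
    chunks = x.chunk(groups, dim=1)                 # split along channels
    shifts = [(0, 1), (0, -1), (1, 0), (-1, 0)]     # right, left, down, up
    out = [torch.roll(c, shifts=s, dims=(2, 3)) for c, s in zip(chunks, shifts)]
    return torch.cat(out, dim=1)

x = torch.randn(1, 8, 16, 16)
y = channel_group_shift(x)   # same shape, channel groups spatially shifted
```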
null |
https://openreview.net/forum?id=hNpedVWwoe
|
@inproceedings{
zandieh2023near,
title={Near Optimal Reconstruction of Spherical Harmonic Expansions},
author={Amir Zandieh and Insu Han and Haim Avron},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hNpedVWwoe}
}
|
We propose an algorithm for robust recovery of the spherical harmonic expansion of functions defined on the $d$-dimensional unit sphere $\mathbb{S}^{d-1}$ using a near-optimal number of function evaluations. We show that for any $f\in L^2(\mathbb{S}^{d-1})$, the number of evaluations of $f$ needed to recover its degree-$q$ spherical harmonic expansion equals the dimension of the space of spherical harmonics of degree at most $q$, up to a logarithmic factor. Moreover, we develop a simple yet efficient kernel regression-based algorithm to recover degree-$q$ expansion of $f$ by only evaluating the function on uniformly sampled points on $\mathbb{S}^{d-1}$. Our algorithm is built upon the connections between spherical harmonics and Gegenbauer polynomials. Unlike the prior results on fast spherical harmonic transform, our proposed algorithm works efficiently using a nearly optimal number of samples in any dimension $d$. Furthermore, we illustrate the empirical performance of our algorithm on numerical examples.
|
Near Optimal Reconstruction of Spherical Harmonic Expansions
|
[
"Amir Zandieh",
"Insu Han",
"Haim Avron"
] |
Conference
|
poster
|
2202.12995
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
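The record above leans on the connection between spherical harmonics and Gegenbauer polynomials: a dot-product kernel built from $C_k^{(\alpha)}$ with $\alpha=(d-2)/2$ spans exactly the harmonics up to degree $q$. A small kernel-regression sketch under that connection; the uniform coefficient choice $c_k = 1$ and the regularization are illustrative, not the paper's algorithm.

```python
import numpy as np
from scipy.special import eval_gegenbauer

def degree_q_kernel(X, Y, q, d):
    """Dot-product kernel spanned by spherical harmonics up to degree q."""
    G = X @ Y.T                     # pairwise inner products of unit vectors
    alpha = (d - 2) / 2
    return sum(eval_gegenbauer(k, alpha, G) for k in range(q + 1))

d, q, n = 3, 4, 200
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # uniform-ish samples on S^2
y = X[:, 0] * X[:, 1]                              # a degree-2 harmonic
K = degree_q_kernel(X, X, q, d)
coef = np.linalg.solve(K + 1e-8 * np.eye(n), y)    # kernel (ridge) regression
```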
null |
https://openreview.net/forum?id=hN4qpvGzWn
|
@inproceedings{
wu2023game,
title={Game Solving with Online Fine-Tuning},
author={Ti-Rong Wu and Hung Guei and Ting Han Wei and Chung-Chin Shih and Jui-Te Chin and I-Chen Wu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hN4qpvGzWn}
}
|
Game solving is a similar, yet more difficult task than mastering a game. Solving a game typically means to find the game-theoretic value (outcome given optimal play), and optionally a full strategy to follow in order to achieve that outcome. The AlphaZero algorithm has demonstrated super-human level play, and its powerful policy and value predictions have also served as heuristics in game solving. However, to solve a game and obtain a full strategy, a winning response must be found for all possible moves by the losing player. This includes very poor lines of play from the losing side, which the AlphaZero self-play process will not encounter. AlphaZero-based heuristics can be highly inaccurate when evaluating these out-of-distribution positions, which occur throughout the entire search. To address this issue, this paper investigates applying online fine-tuning while searching and proposes two methods to learn tailor-designed heuristics for game solving. Our experiments show that using online fine-tuning can solve a series of challenging 7x7 Killall-Go problems, using only 23.54\% of computation time compared to the baseline without online fine-tuning. Results suggest that the savings scale with problem size. Our method can further be extended to any tree search algorithm for problem solving. Our code is available at https://rlg.iis.sinica.edu.tw/papers/neurips2023-online-fine-tuning-solver.
|
Game Solving with Online Fine-Tuning
|
[
"Ti-Rong Wu",
"Hung Guei",
"Ting Han Wei",
"Chung-Chin Shih",
"Jui-Te Chin",
"I-Chen Wu"
] |
Conference
|
poster
|
2311.07178
|
[
"https://github.com/rlglab/online-fine-tuning-solver"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=hLoanbRrjM
|
@inproceedings{
oldfield2023parts,
title={Parts of Speech{\textendash}Grounded Subspaces in Vision-Language Models},
author={James Oldfield and Christos Tzelepis and Yannis Panagakis and Mihalis Nicolaou and Ioannis Patras},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hLoanbRrjM}
}
|
Latent image representations arising from vision-language models have proved immensely useful for a variety of downstream tasks. However, their utility is limited by their entanglement with respect to different visual attributes. For instance, recent work has shown that CLIP image representations are often biased toward specific visual properties (such as objects or actions) in an unpredictable manner. In this paper, we propose to separate representations of the different visual modalities in CLIP’s joint vision-language space by leveraging the association between parts of speech and specific visual modes of variation (e.g. nouns relate to objects, adjectives describe appearance). This is achieved by formulating an appropriate component analysis model that learns subspaces capturing variability corresponding to a specific part of speech, while jointly minimising variability to the rest. Such a subspace yields disentangled representations of the different visual properties of an image or text in closed form while respecting the underlying geometry of the manifold on which the representations lie. What’s more, we show the proposed model additionally facilitates learning subspaces corresponding to specific visual appearances (e.g. artists’ painting styles), which enables the selective removal of entire visual themes from CLIP-based text-to-image synthesis. We validate the model both qualitatively, by visualising the subspace projections with a text-to-image model and by preventing the imitation of artists’ styles, and quantitatively, through class invariance metrics and improvements to baseline zero-shot classification.
|
Parts of Speech–Grounded Subspaces in Vision-Language Models
|
[
"James Oldfield",
"Christos Tzelepis",
"Yannis Panagakis",
"Mihalis Nicolaou",
"Ioannis Patras"
] |
Conference
|
poster
|
[
"https://github.com/james-oldfield/pos-subspaces"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=hLPJ7xLbNF
|
@inproceedings{
pan2023selfsupervised,
title={Self-Supervised Motion Magnification by Backpropagating Through Optical Flow},
author={Zhaoying Pan and Daniel Geng and Andrew Owens},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hLPJ7xLbNF}
}
|
This paper presents a simple, self-supervised method for magnifying subtle motions in video: given an input video and a magnification factor, we manipulate the video such that its new optical flow is scaled by the desired amount. To train our model, we propose a loss function that estimates the optical flow of the generated video and penalizes how far it deviates from the given magnification factor. Thus, training involves differentiating through a pretrained optical flow network. Since our model is self-supervised, we can further improve its performance through test-time adaptation, by finetuning it on the input video. It can also be easily extended to magnify the motions of only user-selected objects. Our approach avoids the need for synthetic magnification datasets that have been used to train prior learning-based approaches. Instead, it leverages the existing capabilities of off-the-shelf motion estimators. We demonstrate the effectiveness of our method through evaluations of both visual quality and quantitative metrics on a range of real-world and synthetic videos, and we show our method works for both supervised and unsupervised optical flow methods.
|
Self-Supervised Motion Magnification by Backpropagating Through Optical Flow
|
[
"Zhaoying Pan",
"Daniel Geng",
"Andrew Owens"
] |
Conference
|
poster
|
2311.17056
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
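The record above trains by backpropagating through a pretrained flow estimator: the generated video's flow should equal the input's flow scaled by the magnification factor. A sketch of that objective, where `flow_net` is an assumed differentiable flow estimator (e.g., a RAFT wrapper) and the L1 penalty is an illustrative choice; the paper's exact weighting may differ.

```python
import torch

def magnification_loss(flow_net, video_in, video_out, alpha):
    """Self-supervised magnification objective (sketch).

    `video_in`/`video_out` are (T, C, H, W) tensors; gradients flow
    through `flow_net` only on the generated video's flow.
    """
    loss = 0.0
    for t in range(video_in.shape[0] - 1):
        f_in = flow_net(video_in[t], video_in[t + 1]).detach()   # target flow
        f_out = flow_net(video_out[t], video_out[t + 1])         # differentiated
        loss = loss + torch.mean(torch.abs(f_out - alpha * f_in))
    return loss
```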
null |
https://openreview.net/forum?id=hJzEoQHfCe
|
@inproceedings{
coleman2023unified,
title={Unified Embedding: Battle-Tested Feature Representations for Web-Scale {ML} Systems},
author={Benjamin Coleman and Wang-Cheng Kang and Matthew Fahrbach and Ruoxi Wang and Lichan Hong and Ed H. Chi and Derek Zhiyuan Cheng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hJzEoQHfCe}
}
|
Learning high-quality feature embeddings efficiently and effectively is critical for the performance of web-scale machine learning systems. A typical model ingests hundreds of features with vocabularies on the order of millions to billions of tokens. The standard approach is to represent each feature value as a $d$-dimensional embedding, which introduces hundreds of billions of parameters for extremely high-cardinality features. This bottleneck has led to substantial progress in alternative embedding algorithms. Many of these methods, however, make the assumption that each feature uses an independent embedding table. This work introduces a simple yet highly effective framework, Feature Multiplexing, where one single representation space is used for many different categorical features. Our theoretical and empirical analysis reveals that multiplexed embeddings can be decomposed into components from each constituent feature, allowing models to distinguish between features. We show that multiplexed representations give Pareto-optimal space-accuracy tradeoffs for three public benchmark datasets. Further, we propose a highly practical approach called Unified Embedding with three major benefits: simplified feature configuration, strong adaptation to dynamic data distributions, and compatibility with modern hardware. Unified embedding gives significant improvements in offline and online metrics compared to highly competitive baselines across five web-scale search, ads, and recommender systems, where it serves billions of users across the world in industry-leading products.
|
Unified Embedding: Battle-Tested Feature Representations for Web-Scale ML Systems
|
[
"Benjamin Coleman",
"Wang-Cheng Kang",
"Matthew Fahrbach",
"Ruoxi Wang",
"Lichan Hong",
"Ed H. Chi",
"Derek Zhiyuan Cheng"
] |
Conference
|
spotlight
|
2305.12102
|
[
""
] |
https://huggingface.co/papers/2305.12102
| 0 | 2 | 0 | 7 | 1 |
[] |
[] |
[] |
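The Unified Embedding record above multiplexes many categorical features into one shared table. A toy sketch of the idea: all features hash into the same rows, with a per-feature salt so collisions are uncorrelated across features. The class name, salting scheme, and multiply-mod hash are assumptions for illustration, not the paper's production design.

```python
import torch

class UnifiedEmbedding(torch.nn.Module):
    """One shared embedding table for many categorical features (sketch)."""

    def __init__(self, num_rows, dim, num_features):
        super().__init__()
        self.table = torch.nn.Embedding(num_rows, dim)
        self.salts = list(range(1, num_features + 1))  # per-feature hash salt

    def forward(self, feature_idx, values):
        # hypothetical multiply-mod hash into the shared row space
        rows = (values * self.salts[feature_idx]) % self.table.num_embeddings
        return self.table(rows)

emb = UnifiedEmbedding(num_rows=1000, dim=16, num_features=3)
v = torch.tensor([17, 42])     # raw categorical ids for feature 0
print(emb(0, v).shape)         # torch.Size([2, 16])
```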
null |
https://openreview.net/forum?id=hIGZujtOQv
|
@inproceedings{
sui2023unleashing,
title={Unleashing the Power of Graph Data Augmentation on Covariate Distribution Shift},
author={Yongduo Sui and Qitian Wu and Jiancan Wu and Qing Cui and Longfei Li and JUN ZHOU and Xiang Wang and Xiangnan He},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hIGZujtOQv}
}
|
The issue of distribution shifts is emerging as a critical concern in graph representation learning. From the perspective of invariant learning and stable learning, a recently well-established paradigm for out-of-distribution generalization, stable features of the graph are assumed to causally determine labels, while environmental features tend to be unstable and can lead to the two primary types of distribution shifts. The correlation shift is often caused by the spurious correlation between environmental features and labels that differs between the training and test data; the covariate shift often stems from the presence of new environmental features in test data. However, most strategies, such as invariant learning or graph augmentation, typically struggle with limited training environments or perturbed stable features, thus exposing limitations in handling the problem of covariate shift. To address this challenge, we propose a simple-yet-effective data augmentation strategy, Adversarial Invariant Augmentation (AIA), to handle the covariate shift on graphs. Specifically, given the training data, AIA aims to extrapolate and generate new environments, while concurrently preserving the original stable features during the augmentation process. Such a design equips the graph classification model with an enhanced capability to identify stable features in new environments, thereby effectively tackling the covariate shift in data. Extensive experiments with in-depth empirical analysis demonstrate the superiority of our approach. The implementation codes are publicly available at https://github.com/yongduosui/AIA.
|
Unleashing the Power of Graph Data Augmentation on Covariate Distribution Shift
|
[
"Yongduo Sui",
"Qitian Wu",
"Jiancan Wu",
"Qing Cui",
"Longfei Li",
"JUN ZHOU",
"Xiang Wang",
"Xiangnan He"
] |
Conference
|
poster
|
2211.02843
|
[
"https://github.com/yongduosui/aia"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=hI6EPhq70A
|
@inproceedings{
shahreza2023face,
title={Face Reconstruction from Facial Templates by Learning Latent Space of a Generator Network},
author={Hatef Otroshi Shahreza and S{\'e}bastien Marcel},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hI6EPhq70A}
}
|
In this paper, we focus on the template inversion attack against face recognition systems and propose a new method to reconstruct face images from facial templates. Within a generative adversarial network (GAN)-based framework, we learn a mapping from facial templates to the intermediate latent space of a pre-trained face generation network, from which we can generate high-resolution realistic reconstructed face images. We show that our proposed method can be applied in whitebox and blackbox attacks against face recognition systems. Furthermore, we evaluate the transferability of our attack when the adversary uses the reconstructed face image to impersonate the underlying subject in an attack against another face recognition system. Considering the adversary's knowledge and the target face recognition system, we define five different attacks and evaluate the vulnerability of state-of-the-art face recognition systems. Our experiments show that our proposed method achieves high success attack rates in whitebox and blackbox scenarios. Furthermore, the reconstructed face images are transferable and can be used to enter target face recognition systems with a different feature extractor model. We also explore important areas in the reconstructed face images that can fool the target face recognition system.
|
Face Reconstruction from Facial Templates by Learning Latent Space of a Generator Network
|
[
"Hatef Otroshi Shahreza",
"Sébastien Marcel"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=hHv3UuffXV
|
@inproceedings{
liu2023block,
title={Block Broyden's Methods for Solving Nonlinear Equations},
author={Chengchang Liu and Cheng Chen and Luo Luo and John C.S. Lui},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hHv3UuffXV}
}
|
This paper studies quasi-Newton methods for solving nonlinear equations. We propose block variants of both the good and bad Broyden's methods, which enjoy explicit local superlinear convergence rates. Our block good Broyden's method has a faster condition-number-free convergence rate than existing Broyden's methods because it takes advantage of multiple-rank modifications of the Jacobian estimator. On the other hand, our block bad Broyden's method directly estimates the inverse of the Jacobian, which provably reduces the computational cost of each iteration. Our theoretical results provide new insights into why the good Broyden's method outperforms the bad Broyden's method in most cases. The empirical results also demonstrate the superiority of our methods and validate our theoretical analysis.
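As a reference point for the discussion above, the following NumPy sketch implements the classical rank-one "good" Broyden iteration for solving $F(x)=0$; the paper's block variants apply multiple-rank corrections per step, which this sketch does not attempt.

```python
# Classical ("good") Broyden method: a secant, rank-one update of the
# Jacobian estimate B so that B_{k+1} dx = df after each step.
import numpy as np

def broyden_good(F, x0, B0, tol=1e-10, max_iter=100):
    x, B = x0.astype(float), B0.astype(float)
    fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        dx = np.linalg.solve(B, -fx)      # Newton-like step with estimate B
        x_new = x + dx
        fx_new = F(x_new)
        df = fx_new - fx
        B += np.outer(df - B @ dx, dx) / (dx @ dx)  # rank-one secant correction
        x, fx = x_new, fx_new
    return x

# e.g. F = lambda x: np.array([x[0]**2 + x[1] - 2, x[0] + x[1]**2 - 2])
# broyden_good(F, np.array([0.5, 0.5]), np.eye(2))  # converges near (1, 1)
```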
|
Block Broyden's Methods for Solving Nonlinear Equations
|
[
"Chengchang Liu",
"Cheng Chen",
"Luo Luo",
"John C.S. Lui"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=hHUZ5V9XFu
|
@inproceedings{
song2023equivariant,
title={Equivariant Flow Matching with Hybrid Probability Transport for 3D Molecule Generation},
author={Yuxuan Song and Jingjing Gong and Minkai Xu and Ziyao Cao and Yanyan Lan and Stefano Ermon and Hao Zhou and Wei-Ying Ma},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hHUZ5V9XFu}
}
|
The generation of 3D molecules requires simultaneously deciding the categorical features (atom types) and continuous features (atom coordinates). Deep generative models, especially Diffusion Models (DMs), have demonstrated effectiveness in generating feature-rich geometries. However, existing DMs typically suffer from unstable probability dynamics with inefficient sampling speed. In this paper, we introduce geometric flow matching, which enjoys the advantages of both equivariant modeling and stabilized probability dynamics. More specifically, we propose a hybrid probability path where the coordinates probability path is regularized by an equivariant optimal transport, and the information between different modalities is aligned. Experimentally, the proposed method could consistently achieve better performance on multiple molecule generation benchmarks with 4.75$\times$ speed up of sampling on average.
|
Equivariant Flow Matching with Hybrid Probability Transport for 3D Molecule Generation
|
[
"Yuxuan Song",
"Jingjing Gong",
"Minkai Xu",
"Ziyao Cao",
"Yanyan Lan",
"Stefano Ermon",
"Hao Zhou",
"Wei-Ying Ma"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=hExFOGZTSt
|
@inproceedings{
cook2023creating,
title={Creating a Public Repository for Joining Private Data},
author={James Cook and Milind Shyani and Nina Mishra},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hExFOGZTSt}
}
|
How can one publish a dataset with sensitive attributes in a way that both preserves privacy and enables joins with other datasets on those same sensitive attributes? This problem arises in many contexts, e.g., a hospital and an airline may want to jointly determine whether people who take long-haul flights are more likely to catch respiratory infections. If they join their data by a common keyed user identifier such as email address, they can determine the answer, though it breaks privacy. This paper shows how the hospital can generate a private sketch and how the airline can privately join with the hospital's sketch by email address. The proposed solution satisfies pure differential privacy and gives approximate answers to linear queries and optimization problems over those joins. Whereas prior work such as secure function evaluation requires sender/receiver interaction, a distinguishing characteristic of the proposed approach is that it is non-interactive. Consequently, the sketch can be published to a repository for any organization to join with, facilitating data discovery. The accuracy of the method is demonstrated through both theoretical analysis and extensive empirical evidence.
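A toy, non-interactive illustration of this setting (not the paper's construction): the hospital publishes a hashed, Laplace-noised histogram keyed by email, and the airline answers an approximate linear query over the join. Bucket count, epsilon, and the single-hash design are illustrative assumptions, and collisions bias this toy estimate.

```python
# Toy sketch: a pure eps-DP sketch published once, joinable by anyone who
# holds the join keys (emails). Illustrative only; not the paper's mechanism.
import hashlib
import numpy as np

def private_sketch(emails, values, n_buckets=1024, eps=1.0, sensitivity=1.0):
    sketch = np.zeros(n_buckets)
    for e, v in zip(emails, values):
        h = int(hashlib.sha256(e.encode()).hexdigest(), 16) % n_buckets
        sketch[h] += v
    # Laplace mechanism gives pure eps-DP for sensitivity-bounded contributions
    return sketch + np.random.laplace(scale=sensitivity / eps, size=n_buckets)

def join_query(sketch, emails, n_buckets=1024):
    idx = [int(hashlib.sha256(e.encode()).hexdigest(), 16) % n_buckets
           for e in emails]
    return sketch[idx].sum()   # approximate linear query over the join
```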
|
Creating a Public Repository for Joining Private Data
|
[
"James Cook",
"Milind Shyani",
"Nina Mishra"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=hElNdYMs8Z
|
@inproceedings{
chen2023a,
title={A Finite-Sample Analysis of Payoff-Based Independent Learning in Zero-Sum Stochastic Games},
author={Zaiwei Chen and Kaiqing Zhang and Eric Mazumdar and Asuman E. Ozdaglar and Adam Wierman},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hElNdYMs8Z}
}
|
In this work, we study two-player zero-sum stochastic games and develop a variant of the smoothed best-response learning dynamics that combines independent learning dynamics for matrix games with the minimax value iteration for stochastic games. The resulting learning dynamics are payoff-based, convergent, rational, and symmetric between the two players. Our theoretical results present to the best of our knowledge the first last-iterate finite-sample analysis of such independent learning dynamics. To establish the results, we develop a coupled Lyapunov drift approach to capture the evolution of multiple sets of coupled and stochastic iterates, which might be of independent interest.
|
A Finite-Sample Analysis of Payoff-Based Independent Learning in Zero-Sum Stochastic Games
|
[
"Zaiwei Chen",
"Kaiqing Zhang",
"Eric Mazumdar",
"Asuman E. Ozdaglar",
"Adam Wierman"
] |
Conference
|
poster
|
2303.03100
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=hE7PG1lUZx
|
@inproceedings{
li2023unitsface,
title={Uni{TSF}ace: Unified Threshold Integrated Sample-to-Sample Loss for Face Recognition},
author={Qiufu Li and Xi Jia and Jiancan Zhou and Linlin Shen and Jinming Duan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hE7PG1lUZx}
}
|
Sample-to-class-based face recognition models cannot fully explore the cross-sample relationship among large amounts of facial images, while sample-to-sample-based models require sophisticated pairing processes for training. Furthermore, neither method satisfies the requirements of real-world face verification applications, which expect a unified threshold separating positive from negative facial pairs. In this paper, we propose a unified threshold integrated sample-to-sample based loss (USS loss), which features an explicit unified threshold for distinguishing positive from negative pairs. Inspired by our USS loss, we also derive the sample-to-sample based softmax and BCE losses, and discuss their relationship. Extensive evaluation on multiple benchmark datasets, including MFR, IJB-C, LFW, CFP-FP, AgeDB, and MegaFace, demonstrates that the proposed USS loss is highly efficient and can work seamlessly with sample-to-class-based losses. The embedded loss (USS and sample-to-class Softmax loss) overcomes the pitfalls of previous approaches, and the trained facial model UniTSFace exhibits exceptional performance, outperforming state-of-the-art methods such as CosFace, ArcFace, VPL, AnchorFace, and UNPG. Our code is available at https://github.com/CVI-SZU/UniTSFace.
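A hedged PyTorch sketch of a unified-threshold, sample-to-sample objective: one learnable threshold separates positive from negative cosine similarities via a binary cross-entropy over all pairs. This is a generic form for illustration, not necessarily the exact USS formulation.

```python
# Generic unified-threshold pairwise loss (illustrative, not the paper's code):
# a single learnable bias t must sit between positive and negative similarities.
import torch
import torch.nn.functional as F

class UnifiedThresholdLoss(torch.nn.Module):
    def __init__(self, scale=32.0):
        super().__init__()
        self.t = torch.nn.Parameter(torch.zeros(1))   # shared threshold
        self.scale = scale

    def forward(self, emb, labels):
        emb = F.normalize(emb, dim=1)
        sim = emb @ emb.T                              # pairwise cosine similarities
        same = labels[:, None] == labels[None, :]
        mask = ~torch.eye(len(labels), dtype=torch.bool, device=emb.device)
        logits = self.scale * (sim - self.t)
        # positives pushed above the threshold, negatives pushed below it
        return F.binary_cross_entropy_with_logits(logits[mask], same[mask].float())
```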
|
UniTSFace: Unified Threshold Integrated Sample-to-Sample Loss for Face Recognition
|
[
"Qiufu Li",
"Xi Jia",
"Jiancan Zhou",
"Linlin Shen",
"Jinming Duan"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=hE5RWzQyvf
|
@inproceedings{
taskesen2023distributionally,
title={Distributionally Robust Linear Quadratic Control},
author={Bahar Taskesen and Dan Andrei Iancu and {\c{C}}a{\u{g}}{\i}l Ko{\c{c}}yi{\u{g}}it and Daniel Kuhn},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hE5RWzQyvf}
}
|
Linear-Quadratic-Gaussian (LQG) control is a fundamental control paradigm that is studied in various fields such as engineering, computer science, economics, and neuroscience. It involves controlling a system with linear dynamics and imperfect observations, subject to additive noise, with the goal of minimizing a quadratic cost function for the state and control variables. In this work, we consider a generalization of the discrete-time, finite-horizon LQG problem, where the noise distributions are unknown and belong to Wasserstein ambiguity sets centered at nominal (Gaussian) distributions. The objective is to minimize a worst-case cost across all distributions in the ambiguity set, including non-Gaussian distributions. Despite the added complexity, we prove that a control policy that is linear in the observations is optimal for this problem, as in the classic LQG problem. We propose a numerical solution method that efficiently characterizes this optimal control policy. Our method uses the Frank-Wolfe algorithm to identify the least-favorable distributions within the Wasserstein ambiguity sets and computes the controller's optimal policy using Kalman filter estimation under these distributions.
|
Distributionally Robust Linear Quadratic Control
|
[
"Bahar Taskesen",
"Dan Andrei Iancu",
"Çağıl Koçyiğit",
"Daniel Kuhn"
] |
Conference
|
spotlight
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=hDajsofjRM
|
@inproceedings{
lin2023online,
title={Online Adaptive Policy Selection in Time-Varying Systems: No-Regret via Contractive Perturbations},
author={Yiheng Lin and James A Preiss and Emile Timothy Anand and Yingying Li and Yisong Yue and Adam Wierman},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hDajsofjRM}
}
|
We study online adaptive policy selection in systems with time-varying costs and dynamics. We develop the Gradient-based Adaptive Policy Selection (GAPS) algorithm together with a general analytical framework for online policy selection via online optimization. Under our proposed notion of contractive policy classes, we show that GAPS approximates the behavior of an ideal online gradient descent algorithm on the policy parameters while requiring less information and computation. When convexity holds, our algorithm is the first to achieve optimal policy regret. When convexity does not hold, we provide the first local regret bound for online policy selection. Our numerical experiments show that GAPS can adapt to changing environments more quickly than existing benchmarks.
|
Online Adaptive Policy Selection in Time-Varying Systems: No-Regret via Contractive Perturbations
|
[
"Yiheng Lin",
"James A Preiss",
"Emile Timothy Anand",
"Yingying Li",
"Yisong Yue",
"Adam Wierman"
] |
Conference
|
poster
|
2210.12320
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=hCg4w8L8Dt
|
@inproceedings{
lu2023knowledge,
title={Knowledge Distillation for High Dimensional Search Index},
author={Zepu Lu and Jin Chen and Defu Lian and ZAIXI ZHANG and Yong Ge and Enhong Chen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hCg4w8L8Dt}
}
|
Lightweight compressed models are prevalent in Approximate Nearest Neighbor Search (ANNS) and Maximum Inner Product Search (MIPS) owing to their superior retrieval efficiency in large-scale datasets. However, results given by compressed methods are less accurate due to the curse of dimensionality and the limitations of optimization objectives (e.g., lacking interactions between queries and documents). Thus, we are encouraged to design a new learning algorithm for the compressed search index on high dimensions to improve retrieval performance. In this paper, we propose a novel Knowledge Distillation for high-dimensional search index framework (KDindex), with the aim of efficiently learning lightweight indexes by distilling knowledge from high-precision ANNS and MIPS models such as graph-based indexes. Specifically, the student is guided to keep the same ranking order of the top-k relevant results yielded by the teacher model, which acts as the additional supervision signal between queries and documents to learn the similarities between documents. Furthermore, to avoid the trivial solution in which all candidates are partitioned to the same centroid, the reconstruction loss that minimizes the compressed error, and the posting list balance strategy that equally allocates the candidates, are integrated into the learning objective. Experimental results demonstrate that KDindex outperforms existing learnable quantization-based indexes and is 40× lighter than the state-of-the-art non-exhaustive methods while achieving comparable recall quality.
|
Knowledge Distillation for High Dimensional Search Index
|
[
"Zepu Lu",
"Jin Chen",
"Defu Lian",
"ZAIXI ZHANG",
"Yong Ge",
"Enhong Chen"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=hCdqDkA25J
|
@inproceedings{
zhang2023optimal,
title={Optimal Guarantees for Algorithmic Reproducibility and Gradient Complexity in Convex Optimization},
author={Liang Zhang and Junchi YANG and Amin Karbasi and Niao He},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hCdqDkA25J}
}
|
Algorithmic reproducibility measures the deviation in outputs of machine learning algorithms upon minor changes in the training process. Previous work suggests that first-order methods would need to trade-off convergence rate (gradient complexity) for better reproducibility. In this work, we challenge this perception and demonstrate that both optimal reproducibility and near-optimal convergence guarantees can be achieved for smooth convex minimization and smooth convex-concave minimax problems under various error-prone oracle settings. Particularly, given the inexact initialization oracle, our regularization-based algorithms achieve the best of both worlds -- optimal reproducibility and near-optimal gradient complexity -- for minimization and minimax optimization. With the inexact gradient oracle, the near-optimal guarantees also hold for minimax optimization. Additionally, with the stochastic gradient oracle, we show that stochastic gradient descent ascent is optimal in terms of both reproducibility and gradient complexity. We believe our results contribute to an enhanced understanding of the reproducibility-convergence trade-off in the context of convex optimization.
|
Optimal Guarantees for Algorithmic Reproducibility and Gradient Complexity in Convex Optimization
|
[
"Liang Zhang",
"Junchi YANG",
"Amin Karbasi",
"Niao He"
] |
Conference
|
spotlight
|
2310.17759
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=hCUG1MCFk5
|
@inproceedings{
li2023on,
title={On the Generalization Properties of Diffusion Models},
author={Puheng Li and Zhong Li and Huishuai Zhang and Jiang Bian},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hCUG1MCFk5}
}
|
Diffusion models are a class of generative models that serve to establish a stochastic transport map between an empirically observed, yet unknown, target distribution and a known prior. Despite their remarkable success in real-world applications, a theoretical understanding of their generalization capabilities remains underdeveloped. This work embarks on a comprehensive theoretical exploration of the generalization attributes of diffusion models. We establish theoretical estimates of the generalization gap that evolves in tandem with the training dynamics of score-based diffusion models, suggesting a polynomially small generalization error ($O(n^{-2/5}+m^{-4/5})$) in both the sample size $n$ and the model capacity $m$, evading the curse of dimensionality (i.e., independent of the data dimension) when *early-stopped*. Furthermore, we extend our quantitative analysis to a *data-dependent* scenario, wherein target distributions are portrayed as a succession of densities with progressively increasing distances between modes. This precisely elucidates the *adverse* effect of "*modes shift*" in ground truths on the model generalization. Moreover, these estimates are not solely theoretical constructs but have also been confirmed through numerical simulations. Our findings contribute to the rigorous understanding of diffusion models' generalization properties and provide insights that may guide practical applications.
|
On the Generalization Properties of Diffusion Models
|
[
"Puheng Li",
"Zhong Li",
"Huishuai Zhang",
"Jiang Bian"
] |
Conference
|
poster
|
2311.01797
|
[
"https://github.com/lphleo/diffusion_generalization"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=h8vJVABiBP
|
@inproceedings{
yang2023learning,
title={Learning Modulated Transformation in {GAN}s},
author={Ceyuan Yang and Qihang Zhang and Yinghao Xu and Jiapeng Zhu and Yujun Shen and Bo Dai},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=h8vJVABiBP}
}
|
The success of style-based generators largely benefits from style modulation, which helps take care of the cross-instance variation within data. However, the instance-wise stochasticity is typically introduced via regular convolution, where kernels interact with features at some fixed locations, limiting its capacity for modeling geometric variation. To alleviate this problem, we equip the generator in generative adversarial networks (GANs) with a plug-and-play module, termed the modulated transformation module (MTM). This module predicts spatial offsets under the control of latent codes, based on which the convolution operation can be applied at variable locations for different instances, and hence offers the model an additional degree of freedom to handle geometry deformation. Extensive experiments suggest that our approach can be faithfully generalized to various generative tasks, including image generation, 3D-aware image synthesis, and video generation, and is compatible with state-of-the-art frameworks without any hyper-parameter tuning. It is noteworthy that, towards human generation on the challenging TaiChi dataset, we improve the FID of StyleGAN3 from 21.36 to 13.60, demonstrating the efficacy of learning modulated geometry transformation. Code and models are available at https://github.com/limbo0000/mtm.
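As a rough PyTorch sketch of the idea (not the released MTM), a latent code can predict per-location sampling offsets that steer a deformable convolution; the layer sizes and offset head below are illustrative assumptions.

```python
# Latent-conditioned offset prediction feeding a deformable convolution,
# sketching "convolution at variable locations" (illustrative, not MTM itself).
import torch
import torchvision.ops as ops

class LatentOffsetConv(torch.nn.Module):
    def __init__(self, dim, latent_dim, k=3):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(dim, dim, k, k) * 0.02)
        # offsets: 2 coordinates per kernel tap, predicted from features + latent
        self.to_offset = torch.nn.Conv2d(dim + latent_dim, 2 * k * k, 3, padding=1)

    def forward(self, x, w):                 # x: (B, C, H, W), w: (B, latent_dim)
        wmap = w[:, :, None, None].expand(-1, -1, x.shape[2], x.shape[3])
        offsets = self.to_offset(torch.cat([x, wmap], dim=1))
        return ops.deform_conv2d(x, offsets, self.weight, padding=1)
```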
|
Learning Modulated Transformation in GANs
|
[
"Ceyuan Yang",
"Qihang Zhang",
"Yinghao Xu",
"Jiapeng Zhu",
"Yujun Shen",
"Bo Dai"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=h6WUKM7PCI
|
@inproceedings{
vishnubhotla2023towards,
title={Towards robust and generalizable representations of extracellular data using contrastive learning},
author={Ankit Vishnubhotla and Charlotte Loh and Akash Srivastava and Liam Paninski and Cole Lincoln Hurwitz},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=h6WUKM7PCI}
}
|
Contrastive learning is quickly becoming an essential tool in neuroscience for extracting robust and meaningful representations of neural activity. Despite numerous applications to neuronal population data, there has been little exploration of how these methods can be adapted to key primary data analysis tasks such as spike sorting or cell-type classification. In this work, we propose a novel contrastive learning framework, CEED (Contrastive Embeddings for Extracellular Data), for high-density extracellular recordings. We demonstrate that through careful design of the network architecture and data augmentations, it is possible to generically extract representations that far outperform current specialized approaches. We validate our method across multiple high-density extracellular recordings. All code used to run CEED can be found at https://github.com/ankitvishnu23/CEED.
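CEED builds on contrastive objectives of the InfoNCE family; a generic PyTorch version is sketched below for orientation. The spike-specific augmentations and encoder design are the paper's contribution and are not reproduced here.

```python
# Generic InfoNCE / NT-Xent-style loss over two augmented views of a batch.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same data."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau                 # cosine similarities, scaled
    targets = torch.arange(z1.shape[0], device=z1.device)
    return F.cross_entropy(logits, targets)  # positives sit on the diagonal
```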
|
Towards robust and generalizable representations of extracellular data using contrastive learning
|
[
"Ankit Vishnubhotla",
"Charlotte Loh",
"Akash Srivastava",
"Liam Paninski",
"Cole Lincoln Hurwitz"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=h4r00NGkjR
|
@inproceedings{
wang2023videocomposer,
title={VideoComposer: Compositional Video Synthesis with Motion Controllability},
author={Xiang Wang and Hangjie Yuan and Shiwei Zhang and Dayou Chen and Jiuniu Wang and Yingya Zhang and Yujun Shen and Deli Zhao and Jingren Zhou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=h4r00NGkjR}
}
|
The pursuit of controllability as a higher standard of visual content creation has yielded remarkable progress in customizable image synthesis. However, achieving controllable video synthesis remains challenging due to the large variation of temporal dynamics and the requirement of cross-frame temporal consistency. Based on the paradigm of compositional generation, this work presents VideoComposer that allows users to flexibly compose a video with textual conditions, spatial conditions, and more importantly temporal conditions. Specifically, considering the characteristic of video data, we introduce the motion vector from compressed videos as an explicit control signal to provide guidance regarding temporal dynamics. In addition, we develop a Spatio-Temporal Condition encoder (STC-encoder) that serves as a unified interface to effectively incorporate the spatial and temporal relations of sequential inputs, with which the model could make better use of temporal conditions and hence achieve higher inter-frame consistency. Extensive experimental results suggest that VideoComposer is able to control the spatial and temporal patterns simultaneously within a synthesized video in various forms, such as text description, sketch sequence, reference video, or even simply hand-crafted motions. The code and models are publicly available at https://videocomposer.github.io.
|
VideoComposer: Compositional Video Synthesis with Motion Controllability
|
[
"Xiang Wang",
"Hangjie Yuan",
"Shiwei Zhang",
"Dayou Chen",
"Jiuniu Wang",
"Yingya Zhang",
"Yujun Shen",
"Deli Zhao",
"Jingren Zhou"
] |
Conference
|
poster
|
2306.02018
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=h3lTrt4Ftb
|
@inproceedings{
hosseini2023large,
title={Large language models implicitly learn to straighten neural sentence trajectories to construct a predictive representation of natural language.},
author={Eghbal A. Hosseini and Evelina Fedorenko},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=h3lTrt4Ftb}
}
|
Predicting upcoming events is critical to our ability to effectively interact with our environment and conspecifics. In natural language processing, transformer models, which are trained on next-word prediction, appear to construct a general-purpose representation of language that can support diverse downstream tasks. However, we still lack an understanding of how a predictive objective shapes such representations. Inspired by recent work in vision neuroscience (Hénaff et al., 2019), here we test a hypothesis about predictive representations of autoregressive transformer models. In particular, we test whether the neural trajectory of a sequence of words in a sentence becomes progressively straighter as it passes through the layers of the network. The key insight behind this hypothesis is that straighter trajectories should facilitate prediction via linear extrapolation. We quantify straightness using a 1-dimensional curvature metric, and present four findings in support of the trajectory straightening hypothesis: i) In trained models, the curvature progressively decreases from the first to the middle layers of the network. ii) Models that perform better on the next-word prediction objective, including larger models and models trained on larger datasets, exhibit greater decreases in curvature, suggesting that this improved ability to straighten sentence neural trajectories may be the underlying driver of better language modeling performance. iii) Given the same linguistic context, the sequences that are generated by the model have lower curvature than the ground truth (the actual continuations observed in a language corpus), suggesting that the model favors straighter trajectories for making predictions. iv) A consistent relationship holds between the average curvature and the average surprisal of sentences in the middle layers of models, such that sentences with straighter neural trajectories also have lower surprisal. Importantly, untrained models do not exhibit these behaviors. In tandem, these results support the trajectory straightening hypothesis and provide a possible mechanism for how the geometry of the internal representations of autoregressive models supports next-word prediction.
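The curvature metric described above can be made concrete as the mean turning angle between consecutive difference vectors of a layer's hidden-state sequence; the NumPy sketch below is one straightforward reading of that definition.

```python
# Average discrete curvature of a token trajectory: straighter sequences of
# hidden states turn less between consecutive steps, giving smaller angles.
import numpy as np

def average_curvature(hidden_states):
    """hidden_states: (seq_len, dim) array of one layer's token representations."""
    v = np.diff(hidden_states, axis=0)                         # consecutive steps
    v = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-12) # unit directions
    cos = np.clip(np.sum(v[:-1] * v[1:], axis=1), -1.0, 1.0)
    return np.arccos(cos).mean()                               # mean turning angle
```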
|
Large language models implicitly learn to straighten neural sentence trajectories to construct a predictive representation of natural language.
|
[
"Eghbal A. Hosseini",
"Evelina Fedorenko"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=h3kuB4z2G9
|
@inproceedings{
shah2023frontdoor,
title={Front-door Adjustment Beyond Markov Equivalence with Limited Graph Knowledge},
author={Abhin Shah and Karthikeyan Shanmugam and Murat Kocaoglu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=h3kuB4z2G9}
}
|
Causal effect estimation from data typically requires assumptions about the cause-effect relations either explicitly in the form of a causal graph structure within the Pearlian framework, or implicitly in terms of (conditional) independence statements between counterfactual variables within the potential outcomes framework. When the treatment variable and the outcome variable are confounded, front-door adjustment is an important special case where, given the graph, causal effect of the treatment on the target can be estimated using post-treatment variables. However, the exact formula for front-door adjustment depends on the structure of the graph, which is difficult to learn in practice. In this work, we provide testable conditional independence statements to compute the causal effect using front-door-like adjustment without knowing the graph under limited structural side information. We show that our method is applicable in scenarios where knowing the Markov equivalence class is not sufficient for causal effect estimation. We demonstrate the effectiveness of our method on a class of random graphs as well as real causal fairness benchmarks.
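For orientation, the classical front-door adjustment, which presumes knowledge of the graph and a valid mediator $M$, computes the causal effect as:

```latex
P(y \mid \mathrm{do}(x)) \;=\; \sum_{m} P(m \mid x) \sum_{x'} P(y \mid m, x')\, P(x').
```

The contribution above is to recover front-door-like estimates of this kind when the graph structure is only partially known.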
|
Front-door Adjustment Beyond Markov Equivalence with Limited Graph Knowledge
|
[
"Abhin Shah",
"Karthikeyan Shanmugam",
"Murat Kocaoglu"
] |
Conference
|
poster
|
2306.11008
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=h3QNH3qeC3
|
@inproceedings{
liu2023customizable,
title={Customizable Image Synthesis with Multiple Subjects},
author={Zhiheng Liu and Yifei Zhang and Yujun Shen and Kecheng Zheng and Kai Zhu and Ruili Feng and Yu Liu and Deli Zhao and Jingren Zhou and Yang Cao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=h3QNH3qeC3}
}
|
Synthesizing images with user-specified subjects has received growing attention due to its practical applications. Despite the recent success in single-subject customization, existing algorithms suffer from high training cost and low success rate as the number of subjects increases. Towards controllable image synthesis with multiple subjects as the constraints, this work studies how to efficiently represent a particular subject as well as how to appropriately compose different subjects. We find that the text embedding regarding the subject token already serves as a simple yet effective representation that supports arbitrary combinations without any model tuning. Through learning a residual on top of the base embedding, we manage to robustly shift the raw subject to the customized subject given various text conditions. We then propose to employ layout, a very abstract and easy-to-obtain prior, as the spatial guidance for subject arrangement. By rectifying the activations in the cross-attention map, the layout appoints and separates the locations of different subjects in the image. Using the cross-attention map as the intermediary, we could strengthen the signal of target subjects and weaken the signal of irrelevant subjects within a certain region, significantly alleviating the interference across subjects. Both qualitative and quantitative experimental results demonstrate our superiority over state-of-the-art alternatives under a variety of settings for multi-subject customization.
|
Customizable Image Synthesis with Multiple Subjects
|
[
"Zhiheng Liu",
"Yifei Zhang",
"Yujun Shen",
"Kecheng Zheng",
"Kai Zhu",
"Ruili Feng",
"Yu Liu",
"Deli Zhao",
"Jingren Zhou",
"Yang Cao"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=h3MShWMxNt
|
@inproceedings{
patro2023scattering,
title={Scattering Vision Transformer: Spectral Mixing Matters},
author={Badri Narayana Patro and Vijay Srinivas Agneeswaran},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=h3MShWMxNt}
}
|
Vision transformers have gained significant attention and achieved state-of-the-art performance in various computer vision tasks, including image classification, instance segmentation, and object detection. However, challenges remain in addressing attention complexity and effectively capturing fine-grained information within images. Existing solutions often resort to down-sampling operations, such as pooling, to reduce computational cost. Unfortunately, such operations are non-invertible and can result in information loss. In this paper, we present a novel approach called Scattering Vision Transformer (SVT) to tackle these challenges. SVT incorporates a spectral scattering network that enables the capture of intricate image details. SVT overcomes the invertibility issue associated with down-sampling operations by separating low-frequency and high-frequency components. Furthermore, SVT introduces a unique spectral gating network utilizing Einstein multiplication for token and channel mixing, effectively reducing complexity. We show that SVT achieves state-of-the-art performance on the ImageNet dataset with a significant reduction in the number of parameters and FLOPs. SVT shows a 2\% improvement over LiTv2 and iFormer: SVT-H-S reaches 84.2\% top-1 accuracy, while SVT-H-B reaches 85.2\% (state-of-the-art among base versions) and SVT-H-L reaches 85.7\% (state-of-the-art among large versions). SVT also shows comparable results in other vision tasks such as instance segmentation, and outperforms other transformers in transfer learning on standard datasets such as CIFAR10, CIFAR100, Oxford Flower, and Stanford Car. The project page is available at \url{https://badripatro.github.io/svt/}.
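A hedged sketch of spectral token mixing with an Einstein-multiplication gate: transform along the token axis with an FFT, apply a learned complex-valued gate via einsum, and invert. Shapes and the gating form are illustrative assumptions, not SVT's exact operators.

```python
# Spectral gating sketch: mix tokens in the frequency domain with a learned
# complex gate applied via einsum (illustrative; not the paper's architecture).
import torch

class SpectralGate(torch.nn.Module):
    def __init__(self, n_tokens, dim):
        super().__init__()
        # one complex gate per (frequency, channel) pair, stored as (re, im)
        self.gate = torch.nn.Parameter(torch.randn(n_tokens // 2 + 1, dim, 2) * 0.02)

    def forward(self, x):                      # x: (batch, n_tokens, dim)
        xf = torch.fft.rfft(x, dim=1)          # mix along the token axis
        g = torch.view_as_complex(self.gate)
        xf = torch.einsum('bnd,nd->bnd', xf, g)
        return torch.fft.irfft(xf, n=x.shape[1], dim=1)
```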
|
Scattering Vision Transformer: Spectral Mixing Matters
|
[
"Badri Narayana Patro",
"Vijay Srinivas Agneeswaran"
] |
Conference
|
poster
|
2311.01310
|
[
""
] |
https://huggingface.co/papers/2311.01310
| 1 | 0 | 0 | 2 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=h3CGHf7457
|
@inproceedings{
xu2023multimodal,
title={Multi-modal Queried Object Detection in the Wild},
author={Yifan Xu and Mengdan Zhang and Chaoyou Fu and Peixian Chen and Xiaoshan Yang and Ke Li and Changsheng Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=h3CGHf7457}
}
|
We introduce MQ-Det, an efficient architecture and pre-training strategy designed to utilize both textual descriptions, with their open-set generalization, and visual exemplars, with their rich description granularity, as category queries, namely, Multi-modal Queried object Detection, for real-world detection with both open-vocabulary categories and various granularity. MQ-Det incorporates vision queries into existing well-established language-queried-only detectors. A plug-and-play gated class-scalable perceiver module upon the frozen detector is proposed to augment category text with class-wise visual information. To address the learning inertia problem brought by the frozen detector, a vision-conditioned masked language prediction strategy is proposed. MQ-Det's simple yet effective architecture and training strategy design is compatible with most language-queried object detectors, thus yielding versatile applications. Experimental results demonstrate that multi-modal queries largely boost open-world detection. For instance, MQ-Det significantly improves the state-of-the-art open-set detector GLIP by +7.8% AP on the LVIS benchmark via multi-modal queries without any downstream finetuning, and by an average of +6.3% AP on 13 few-shot downstream tasks, with merely 3% additional modulating time on top of GLIP. Code is available at https://github.com/YifanXu74/MQ-Det.
|
Multi-modal Queried Object Detection in the Wild
|
[
"Yifan Xu",
"Mengdan Zhang",
"Chaoyou Fu",
"Peixian Chen",
"Xiaoshan Yang",
"Ke Li",
"Changsheng Xu"
] |
Conference
|
poster
|
2305.18980
|
[
"https://github.com/yifanxu74/mq-det"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=h2lkx9SQCD
|
@inproceedings{
ganesh2023faster,
title={Faster Differentially Private Convex Optimization via Second-Order Methods},
author={Arun Ganesh and Mahdi Haghifam and Thomas Steinke and Abhradeep Guha Thakurta},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=h2lkx9SQCD}
}
|
Differentially private (stochastic) gradient descent is the workhorse of differentially private machine learning in both the convex and non-convex settings. Without privacy constraints, second-order methods, like Newton's method, converge faster than first-order methods like gradient descent. In this work, we investigate the prospect of using the second-order information from the loss function to accelerate DP convex optimization. We first develop a private variant of the regularized cubic Newton method of Nesterov and Polyak, and show that for the class of strongly convex loss functions, our algorithm has quadratic convergence and achieves the optimal excess loss. We then design a practical second-order DP algorithm for the unconstrained logistic regression problem. We theoretically and empirically study the performance of our algorithm. Empirical results show our algorithm consistently achieves the best excess loss compared to other baselines and is 10-40x faster than DP-GD/DP-SGD on challenging datasets.
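Purely for intuition, a toy private Newton-style step for logistic regression might clip per-example gradients and perturb both the gradient and the Hessian before solving. The noise scales below are placeholders, not calibrated mechanisms, and this is not the cubic-regularized method analyzed in the paper.

```python
# Toy noisy-Newton step for logistic regression (illustrative sketch only).
import numpy as np

def dp_newton_step(X, y, w, clip=1.0, noise=0.1, reg=1e-2, rng=np.random):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    per_ex = (p - y)[:, None] * X                        # per-example gradients
    norms = np.maximum(1.0, np.linalg.norm(per_ex, axis=1) / clip)
    g = (per_ex / norms[:, None]).mean(0) + rng.normal(0, noise, w.shape)
    H = (X.T * (p * (1 - p))) @ X / len(y) + reg * np.eye(len(w))
    H += rng.normal(0, noise, H.shape)
    H = (H + H.T) / 2                                    # symmetrize noisy Hessian
    return w - np.linalg.solve(H, g)
```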
|
Faster Differentially Private Convex Optimization via Second-Order Methods
|
[
"Arun Ganesh",
"Mahdi Haghifam",
"Thomas Steinke",
"Abhradeep Guha Thakurta"
] |
Conference
|
poster
|
2305.13209
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=h1FhXVM0cB
|
@inproceedings{
nguyen2023improved,
title={Improved Convergence in High Probability of Clipped Gradient Methods with Heavy Tailed Noise},
author={Ta Duy Nguyen and Thien Hang Nguyen and Alina Ene and Huy Nguyen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=h1FhXVM0cB}
}
|
In this work, we study the convergence in high probability of clipped gradient methods when the noise distribution has heavy tails, i.e., with bounded $p$th moments, for some $1<p\le2$. Prior works in this setting follow the same recipe of using concentration inequalities and an inductive argument with union bound to bound the iterates across all iterations. This method results in an increase in the failure probability by a factor of $T$, where $T$ is the number of iterations. We instead propose a new analysis approach based on bounding the moment generating function of a well chosen supermartingale sequence. We improve the dependency on $T$ in the convergence guarantee for a wide range of algorithms with clipped gradients, including stochastic (accelerated) mirror descent for convex objectives and stochastic gradient descent for nonconvex objectives. Our high probability bounds achieve the optimal convergence rates and match the best currently known in-expectation bounds. Our approach naturally allows the algorithms to use time-varying step sizes and clipping parameters when the time horizon is unknown, which appears difficult or even impossible using the techniques from prior works. Furthermore, we show that in the case of clipped stochastic mirror descent, several problem constants, including the initial distance to the optimum, are not required when setting step sizes and clipping parameters.
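The algorithm family analyzed above is simple to state; a minimal sketch of clipped SGD with time-varying step size and clipping radius follows (the schedules are illustrative, not the paper's tuned choices).

```python
# Clipped SGD: rescale the stochastic gradient to norm at most c_t before the
# descent step, which controls heavy-tailed noise.
import numpy as np

def clipped_sgd(grad_fn, x0, T=1000, eta0=0.1, c0=1.0):
    x = np.array(x0, dtype=float)
    for t in range(1, T + 1):
        g = grad_fn(x)                    # stochastic, possibly heavy-tailed
        g = g * min(1.0, c0 * t**0.25 / np.linalg.norm(g))   # clip to radius c_t
        x -= (eta0 / t**0.5) * g          # decaying step size
    return x
```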
|
Improved Convergence in High Probability of Clipped Gradient Methods with Heavy Tailed Noise
|
[
"Ta Duy Nguyen",
"Thien Hang Nguyen",
"Alina Ene",
"Huy Nguyen"
] |
Conference
|
spotlight
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=h0RVoZuUl6
|
@inproceedings{
hounie2023resilient,
title={Resilient Constrained Learning},
author={Ignacio Hounie and Alejandro Ribeiro and Luiz F. O. Chamon},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=h0RVoZuUl6}
}
|
When deploying machine learning solutions, they must satisfy multiple requirements beyond accuracy, such as fairness, robustness, or safety. These requirements are imposed during training either implicitly, using penalties, or explicitly, using constrained optimization methods based on Lagrangian duality. Either way, specifying requirements is hindered by the presence of compromises and limited prior knowledge about the data. Furthermore, their impact on performance can often only be evaluated by actually solving the learning problem. This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task. To do so, it relaxes the learning constraints in a way that contemplates how much they affect the task at hand by balancing the performance gains obtained from the relaxation against a user-defined cost of that relaxation. We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation. We show conditions under which this balance can be achieved and introduce a practical algorithm to compute it, for which we derive approximation and generalization guarantees. We showcase the advantages of this resilient learning method in image classification tasks involving multiple potential invariances and in federated learning under distribution shift.
|
Resilient Constrained Learning
|
[
"Ignacio Hounie",
"Alejandro Ribeiro",
"Luiz F. O. Chamon"
] |
Conference
|
poster
|
2306.02426
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gzCS252hCO
|
@inproceedings{
le2023voicebox,
title={Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale},
author={Matthew Le and Apoorv Vyas and Bowen Shi and Brian Karrer and Leda Sari and Rashel Moritz and Mary Williamson and Vimal Manohar and Yossi Adi and Jay Mahadeokar and Wei-Ning Hsu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gzCS252hCO}
}
|
Large-scale generative models such as GPT and DALL-E have revolutionized the research community. These models not only generate high fidelity outputs, but are also generalists which can solve tasks not explicitly taught. In contrast, speech generative models are still primitive in terms of scale and task generalization. In this paper, we present Voicebox, the most versatile text-guided generative model for speech at scale. Voicebox is a non-autoregressive flow-matching model trained to infill speech given audio context and text, using over 50K hours of speech that are neither filtered nor enhanced. Similar to GPT, Voicebox can perform many different tasks through in-context learning, but is more flexible as it can also condition on future context. Voicebox can be used for mono- or cross-lingual zero-shot text-to-speech synthesis, noise removal, content editing, style conversion, and diverse sample generation. In particular, Voicebox outperforms the state-of-the-art zero-shot TTS model VALL-E on both intelligibility (5.9\% vs 1.9\% word error rates) and audio similarity (0.580 vs 0.681) while being up to 20 times faster. Audio samples can be found in \url{https://voicebox.metademolab.com}.
|
Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale
|
[
"Matthew Le",
"Apoorv Vyas",
"Bowen Shi",
"Brian Karrer",
"Leda Sari",
"Rashel Moritz",
"Mary Williamson",
"Vimal Manohar",
"Yossi Adi",
"Jay Mahadeokar",
"Wei-Ning Hsu"
] |
Conference
|
poster
|
2306.15687
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gySmwdmVDF
|
@inproceedings{
hou2023querybased,
title={Query-based Temporal Fusion with Explicit Motion for 3D Object Detection},
author={Jinghua Hou and Zhe Liu and dingkang liang and Zhikang Zou and Xiaoqing Ye and Xiang Bai},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gySmwdmVDF}
}
|
Effectively utilizing temporal information to improve 3D detection performance is vital for autonomous driving vehicles. Existing methods conduct temporal fusion based on either dense BEV features or sparse 3D proposal features. However, the former does not pay enough attention to foreground objects, leading to higher computation costs and sub-optimal performance. The latter implements time-consuming operations to generate sparse 3D proposal features, and the performance is limited by the quality of 3D proposals. In this paper, we propose a simple and effective Query-based Temporal Fusion Network (QTNet). The main idea is to exploit the object queries in previous frames to enhance the representation of current object queries via the proposed Motion-guided Temporal Modeling (MTM) module, which utilizes the spatial position information of object queries along the temporal dimension to reliably construct their relevance between adjacent frames. Experimental results show that our proposed QTNet outperforms BEV-based and proposal-based approaches on the nuScenes dataset. Besides, the MTM is a plug-and-play module that can be integrated into advanced LiDAR-only or multi-modality 3D detectors, and it even brings new SOTA performance with negligible computation cost and latency on the nuScenes dataset. These experiments illustrate the superiority and generalization of our method. The code is available at https://github.com/AlmoonYsl/QTNet.
|
Query-based Temporal Fusion with Explicit Motion for 3D Object Detection
|
[
"Jinghua Hou",
"Zhe Liu",
"dingkang liang",
"Zhikang Zou",
"Xiaoqing Ye",
"Xiang Bai"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=gx20B4ItIw
|
@inproceedings{
guo2023emergent,
title={Emergent Communication for Rules Reasoning},
author={Yuxuan Guo and Yifan Hao and Rui Zhang and Enshuai Zhou and Zidong Du and Xishan Zhang and Xinkai Song and Yuanbo Wen and Yongwei Zhao and Xuehai Zhou and Jiaming Guo and Qi Yi and Shaohui Peng and Di Huang and Ruizhi Chen and Qi Guo and Yunji Chen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gx20B4ItIw}
}
|
Research on emergent communication between deep-learning-based agents has received extensive attention due to its inspiration for linguistics and artificial intelligence. However, previous attempts have hovered around emergent communication under perception-oriented environmental settings, which force agents to describe low-level perceptual features within image or symbol contexts. In this work, inspired by the classic human reasoning test (namely, Raven's Progressive Matrices), we propose the Reasoning Game, a cognition-oriented environment that encourages agents to reason about and communicate high-level rules rather than perceived low-level contexts. Moreover, we propose 1) an unbiased dataset (namely, rule-RAVEN) as a benchmark to avoid overfitting, and 2) a two-stage curriculum agent training method as a baseline for more stable convergence in the Reasoning Game, where contexts and semantics are bilaterally drifting. Experimental results show that, in the Reasoning Game, a semantically stable and compositional language emerges to solve reasoning problems. The emergent language helps agents apply the extracted rules to the generalization of unseen context attributes, and to the transfer between different context attributes or even tasks.
|
Emergent Communication for Rules Reasoning
|
[
"Yuxuan Guo",
"Yifan Hao",
"Rui Zhang",
"Enshuai Zhou",
"Zidong Du",
"Xishan Zhang",
"Xinkai Song",
"Yuanbo Wen",
"Yongwei Zhao",
"Xuehai Zhou",
"Jiaming Guo",
"Qi Yi",
"Shaohui Peng",
"Di Huang",
"Ruizhi Chen",
"Qi Guo",
"Yunji Chen"
] |
Conference
|
poster
|
2311.04474
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gwvwbsnTps
|
@inproceedings{
gollapudi2023composable,
title={Composable Coresets for Determinant Maximization: Greedy is Almost Optimal},
author={Siddharth Gollapudi and Sepideh Mahabadi and Varun Sivashankar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gwvwbsnTps}
}
|
Given a set of $n$ vectors in $\mathbb{R}^d$, the goal of the \emph{determinant maximization} problem is to pick $k$ vectors with the maximum volume. Determinant maximization is the MAP-inference task for determinantal point processes (DPP) and has recently received considerable attention for modeling diversity. As most applications for the problem use large amounts of data, this problem has been studied in the relevant \textit{composable coreset} setting. In particular, [Indyk-Mahabadi-OveisGharan-Rezaei--SODA'20, ICML'19] showed that one can get composable coresets with optimal approximation factor of $\tilde O(k)^k$ for the problem, and that a local search algorithm achieves an almost optimal approximation guarantee of $O(k)^{2k}$. In this work, we show that the widely-used Greedy algorithm also provides composable coresets with an almost optimal approximation factor of $O(k)^{3k}$, which improves over the previously known guarantee of $C^{k^2}$, and supports the prior experimental results showing the practicality of the greedy algorithm as a coreset. Our main result follows by showing a local optimality property for Greedy: swapping a single point from the greedy solution with a vector that was not picked by the greedy algorithm can increase the volume by a factor of at most $(1+\sqrt{k})$. This is tight up to the additive constant $1$. Finally, our experiments show that the local optimality of the greedy algorithm is even lower than the theoretical bound on real data sets.
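The Greedy algorithm analyzed above is the standard volume-greedy rule: repeatedly pick the vector with the longest residual orthogonal to the span of the vectors picked so far. A NumPy sketch, assuming the candidates are in general position:

```python
# Greedy determinant (volume) maximization via Gram-Schmidt residuals.
import numpy as np

def greedy_max_volume(V, k):
    """V: (n, d) array whose rows are candidate vectors; returns k indices."""
    chosen, Q = [], np.zeros((0, V.shape[1]))
    for _ in range(k):
        resid = V - (V @ Q.T) @ Q            # project out the chosen span
        lengths = np.linalg.norm(resid, axis=1)
        lengths[chosen] = -1.0               # never re-pick a vector
        i = int(np.argmax(lengths))
        chosen.append(i)
        Q = np.vstack([Q, resid[i] / np.linalg.norm(resid[i])])
    return chosen
```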
|
Composable Coresets for Determinant Maximization: Greedy is Almost Optimal
|
[
"Siddharth Gollapudi",
"Sepideh Mahabadi",
"Varun Sivashankar"
] |
Conference
|
poster
|
2309.15286
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=guyhQMSp2F
|
@inproceedings{
heo2023use,
title={Use perturbations when learning from explanations},
author={Juyeon Heo and Vihari Piratla and Matthew Robert Wicker and Adrian Weller},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=guyhQMSp2F}
}
|
Machine learning from explanations (MLX) is an approach to learning that uses human-provided explanations of relevant or irrelevant features for each input to ensure that model predictions are right for the right reasons. Existing MLX approaches rely on local model interpretation methods and require strong model smoothing to align model and human explanations, leading to sub-optimal performance. We recast MLX as a robustness problem, where human explanations specify a lower dimensional manifold from which perturbations can be drawn, and show both theoretically and empirically how this approach alleviates the need for strong model smoothing. We consider various approaches to achieving robustness, leading to improved performance over prior MLX methods. Finally, we show how to combine robustness with an earlier MLX method, yielding state-of-the-art results on both synthetic and real-world benchmarks.
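One way to read the robustness recasting above: draw perturbations only along the human-marked irrelevant features and penalize prediction changes. The sketch below is a minimal version of that idea, with the noise scale and sample count as illustrative assumptions rather than the paper's procedure.

```python
# Perturbation-based MLX penalty: predictions should be invariant to noise on
# features a human explanation marked irrelevant (illustrative sketch).
import torch

def perturbation_penalty(model, x, irrelevant_mask, sigma=0.1, n=4):
    base = model(x)
    pen = 0.0
    for _ in range(n):
        noise = sigma * torch.randn_like(x) * irrelevant_mask
        pen = pen + ((model(x + noise) - base) ** 2).mean()
    return pen / n   # add to the task loss with a tunable weight
```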
|
Use perturbations when learning from explanations
|
[
"Juyeon Heo",
"Vihari Piratla",
"Matthew Robert Wicker",
"Adrian Weller"
] |
Conference
|
poster
|
2303.06419
|
[
"https://github.com/vihari/robust_mlx"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gsi9lJ3994
|
@inproceedings{
li2023nvfi,
title={{NVF}i: Neural Velocity Fields for 3D Physics Learning from Dynamic Videos},
author={Jinxi Li and Ziyang Song and Bo Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gsi9lJ3994}
}
|
In this paper, we aim to model 3D scene dynamics from multi-view videos. Unlike the majority of existing works which usually focus on the common task of novel view synthesis within the training time period, we propose to simultaneously learn the geometry, appearance, and physical velocity of 3D scenes only from video frames, such that multiple desirable applications can be supported, including future frame extrapolation, unsupervised 3D semantic scene decomposition, and dynamic motion transfer. Our method consists of three major components, 1) the keyframe dynamic radiance field, 2) the interframe velocity field, and 3) a joint keyframe and interframe optimization module which is the core of our framework to effectively train both networks. To validate our method, we further introduce two dynamic 3D datasets: 1) Dynamic Object dataset, and 2) Dynamic Indoor Scene dataset. We conduct extensive experiments on multiple datasets, demonstrating the superior performance of our method over all baselines, particularly in the critical tasks of future frame extrapolation and unsupervised 3D semantic scene decomposition.
|
NVFi: Neural Velocity Fields for 3D Physics Learning from Dynamic Videos
|
[
"Jinxi Li",
"Ziyang Song",
"Bo Yang"
] |
Conference
|
poster
|
2312.06398
|
[
"https://github.com/vlar-group/nvfi"
] |
https://huggingface.co/papers/2312.06398
| 0 | 1 | 0 | 3 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=gsglrhvQxX
|
@inproceedings{
yu2023flowbased,
title={Flow-Based Feature Fusion for Vehicle-Infrastructure Cooperative 3D Object Detection},
author={Haibao Yu and Yingjuan Tang and Enze Xie and Jilei Mao and Ping Luo and Zaiqing Nie},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gsglrhvQxX}
}
|
Cooperatively utilizing both ego-vehicle and infrastructure sensor data can significantly enhance autonomous driving perception abilities. However, the uncertain temporal asynchrony and limited communication conditions that are present in traffic environments can lead to fusion misalignment and constrain the exploitation of infrastructure data. To address these issues in vehicle-infrastructure cooperative 3D (VIC3D) object detection, we propose the Feature Flow Net (FFNet), a novel cooperative detection framework. FFNet is a flow-based feature fusion framework that uses a feature flow prediction module to predict future features and compensate for asynchrony. Instead of transmitting feature maps extracted from still images, FFNet transmits feature flow, leveraging the temporal coherence of sequential infrastructure frames. Furthermore, we introduce a self-supervised training approach that enables FFNet to generate feature flow with feature prediction ability from raw infrastructure sequences. Experimental results demonstrate that our proposed method outperforms existing cooperative detection methods while requiring only about 1/100 of the transmission cost of raw data, and a single model covers all latency values on the DAIR-V2X dataset. The code is available at https://github.com/haibao-yu/FFNet-VIC3D.
|
Flow-Based Feature Fusion for Vehicle-Infrastructure Cooperative 3D Object Detection
|
[
"Haibao Yu",
"Yingjuan Tang",
"Enze Xie",
"Jilei Mao",
"Ping Luo",
"Zaiqing Nie"
] |
Conference
|
poster
|
2311.01682
|
[
"https://github.com/haibao-yu/ffnet-vic3d"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gq4xkwQZ1l
|
@inproceedings{
tewari2023diffusion,
title={Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision},
author={Ayush Tewari and Tianwei Yin and George Cazenavette and Semon Rezchikov and Joshua B. Tenenbaum and Fredo Durand and William T. Freeman and Vincent Sitzmann},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gq4xkwQZ1l}
}
|
Denoising diffusion models are a powerful type of generative model used to capture complex distributions of real-world signals. However, their applicability is limited to scenarios where training samples are readily available, which is not always the case in real-world applications. For example, in inverse graphics, the goal is to generate samples from a distribution of 3D scenes that align with a given image, but ground-truth 3D scenes are unavailable and only 2D images are accessible. To address this limitation, we propose a novel class of denoising diffusion probabilistic models that learn to sample from distributions of signals that are never directly observed. Instead, these signals are measured indirectly through a known differentiable forward model, which produces partial observations of the unknown signal. A key contribution of our work is the integration of the differentiable forward model directly into the denoising process. This integration effectively connects the generative modeling of observations with the generative modeling of the underlying signals, allowing for end-to-end training of a conditional generative model over signals. During inference, our approach enables sampling from the distribution of underlying signals that are consistent with a given partial observation. We demonstrate the effectiveness of our method on three challenging computer vision tasks. For instance, in the context of inverse graphics, our model enables direct sampling from the distribution of 3D scenes that align with a single 2D input image.
|
Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision
|
[
"Ayush Tewari",
"Tianwei Yin",
"George Cazenavette",
"Semon Rezchikov",
"Joshua B. Tenenbaum",
"Fredo Durand",
"William T. Freeman",
"Vincent Sitzmann"
] |
Conference
|
spotlight
|
2306.11719
|
[
""
] |
https://huggingface.co/papers/2306.11719
| 4 | 7 | 1 | 8 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=gpyeRyc858
|
@inproceedings{
kim2023neural,
title={Neural Relation Graph: A Unified Framework for Identifying Label Noise and Outlier Data},
author={Jang-Hyun Kim and Sangdoo Yun and Hyun Oh Song},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gpyeRyc858}
}
|
Diagnosing and cleaning data is a crucial step for building robust machine learning systems. However, identifying problems within large-scale datasets with real-world distributions is challenging due to the presence of complex issues such as label errors, under-representation, and outliers. In this paper, we propose a unified approach for identifying the problematic data by utilizing a largely ignored source of information: a relational structure of data in the feature-embedded space. To this end, we present scalable and effective algorithms for detecting label errors and outlier data based on the relational graph structure of data. We further introduce a visualization tool that provides contextual information of a data point in the feature-embedded space, serving as an effective tool for interactively diagnosing data. We evaluate the label error and outlier/out-of-distribution (OOD) detection performances of our approach on the large-scale image, speech, and language domain tasks, including ImageNet, ESC-50, and SST2. Our approach achieves state-of-the-art detection performance on all tasks considered and demonstrates its effectiveness in debugging large-scale real-world datasets across various domains. We release codes at https://github.com/snu-mllab/Neural-Relation-Graph.
|
Neural Relation Graph: A Unified Framework for Identifying Label Noise and Outlier Data
|
[
"Jang-Hyun Kim",
"Sangdoo Yun",
"Hyun Oh Song"
] |
Conference
|
poster
|
2301.12321
|
[
"https://github.com/snu-mllab/neural-relation-graph"
] |
https://huggingface.co/papers/2301.12321
| 0 | 1 | 0 | 3 | 1 |
[] |
[] |
[] |
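The relational-structure idea above can be illustrated with a short sketch that scores each sample by how strongly its label disagrees with similarity-weighted neighbor labels in the embedding space; the softmax kernel and temperature are assumptions here, not the paper's exact formulation:

```python
import numpy as np

def label_error_scores(features, labels, temperature=0.1):
    """Score samples by label disagreement with embedding-space neighbors."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)            # ignore self-similarity
    w = np.exp(sim / temperature)
    w /= w.sum(axis=1, keepdims=True)
    agree = (labels[None, :] == labels[:, None]).astype(float)
    # High score = neighbors mostly carry a different label -> suspected error.
    return 1.0 - (w * agree).sum(axis=1)

rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 32))
labels = rng.integers(0, 5, size=100)
print(label_error_scores(feats, labels)[:5])
```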
null |
https://openreview.net/forum?id=gpqBGyKeKH
|
@inproceedings{
wang2023spectral,
title={Spectral Evolution and Invariance in Linear-width Neural Networks},
author={Zhichao Wang and Andrew William Engel and Anand Sarwate and Ioana Dumitriu and Tony Chiang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gpqBGyKeKH}
}
|
We investigate the spectral properties of linear-width feed-forward neural networks, where the sample size is asymptotically proportional to network width. Empirically, we show that the spectra of weight matrices in this high-dimensional regime are invariant when trained by gradient descent for small constant learning rates; we provide a theoretical justification for this observation and prove the invariance of the bulk spectra for both conjugate and neural tangent kernels. We demonstrate similar characteristics when training with stochastic gradient descent with small learning rates. When the learning rate is large, we exhibit the emergence of an outlier whose corresponding eigenvector is aligned with the training data structure. We also show that after adaptive gradient training, where a lower test error and feature learning emerge, both weight and kernel matrices exhibit heavy-tailed behavior. Simple examples are provided to explain when heavy tails can lead to better generalization. We exhibit different spectral properties such as invariant bulk, spike, and heavy-tailed distribution from a two-layer neural network using different training strategies, and then correlate them with feature learning. Analogous phenomena also appear when we train conventional neural networks with real-world data. We conclude that monitoring the evolution of the spectra during training is an essential step toward understanding the training dynamics and feature learning.
|
Spectral Evolution and Invariance in Linear-width Neural Networks
|
[
"Zhichao Wang",
"Andrew William Engel",
"Anand Sarwate",
"Ioana Dumitriu",
"Tony Chiang"
] |
Conference
|
poster
|
2211.06506
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gpJw8f4tIU
|
@inproceedings{
sun2023contrastive,
title={Contrastive Retrospection: honing in on critical steps for rapid learning and generalization in {RL}},
author={Chen Sun and Wannan Yang and Thomas Jiralerspong and Dane Malenfant and Benjamin Alsbury-Nealy and Yoshua Bengio and Blake Aaron Richards},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gpJw8f4tIU}
}
|
In real life, success is often contingent upon multiple critical steps that are distant in time from each other and from the final reward. These critical steps are challenging to identify with traditional reinforcement learning (RL) methods that rely on the Bellman equation for credit assignment. Here, we present a new RL algorithm that uses offline contrastive learning to hone in on these critical steps. This algorithm, which we call Contrastive Retrospection (ConSpec), can be added to any existing RL algorithm. ConSpec learns a set of prototypes for the critical steps in a task by a novel contrastive loss and delivers an intrinsic reward when the current state matches one of the prototypes. The prototypes in ConSpec provide two key benefits for credit assignment: (i) They enable rapid identification of all the critical steps. (ii) They do so in a readily interpretable manner, enabling out-of-distribution generalization when sensory features are altered. Distinct from other contemporary RL approaches to credit assignment, ConSpec takes advantage of the fact that it is easier to retrospectively identify the small set of steps that success is contingent upon (while ignoring other states) than it is to prospectively predict reward at every step taken. ConSpec greatly improves learning in a diverse set of RL tasks. The code is available at the link: https://github.com/sunchipsster1/ConSpec
|
Contrastive Retrospection: honing in on critical steps for rapid learning and generalization in RL
|
[
"Chen Sun",
"Wannan Yang",
"Thomas Jiralerspong",
"Dane Malenfant",
"Benjamin Alsbury-Nealy",
"Yoshua Bengio",
"Blake Aaron Richards"
] |
Conference
|
poster
|
2210.05845
|
[
"https://github.com/sunchipsster1/conspec"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
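As a rough illustration of the prototype-matching intrinsic reward described in the ConSpec abstract above (the cosine-similarity form and the threshold are assumed for the sketch, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def conspec_style_intrinsic_reward(state_embedding, prototypes, threshold=0.6):
    """Reward 1.0 when the current state embedding is close (cosine similarity)
    to any learned critical-step prototype, else 0.0 (illustrative form)."""
    sims = F.cosine_similarity(state_embedding.unsqueeze(0), prototypes, dim=1)
    return (sims.max() > threshold).float()

protos = F.normalize(torch.randn(8, 128), dim=1)  # 8 hypothetical prototypes
state = torch.randn(128)
print(conspec_style_intrinsic_reward(state, protos))
```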
null |
https://openreview.net/forum?id=gmmXyAq8TI
|
@inproceedings{
zhang2023coop,
title={Coop: Memory is not a Commodity},
author={Jianhao Zhang and Shihan Ma and Peihong Liu and Jinhui Yuan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gmmXyAq8TI}
}
|
Tensor rematerialization allows the training of deep neural networks (DNNs) under limited memory budgets by checkpointing the models and recomputing the evicted tensors as needed. However, the existing tensor rematerialization techniques overlook the memory system in deep learning frameworks and implicitly assume that free memory blocks at different addresses are identical. Under this flawed assumption, discontiguous tensors are evicted, among which some are not used to allocate the new tensor. This leads to severe memory fragmentation and increases the cost of potential rematerializations.
To address this issue, we propose to evict tensors within a sliding window, ensuring that all evictions are contiguous and immediately used. Furthermore, we propose cheap tensor partitioning and recomputable in-place operations to further reduce the rematerialization cost by optimizing tensor allocation.
We name our method Coop, as it is a co-optimization of tensor allocation and tensor rematerialization. We evaluated Coop on eight representative DNNs. The experimental results demonstrate that Coop achieves up to $2\times$ memory savings and substantially reduces compute overhead, search latency, and memory fragmentation compared to the state-of-the-art baselines.
|
Coop: Memory is not a Commodity
|
[
"Jianhao Zhang",
"Shihan Ma",
"Peihong Liu",
"Jinhui Yuan"
] |
Conference
|
spotlight
|
2311.00591
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
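A minimal sketch of the sliding-window idea in the Coop abstract above: among tensors ordered by memory address, find the cheapest contiguous window whose combined size covers an allocation request, so every evicted byte is used (the cost model and units are illustrative):

```python
def best_contiguous_eviction(tensors, request_size):
    """`tensors` are (size, recompute_cost) tuples ordered by memory address.
    Returns the cheapest contiguous window (lo, hi) covering request_size."""
    best, best_cost = None, float("inf")
    lo, size, cost = 0, 0, 0.0
    for hi, (s, c) in enumerate(tensors):
        size += s; cost += c
        # Shrink from the left while the window still covers the request.
        while size - tensors[lo][0] >= request_size:
            size -= tensors[lo][0]; cost -= tensors[lo][1]; lo += 1
        if size >= request_size and cost < best_cost:
            best, best_cost = (lo, hi), cost
    return best, best_cost

tensors = [(4, 1.0), (2, 5.0), (6, 0.5), (3, 2.0)]  # illustrative tensor pool
print(best_contiguous_eviction(tensors, request_size=8))  # ((2, 3), 2.5)
```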
null |
https://openreview.net/forum?id=gmVoaAxB1R
|
@inproceedings{
yarotsky2023structure,
title={Structure of universal formulas},
author={Dmitry Yarotsky},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gmVoaAxB1R}
}
|
By universal formulas we understand parameterized analytic expressions that have a fixed complexity, but nevertheless can approximate any continuous function on a compact set. There exist various examples of such formulas, including some in the form of neural networks. In this paper we analyze the essential structural elements of these highly expressive models. We introduce a hierarchy of expressiveness classes connecting the global approximability property to the weaker property of infinite VC dimension, and prove a series of classification results for several increasingly complex functional families. In particular, we introduce a general family of polynomially-exponentially-algebraic functions that, as we prove, is subject to polynomial constraints. As a consequence, we show that fixed-size neural networks with not more than one layer of neurons having transcendental activations (e.g., sine or standard sigmoid) cannot in general approximate functions on arbitrary finite sets. On the other hand, we give examples of functional families, including two-hidden-layer neural networks, that approximate functions on arbitrary finite sets, but fail to do that on the whole domain of definition.
|
Structure of universal formulas
|
[
"Dmitry Yarotsky"
] |
Conference
|
poster
|
2311.03910
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gjBk6IQofa
|
@inproceedings{
evans2023creating,
title={Creating Multi-Level Skill Hierarchies in Reinforcement Learning},
author={Joshua Benjamin Evans and {\"O}zg{\"u}r {\c{S}}im{\c{s}}ek},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gjBk6IQofa}
}
|
What is a useful skill hierarchy for an autonomous agent? We propose an answer based on a graphical representation of how the interaction between an agent and its environment may unfold. Our approach uses modularity maximisation as a central organising principle to expose the structure of the interaction graph at multiple levels of abstraction. The result is a collection of skills that operate at varying time scales, organised into a hierarchy, where skills that operate over longer time scales are composed of skills that operate over shorter time scales. The entire skill hierarchy is generated automatically, with no human input, including the skills themselves (their behaviour, when they can be called, and when they terminate) as well as the dependency structure between them. In a wide range of environments, this approach generates skill hierarchies that are intuitively appealing and that considerably improve the learning performance of the agent.
|
Creating Multi-Level Skill Hierarchies in Reinforcement Learning
|
[
"Joshua Benjamin Evans",
"Özgür Şimşek"
] |
Conference
|
poster
|
2306.09980
|
[
"https://github.com/bath-reinforcement-learning-lab/louvain-skills-neurips-2023"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=ginTcBUnL8
|
@inproceedings{
dong2023simmtm,
title={Sim{MTM}: A Simple Pre-Training Framework for Masked Time-Series Modeling},
author={Jiaxiang Dong and Haixu Wu and Haoran Zhang and Li Zhang and Jianmin Wang and Mingsheng Long},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ginTcBUnL8}
}
|
Time series analysis is widely used in extensive areas. Recently, to reduce labeling expenses and benefit various tasks, self-supervised pre-training has attracted immense interest. One mainstream paradigm is masked modeling, which successfully pre-trains deep models by learning to reconstruct the masked content based on the unmasked part. However, since the semantic information of time series is mainly contained in temporal variations, the standard way of randomly masking a portion of time points will seriously ruin vital temporal variations of time series, making the reconstruction task too difficult to guide representation learning. We thus present SimMTM, a Simple pre-training framework for Masked Time-series Modeling. By relating masked modeling to manifold learning, SimMTM proposes to recover masked time points by the weighted aggregation of multiple neighbors outside the manifold, which eases the reconstruction task by assembling ruined but complementary temporal variations from multiple masked series. SimMTM further learns to uncover the local structure of the manifold, which is helpful for masked modeling. Experimentally, SimMTM achieves state-of-the-art fine-tuning performance compared to the most advanced time series pre-training methods in two canonical time series analysis tasks: forecasting and classification, covering both in- and cross-domain settings.
|
SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling
|
[
"Jiaxiang Dong",
"Haixu Wu",
"Haoran Zhang",
"Li Zhang",
"Jianmin Wang",
"Mingsheng Long"
] |
Conference
|
spotlight
|
2302.00861
|
[
"https://github.com/thuml/simmtm"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
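A small sketch of the weighted-aggregation idea in the SimMTM abstract above: a series is recovered as a similarity-weighted sum over multiple masked neighbor views rather than from a single masked copy (the softmax weighting and shapes are assumptions for illustration):

```python
import numpy as np

def simmtm_style_reconstruction(masked_views, view_embeds, anchor_embed, tau=0.5):
    """Aggregate several masked views, weighted by embedding similarity to the
    anchor series (a sketch of the idea, not the paper's exact objective)."""
    e = view_embeds / np.linalg.norm(view_embeds, axis=1, keepdims=True)
    a = anchor_embed / np.linalg.norm(anchor_embed)
    w = np.exp(e @ a / tau)
    w /= w.sum()
    return (w[:, None] * masked_views).sum(axis=0)

rng = np.random.default_rng(0)
views = rng.standard_normal((4, 96))   # 4 masked views of a length-96 series
embeds = rng.standard_normal((4, 32))  # their learned representations
anchor = rng.standard_normal(32)
print(simmtm_style_reconstruction(views, embeds, anchor).shape)  # (96,)
```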
null |
https://openreview.net/forum?id=ghzEUGfRMD
|
@inproceedings{
kadra2023scaling,
title={Scaling Laws for Hyperparameter Optimization},
author={Arlind Kadra and Maciej Janowski and Martin Wistuba and Josif Grabocka},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ghzEUGfRMD}
}
|
Hyperparameter optimization is an important subfield of machine learning that focuses on tuning the hyperparameters of a chosen algorithm to achieve peak performance. Recently, there has been a stream of methods that tackle the issue of hyperparameter optimization; however, most of these methods do not exploit the dominant power-law nature of learning curves for Bayesian optimization. In this work, we propose Deep Power Laws (DPL), an ensemble of neural network models conditioned to yield predictions that follow a power-law scaling pattern. Our method dynamically decides which configurations to pause and train incrementally by making use of gray-box evaluations. We compare our method against 7 state-of-the-art competitors on 3 benchmarks related to tabular, image, and NLP datasets covering 59 diverse tasks. Our method achieves the best results across all benchmarks, obtaining the best any-time performance compared to all competitors.
|
Scaling Laws for Hyperparameter Optimization
|
[
"Arlind Kadra",
"Maciej Janowski",
"Martin Wistuba",
"Josif Grabocka"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
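The power-law assumption behind DPL can be illustrated by fitting a saturating power law to a partial learning curve and extrapolating; this sketch uses a plain least-squares fit (requires SciPy) in place of the paper's neural network ensemble:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, y_inf, a, b):
    # Saturating power law commonly assumed for learning curves.
    return y_inf - a * np.power(x, -b)

# Toy partial learning curve: validation accuracy over the first 10 epochs.
epochs = np.arange(1, 11, dtype=float)
acc = 0.9 - 0.5 * epochs ** -0.7 + np.random.default_rng(0).normal(0, 0.005, 10)

params, _ = curve_fit(power_law, epochs, acc, p0=[0.9, 0.5, 0.5], maxfev=10000)
print("predicted accuracy at epoch 50:", power_law(50.0, *params))
```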
null |
https://openreview.net/forum?id=ghIBaprxsV
|
@inproceedings{
yu2023hierarchical,
title={Hierarchical Semi-Implicit Variational Inference with Application to Diffusion Model Acceleration},
author={Longlin Yu and Tianyu Xie and Yu Zhu and Tong Yang and Xiangyu Zhang and Cheng Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ghIBaprxsV}
}
|
Semi-implicit variational inference (SIVI) has been introduced to expand the analytical variational families by defining expressive semi-implicit distributions in a hierarchical manner. However, the single-layer architecture commonly used in current SIVI methods can be insufficient when the target posterior has complicated structures. In this paper, we propose hierarchical semi-implicit variational inference, called HSIVI, which generalizes SIVI to allow more expressive multi-layer construction of semi-implicit distributions. By introducing auxiliary distributions that interpolate between a simple base distribution and the target distribution, the conditional layers can be trained by progressively matching these auxiliary distributions one layer after another. Moreover, given pre-trained score networks, HSIVI can be used to accelerate the sampling process of diffusion models with the score matching objective. We show that HSIVI significantly enhances the expressiveness of SIVI on several Bayesian inference problems with complicated target distributions. When used for diffusion model acceleration, we show that HSIVI can produce high quality samples comparable to or better than the existing fast diffusion model based samplers with a small number of function evaluations on various datasets.
|
Hierarchical Semi-Implicit Variational Inference with Application to Diffusion Model Acceleration
|
[
"Longlin Yu",
"Tianyu Xie",
"Yu Zhu",
"Tong Yang",
"Xiangyu Zhang",
"Cheng Zhang"
] |
Conference
|
poster
|
2310.17153
|
[
"https://github.com/longinyu/hsivi"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gh9JNeqjzo
|
@inproceedings{
fang2023reducing,
title={Reducing Shape-Radiance Ambiguity in Radiance Fields with a Closed-Form Color Estimation Method},
author={Qihang Fang and Yafei Song and Keqiang Li and Liefeng Bo},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gh9JNeqjzo}
}
|
A neural radiance field (NeRF) enables the synthesis of highly realistic novel-view images of a 3D scene. It includes density and color fields to model the shape and radiance of a scene, respectively. Supervised by the photometric loss in an end-to-end training manner, NeRF inherently suffers from the shape-radiance ambiguity problem, i.e., it can perfectly fit training views but does not guarantee decoupling the two fields correctly. To deal with this issue, existing works have incorporated prior knowledge to provide an independent supervision signal for the density field, including total variation loss, sparsity loss, distortion loss, etc. These losses are based on general assumptions about the density field, e.g., it should be smooth, sparse, or compact, which are not adaptive to a specific scene. In this paper, we propose a more adaptive method to reduce the shape-radiance ambiguity. The key is a rendering method that is only based on the density field. Specifically, we first estimate the color field based on the density field and posed images in a closed form. Then NeRF's rendering process can proceed. We address the problems in estimating the color field, including occlusion and non-uniformly distributed views. Afterwards, it is applied to regularize NeRF's density field. As our regularization is guided by photometric loss, it is more adaptive compared to existing ones. Experimental results show that our method improves the density field of NeRF both qualitatively and quantitatively. Our code is available at https://github.com/qihangGH/Closed-form-color-field.
|
Reducing Shape-Radiance Ambiguity in Radiance Fields with a Closed-Form Color Estimation Method
|
[
"Qihang Fang",
"Yafei Song",
"Keqiang Li",
"Liefeng Bo"
] |
Conference
|
poster
|
2312.12726
|
[
"https://github.com/qihanggh/closed-form-color-field"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gf5xJVQS5p
|
@inproceedings{
li2023learning,
title={Learning to Configure Separators in Branch-and-Cut},
author={Sirui Li and Wenbin Ouyang and Max B. Paulus and Cathy Wu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gf5xJVQS5p}
}
|
Cutting planes are crucial in solving mixed integer linear programs (MILP) as they facilitate bound improvements on the optimal solution. Modern MILP solvers rely on a variety of separators to generate a diverse set of cutting planes by invoking the separators frequently during the solving process. This work identifies that MILP solvers can be drastically accelerated by appropriately selecting separators to activate. As the combinatorial separator selection space imposes challenges for machine learning, we *learn to separate* by proposing a novel data-driven strategy to restrict the selection space and a learning-guided algorithm on the restricted space. Our method predicts instance-aware separator configurations which can dynamically adapt during the solve, effectively accelerating the open source MILP solver SCIP by improving the relative solve time by up to 72% and 37% on synthetic and real-world MILP benchmarks. Our work complements recent work on learning to select cutting planes and highlights the importance of separator management.
|
Learning to Configure Separators in Branch-and-Cut
|
[
"Sirui Li",
"Wenbin Ouyang",
"Max B. Paulus",
"Cathy Wu"
] |
Conference
|
poster
|
2311.05650
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gevmGxsTSI
|
@inproceedings{
yuan2023learning,
title={Learning From Biased Soft Labels},
author={Hua Yuan and Yu Shi and Ning Xu and Xu Yang and Xin Geng and Yong Rui},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gevmGxsTSI}
}
|
Since the advent of knowledge distillation, many researchers have been intrigued by the $\textit{dark knowledge}$ hidden in the soft labels generated by the teacher model. This prompts us to scrutinize the circumstances under which these soft labels are effective. Predominant existing theories implicitly require that the soft labels are close to the ground-truth labels. In this paper, however, we investigate whether biased soft labels are still effective. Here, bias refers to the discrepancy between the soft labels and the ground-truth labels. We present two indicators to measure the effectiveness of the soft labels. Based on the two indicators, we propose moderate conditions to ensure that the biased soft label learning problem is both $\textit{classifier-consistent}$ and $\textit{Empirical Risk Minimization}$ (ERM) $\textit{learnable}$; these conditions are applicable even to heavily biased soft labels. We further design a heuristic method to train Skillful but Bad Teachers (SBTs), and these teachers with accuracy less than 30\% can teach students to achieve accuracy over 90\% on CIFAR-10, which is comparable to models trained on the original data. The proposed indicators adequately measure the effectiveness of the soft labels generated in this process. Moreover, our theoretical framework can be adapted to elucidate the effectiveness of soft labels in three weakly-supervised learning paradigms, namely incomplete supervision, partial label learning, and learning with additive noise. Experimental results demonstrate that our indicators can measure the effectiveness of biased soft labels generated by teachers or in these weakly-supervised learning paradigms.
|
Learning From Biased Soft Labels
|
[
"Hua Yuan",
"Yu Shi",
"Ning Xu",
"Xu Yang",
"Xin Geng",
"Yong Rui"
] |
Conference
|
poster
|
2302.08155
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=geLARFEK8O
|
@inproceedings{
zhou2023combating,
title={Combating Representation Learning Disparity with Geometric Harmonization},
author={Zhihan Zhou and Jiangchao Yao and Feng Hong and Ya Zhang and Bo Han and Yanfeng Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=geLARFEK8O}
}
|
Self-supervised learning (SSL) as an effective paradigm of representation learning has achieved tremendous success on various curated datasets in diverse scenarios. Nevertheless, when facing the long-tailed distribution in real-world applications, it is still hard for existing methods to capture transferable and robust representations. This is because vanilla SSL methods that pursue sample-level uniformity easily lead to representation learning disparity, where head classes with huge sample numbers dominate the feature regime while tail classes with small sample numbers passively collapse. To address this problem, we propose a novel Geometric Harmonization (GH) method to encourage category-level uniformity in representation learning, which is more benign to the minority classes and barely hurts the majority under long-tailed distributions. Specifically, GH measures the population statistics of the embedding space on top of self-supervised learning, and then infers a fine-grained instance-wise calibration to constrain the space expansion of head classes and avoid the passive collapse of tail classes. Our proposal does not alter the setting of SSL and can be easily integrated into existing methods in a low-cost manner. Extensive results on a range of benchmark datasets show the effectiveness of GH, with high tolerance to distribution skewness.
|
Combating Representation Learning Disparity with Geometric Harmonization
|
[
"Zhihan Zhou",
"Jiangchao Yao",
"Feng Hong",
"Ya Zhang",
"Bo Han",
"Yanfeng Wang"
] |
Conference
|
spotlight
|
2310.17622
|
[
"https://github.com/MediaBrain-SJTU/Geometric-Harmonization"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gdzxWGGxWE
|
@inproceedings{
zeno2023how,
title={How do Minimum-Norm Shallow Denoisers Look in Function Space?},
author={Chen Zeno and Greg Ongie and Yaniv Blumenfeld and Nir Weinberger and Daniel Soudry},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gdzxWGGxWE}
}
|
Neural network (NN) denoisers are an essential building block in many common tasks, ranging from image reconstruction to image generation. However, the success of these models is not well understood from a theoretical perspective. In this paper, we aim to characterize the functions realized by shallow ReLU NN denoisers --- in the common theoretical setting of interpolation (i.e., zero training loss) with a minimal representation cost (i.e., minimal $\ell^2$ norm weights). First, for univariate data, we derive a closed form for the NN denoiser function, find it is contractive toward the clean data points, and prove it generalizes better than the empirical MMSE estimator at a low noise level. Next, for multivariate data, we find the NN denoiser functions in a closed form under various geometric assumptions on the training data: data contained in a low-dimensional subspace, data contained in a union of one-sided rays, or several types of simplexes. These functions decompose into a sum of simple rank-one piecewise linear interpolations aligned with edges and/or faces connecting training samples.
We empirically verify this alignment phenomenon on synthetic data and real images.
|
How do Minimum-Norm Shallow Denoisers Look in Function Space?
|
[
"Chen Zeno",
"Greg Ongie",
"Yaniv Blumenfeld",
"Nir Weinberger",
"Daniel Soudry"
] |
Conference
|
poster
|
2311.06748
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gdwcoBCMVi
|
@inproceedings{
gong2023xtrimogene,
title={xTrimoGene: An Efficient and Scalable Representation Learner for Single-Cell {RNA}-Seq Data},
author={Jing Gong and Minsheng Hao and Xingyi Cheng and Xin Zeng and Chiming Liu and Jianzhu Ma and Xuegong Zhang and Taifeng Wang and Le Song},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gdwcoBCMVi}
}
|
Advances in high-throughput sequencing technology have led to significant progress in measuring gene expressions at the single-cell level. The amount of publicly available single-cell RNA-seq (scRNA-seq) data already surpasses 50M records for humans, with each record measuring 20,000 genes. This highlights the need for unsupervised representation learning to fully ingest these data, yet classical transformer architectures are prohibitively expensive to train on such data in terms of both computation and memory. To address this challenge, we propose a novel asymmetric encoder-decoder transformer for scRNA-seq data, called xTrimoGene$^\alpha$ (or xTrimoGene for short), which leverages the sparse characteristic of the data to scale up the pre-training. This scalable design of xTrimoGene reduces FLOPs by one to two orders of magnitude compared to classical transformers while maintaining high accuracy, enabling us to train the largest transformer models over the largest scRNA-seq dataset today. Our experiments also show that the performance of xTrimoGene improves as we scale up the model sizes, and it also leads to SOTA performance over various downstream tasks, such as cell type annotation, perturb-seq effect prediction, and drug combination prediction.
xTrimoGene model is now available for use as a service via the following link: https://api.biomap.com/xTrimoGene/apply.
|
xTrimoGene: An Efficient and Scalable Representation Learner for Single-Cell RNA-Seq Data
|
[
"Jing Gong",
"Minsheng Hao",
"Xingyi Cheng",
"Xin Zeng",
"Chiming Liu",
"Jianzhu Ma",
"Xuegong Zhang",
"Taifeng Wang",
"Le Song"
] |
Conference
|
poster
|
2311.15156
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gdVcFOvxT3
|
@inproceedings{
cohen2023finding,
title={Finding Safe Zones of Markov Decision Processes Policies},
author={Lee Cohen and Yishay Mansour and Michal Moshkovitz},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gdVcFOvxT3}
}
|
Given a policy of a Markov Decision Process, we define a SafeZone as a subset of states, such that most of the policy's trajectories are confined to this subset. The quality of a SafeZone is parameterized by the number of states and the escape probability, i.e., the probability that a random trajectory will leave the subset. SafeZones are especially interesting when they have a small number of states and low escape probability. We study the complexity of finding optimal SafeZones, and show that in general, the problem is computationally hard. For this reason, we concentrate on finding approximate SafeZones. Our main result is a bi-criteria approximation learning algorithm with an approximation factor of almost $2$ for both the escape probability and SafeZone size, with polynomial sample complexity.
|
Finding Safe Zones of Markov Decision Processes Policies
|
[
"Lee Cohen",
"Yishay Mansour",
"Michal Moshkovitz"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=gd20oaZqqF
|
@inproceedings{
zhu2023towards,
title={Towards Optimal Caching and Model Selection for Large Model Inference},
author={Banghua Zhu and Ying Sheng and Lianmin Zheng and Clark Barrett and Michael Jordan and Jiantao Jiao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gd20oaZqqF}
}
|
Large Language Models (LLMs) and other large foundation models have achieved impressive results, but their size exacerbates existing resource consumption and latency challenges. In particular, the large-scale deployment of these models is hindered by the significant resource requirements during inference. In this paper, we study two approaches for mitigating these challenges: employing a cache to store previous queries and learning a model selector to choose from an ensemble of models for query processing.
Theoretically, we provide an optimal algorithm for jointly optimizing both approaches to reduce the inference cost in both offline and online tabular settings.
By combining a caching algorithm, namely Greedy Dual Size with Frequency (GDSF) or Least Expected Cost (LEC), with a model selector, we achieve optimal rates in both offline and online settings. Empirically, simulations show that our caching and model selection algorithm greatly improves over the baselines, with up to $50\times$ improvement over the baseline when the ratio between the maximum cost and minimum cost is $100$. Experiments on real datasets show a $4.3\times$ improvement in FLOPs over the baseline when the ratio for FLOPs is $10$, and a $1.8\times$ improvement in latency when the ratio for average latency is $1.85$.
|
Towards Optimal Caching and Model Selection for Large Model Inference
|
[
"Banghua Zhu",
"Ying Sheng",
"Lianmin Zheng",
"Clark Barrett",
"Michael Jordan",
"Jiantao Jiao"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
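A compact sketch of the GDSF policy mentioned above: each cached entry carries priority L + frequency * cost / size, where the clock L inflates to the priority of the last eviction (sizes and costs are in abstract units; this is only the caching half, not the paper's joint caching-plus-model-selection algorithm):

```python
import heapq

class GDSFCache:
    """Greedy Dual Size with Frequency, illustrative implementation."""
    def __init__(self, capacity):
        self.capacity, self.used, self.L = capacity, 0, 0.0
        self.entries = {}  # key -> (freq, cost, size, priority)
        self.heap = []     # (priority, key), lazily invalidated

    def _push(self, key):
        freq, cost, size, _ = self.entries[key]
        pr = self.L + freq * cost / size
        self.entries[key] = (freq, cost, size, pr)
        heapq.heappush(self.heap, (pr, key))

    def access(self, key, cost, size):
        if key in self.entries:
            freq, c, s, _ = self.entries[key]
            self.entries[key] = (freq + 1, c, s, 0.0)
            self._push(key)
            return True  # hit: cost saved
        while self.used + size > self.capacity and self.heap:
            pr, victim = heapq.heappop(self.heap)
            if victim in self.entries and self.entries[victim][3] == pr:
                self.L = pr  # inflate the clock to the evicted priority
                self.used -= self.entries.pop(victim)[2]
        self.entries[key] = (1, cost, size, 0.0)
        self.used += size
        self._push(key)
        return False  # miss: pay `cost`

cache = GDSFCache(capacity=10)
for q, cost, size in [("a", 5, 4), ("b", 1, 4), ("a", 5, 4), ("c", 8, 4)]:
    print(q, "hit" if cache.access(q, cost, size) else "miss")
```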
null |
https://openreview.net/forum?id=gbhixjg2dX
|
@inproceedings{
agarwal2023synthetic,
title={Synthetic Combinations: A Causal Inference Framework for Combinatorial Interventions},
author={Abhineet Agarwal and Anish Agarwal and Suhas Vijaykumar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gbhixjg2dX}
}
|
We consider a setting where there are $N$ heterogeneous units and $p$ interventions. Our goal is to learn unit-specific potential outcomes for any combination of these $p$ interventions, i.e., $N \times 2^p$ causal parameters. Choosing a combination of interventions is a problem that naturally arises in a variety of applications such as factorial design experiments and recommendation engines (e.g., showing a set of movies that maximizes engagement for a given user). Running $N \times 2^p$ experiments to estimate the various parameters is likely expensive and/or infeasible as $N$ and $p$ grow. Further, with observational data there is likely confounding, i.e., whether or not a unit is seen under a combination is correlated with its potential outcome under that combination. We study this problem under a novel model that imposes latent structure across both units and combinations of interventions. Specifically, we assume latent similarity in potential outcomes across units (i.e., the matrix of potential outcomes is approximately rank $r$) and regularity in how combinations of interventions interact (i.e., the coefficients in the Fourier expansion of the potential outcomes are approximately $s$-sparse). We establish identification for all $N \times 2^p$ parameters despite unobserved confounding. We propose an estimation procedure, Synthetic Combinations, and establish finite-sample consistency under precise conditions on the observation pattern. We show that Synthetic Combinations is able to consistently estimate unit-specific potential outcomes given a total of $\text{poly}(r) \times \left( N + s^2p\right)$ observations. In comparison, previous methods that do not exploit structure across both units and combinations have poorer sample complexity scaling as $\min(N \times s^2p, \ \ r \times (N + 2^p))$.
|
Synthetic Combinations: A Causal Inference Framework for Combinatorial Interventions
|
[
"Abhineet Agarwal",
"Anish Agarwal",
"Suhas Vijaykumar"
] |
Conference
|
poster
|
2303.14226
|
[
"https://github.com/aagarwal1996/synth_combo"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gbOukzirpK
|
@inproceedings{
jiang2023objectcentric,
title={Object-Centric Slot Diffusion},
author={Jindong Jiang and Fei Deng and Gautam Singh and Sungjin Ahn},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gbOukzirpK}
}
|
The recent success of transformer-based image generative models in object-centric learning highlights the importance of powerful image generators for handling complex scenes. However, despite the high expressiveness of diffusion models in image generation, their integration into object-centric learning remains largely unexplored. In this paper, we explore the feasibility and potential of integrating diffusion models into object-centric learning and investigate the pros and cons of this approach. We introduce Latent Slot Diffusion (LSD), a novel model that serves dual purposes: it is the first object-centric learning model to replace conventional slot decoders with a latent diffusion model conditioned on object slots, and it is also the first unsupervised compositional conditional diffusion model that operates without the need for supervised annotations like text. Through experiments on various object-centric tasks, including the first application of the FFHQ dataset in this field, we demonstrate that LSD significantly outperforms state-of-the-art transformer-based decoders, particularly in more complex scenes, and exhibits superior unsupervised compositional generation quality. In addition, we conduct a preliminary investigation into the integration of pre-trained diffusion models in LSD and demonstrate its effectiveness in real-world image segmentation and generation. Project page is available at https://latentslotdiffusion.github.io
|
Object-Centric Slot Diffusion
|
[
"Jindong Jiang",
"Fei Deng",
"Gautam Singh",
"Sungjin Ahn"
] |
Conference
|
spotlight
|
2303.10834
|
[
"https://github.com/jindongjiang/latent-slot-diffusion"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=ganlU27uvj
|
@inproceedings{
qi2023slotguided,
title={Slot-guided Volumetric Object Radiance Fields},
author={DI QI and Tong Yang and Xiangyu Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ganlU27uvj}
}
|
We present a novel framework for 3D object-centric representation learning. Our approach effectively decomposes complex scenes into individual objects from a single image in an unsupervised fashion. This method, called \underline{s}lot-guided \underline{V}olumetric \underline{O}bject \underline{R}adiance \underline{F}ields~(sVORF), composes volumetric object radiance fields with object slots as a guidance to implement unsupervised 3D scene decomposition. Specifically, sVORF obtains object slots from a single image via a transformer module, maps these slots to volumetric object radiance fields with a hypernetwork and composes object radiance fields with the guidance of object slots at a 3D location. Moreover, sVORF significantly reduces memory requirement due to small-sized pixel rendering during training. We demonstrate the effectiveness of our approach by showing top results in scene decomposition and generation tasks of complex synthetic datasets (e.g., Room-Diverse). Furthermore, we also confirm the potential of sVORF to segment objects in real-world scenes (e.g., the LLFF dataset). We hope our approach can provide preliminary understanding of the physical world and help ease future research in 3D object-centric representation learning.
|
Slot-guided Volumetric Object Radiance Fields
|
[
"DI QI",
"Tong Yang",
"Xiangyu Zhang"
] |
Conference
|
poster
|
2401.02241
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gaktiSjatl
|
@inproceedings{
xu2023semiimplicit,
title={Semi-Implicit Denoising Diffusion Models ({SIDDM}s)},
author={yanwu xu and Mingming Gong and Shaoan Xie and Wei Wei and Matthias Grundmann and kayhan Batmanghelich and Tingbo Hou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gaktiSjatl}
}
|
Despite the proliferation of generative models, achieving fast sampling during inference without compromising sample diversity and quality remains challenging. Existing models such as Denoising Diffusion Probabilistic Models (DDPM) deliver high-quality, diverse samples but are slowed by an inherently high number of iterative steps. The Denoising Diffusion Generative Adversarial Networks (DDGAN) attempted to circumvent this limitation by integrating a GAN model for larger jumps in the diffusion process. However, DDGAN encountered scalability limitations when applied to large datasets. To address these limitations, we introduce a novel approach that tackles the problem by matching implicit and explicit factors. More specifically, our approach involves utilizing an implicit model to match the marginal distributions of noisy data and the explicit conditional distribution of the forward diffusion. This combination allows us to effectively match the joint denoising distributions. Unlike DDPM but similar to DDGAN, we do not enforce a parametric distribution for the reverse step, enabling us to take large steps during inference. Similar to the DDPM but unlike DDGAN, we take advantage of the exact form of the diffusion process. We demonstrate that our proposed method obtains comparable generative performance to diffusion-based models and vastly superior results to models with a small number of sampling steps.
|
Semi-Implicit Denoising Diffusion Models (SIDDMs)
|
[
"yanwu xu",
"Mingming Gong",
"Shaoan Xie",
"Wei Wei",
"Matthias Grundmann",
"kayhan Batmanghelich",
"Tingbo Hou"
] |
Conference
|
poster
|
2306.12511
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gaXAjtHic2
|
@inproceedings{
wu2023on,
title={On Private and Robust Bandits},
author={Yulian Wu and Xingyu Zhou and Youming Tao and Di Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gaXAjtHic2}
}
|
We study private and robust multi-armed bandits (MABs), where the agent receives Huber's contaminated heavy-tailed rewards and meanwhile needs to ensure differential privacy. We consider both the finite $k$-th raw moment and the finite $k$-th central moment settings for heavy-tailed rewards distributions with $k\ge 2$. We first present its minimax lower bound, characterizing the information-theoretic limit of regret with respect to privacy budget, contamination level, and heavy-tailedness. Then, we propose a meta-algorithm that builds on a private and robust mean estimation sub-routine \texttt{PRM} that essentially relies on reward truncation and the Laplace mechanism. For the above two different heavy-tailed settings, we give corresponding schemes of \texttt{PRM}, which enable us to achieve nearly-optimal regrets. Moreover, our two proposed truncation-based or histogram-based \texttt{PRM} schemes achieve the optimal trade-off between estimation accuracy, privacy and robustness. Finally, we support our theoretical results and show the effectiveness of our algorithms with experimental studies.
|
On Private and Robust Bandits
|
[
"Yulian Wu",
"Xingyu Zhou",
"Youming Tao",
"Di Wang"
] |
Conference
|
poster
|
2302.02526
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
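The truncation-based `PRM` sub-routine described above can be sketched as clip-then-average with Laplace noise calibrated to the clipped sensitivity; the truncation level and its schedule are assumptions here, not the paper's tuned values:

```python
import numpy as np

def private_robust_mean(rewards, eps, trunc):
    """Clip heavy-tailed rewards to [-trunc, trunc], average, and add Laplace
    noise scaled to the clipped sensitivity 2*trunc/n for eps-DP (a sketch)."""
    n = len(rewards)
    clipped = np.clip(rewards, -trunc, trunc)
    noise = np.random.default_rng(0).laplace(scale=2 * trunc / (n * eps))
    return clipped.mean() + noise

rewards = np.random.default_rng(1).standard_t(df=3, size=500)  # heavy-tailed
print(private_robust_mean(rewards, eps=1.0, trunc=5.0))
```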
null |
https://openreview.net/forum?id=gYetLsNO8x
|
@inproceedings{
wang2023best,
title={Best Arm Identification with Fixed Budget: A Large Deviation Perspective},
author={Po-An Wang and Ruo-Chun Tzeng and Alexandre Proutiere},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gYetLsNO8x}
}
|
We consider the problem of identifying the best arm in stochastic Multi-Armed Bandits (MABs) using a fixed sampling budget. Characterizing the minimal instance-specific error probability for this problem constitutes one of the important remaining open problems in MABs. When arms are selected using a static sampling strategy, the error probability decays exponentially with the number of samples at a rate that can be explicitly derived via Large Deviation techniques. Analyzing the performance of algorithms with adaptive sampling strategies is however much more challenging. In this paper, we establish a connection between the Large Deviation Principle (LDP) satisfied by the empirical proportions of arm draws and that satisfied by the empirical arm rewards. This connection holds for any adaptive algorithm, and is leveraged (i) to improve error probability upper bounds of some existing algorithms, such as the celebrated SR (Successive Rejects) algorithm \cite{audibert2010best}, and (ii) to devise and analyze new algorithms. In particular, we present CR (Continuous Rejects), a truly adaptive algorithm that can reject arms in {\it any} round based on the observed empirical gaps between the rewards of various arms. Applying our Large Deviation results, we prove that CR enjoys better performance guarantees than existing algorithms, including SR. Extensive numerical experiments confirm this observation.
|
Best Arm Identification with Fixed Budget: A Large Deviation Perspective
|
[
"Po-An Wang",
"Ruo-Chun Tzeng",
"Alexandre Proutiere"
] |
Conference
|
spotlight
|
2312.12137
|
[
"https://github.com/rctzeng/neurips2023-cr"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
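For reference, here is a sketch of the SR (Successive Rejects) baseline named in the abstract above: the budget is split into K-1 phases, all surviving arms are pulled equally within a phase, and the empirically worst arm is rejected at each phase end (Gaussian rewards are assumed for illustration):

```python
import numpy as np

def successive_rejects(means, budget, seed=0):
    """Successive Rejects for best-arm identification with a fixed budget."""
    rng = np.random.default_rng(seed)
    K = len(means)
    log_bar = 0.5 + sum(1.0 / i for i in range(2, K + 1))
    alive = list(range(K))
    counts, sums = np.zeros(K), np.zeros(K)
    n_prev = 0
    for k in range(1, K):
        n_k = int(np.ceil((budget - K) / (log_bar * (K + 1 - k))))
        for arm in alive:
            pulls = n_k - n_prev  # top each surviving arm up to n_k total pulls
            sums[arm] += rng.normal(means[arm], 1.0, pulls).sum()
            counts[arm] += pulls
        n_prev = n_k
        worst = min(alive, key=lambda a: sums[a] / counts[a])
        alive.remove(worst)  # reject the empirically worst arm
    return alive[0]

print(successive_rejects([0.1, 0.2, 0.5, 0.45], budget=2000))
```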
null |
https://openreview.net/forum?id=gYWjI7wLhc
|
@inproceedings{
vauvelle2023differentiable,
title={Differentiable sorting for censored time-to-event data.},
author={Andre Vauvelle and Benjamin Wild and Roland Eils and Spiros Denaxas},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gYWjI7wLhc}
}
|
Survival analysis is a crucial semi-supervised task in machine learning with significant real-world applications, especially in healthcare. The most common approach to survival analysis, Cox’s partial likelihood, can be interpreted as a ranking model optimized on a lower bound of the concordance index. We follow these connections further, with listwise ranking losses that allow for a relaxation of the pairwise independence assumption. Given the inherent transitivity of ranking, we explore differentiable sorting networks as a means to introduce a stronger transitive inductive bias during optimization. Despite their potential, current differentiable sorting methods cannot account for censoring, a crucial aspect of many real-world datasets. We propose a novel method, Diffsurv, to overcome this limitation by extending differentiable sorting methods to handle censored tasks. Diffsurv predicts matrices of possible permutations that accommodate the label uncertainty introduced by censored samples. Our experiments reveal that Diffsurv outperforms established baselines in various simulated and real-world risk prediction scenarios. Furthermore, we demonstrate the algorithmic advantages of Diffsurv by presenting a novel method for top-k risk prediction that surpasses current methods.
|
Differentiable sorting for censored time-to-event data.
|
[
"Andre Vauvelle",
"Benjamin Wild",
"Roland Eils",
"Spiros Denaxas"
] |
Conference
|
poster
|
[
"https://github.com/andre-vauvelle/diffsurv-ea"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=gVLKXT9JwG
|
@inproceedings{
bao2023global,
title={Global Convergence Analysis of Local {SGD} for Two-layer Neural Network without Overparameterization},
author={Yajie Bao and Amarda Shehu and Mingrui Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gVLKXT9JwG}
}
|
Local SGD, a cornerstone algorithm in federated learning, is widely used in training deep neural networks and shown to have strong empirical performance. A theoretical understanding of such performance on nonconvex loss landscapes is currently lacking. Analysis of the global convergence of SGD is challenging, as the noise depends on the model parameters. Indeed, many works narrow their focus to GD and rely on injecting noise to enable convergence to the local or global optimum. When expanding the focus to local SGD, existing analyses in the nonconvex case can only guarantee finding stationary points or assume the neural network is overparameterized so as to guarantee convergence to the global minimum through neural tangent kernel analysis. In this work, we provide the first global convergence analysis of the vanilla local SGD for two-layer neural networks \emph{without overparameterization} and \textit{without injecting noise}, when the input data is Gaussian. The main technical ingredients of our proof are \textit{a self-correction mechanism} and \textit{a new exact recursive characterization of the direction of global model parameters}. The self-correction mechanism guarantees the algorithm reaches a good region even if the initialization is in a bad region. A good (bad) region means updating the model by gradient descent will move closer to (away from) the optimal solution. The main difficulty in establishing a self-correction mechanism is to cope with the gradient dependency between two layers. To address this challenge, we divide the landscape of the objective into several regions to carefully control the interference of two layers during the correction process. As a result, we show that local SGD can correct the two layers and enter the good region in polynomial time. After that, we establish a new exact recursive characterization of the direction of global parameters, which is the key to showing convergence to the global minimum with linear speedup in the number of machines and reduced communication rounds. Experiments on synthetic data confirm theoretical results.
|
Global Convergence Analysis of Local SGD for Two-layer Neural Network without Overparameterization
|
[
"Yajie Bao",
"Amarda Shehu",
"Mingrui Liu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=gUlcyeHzw1
|
@inproceedings{
krainovic2023learning,
title={Learning Provably Robust Estimators for Inverse Problems via Jittering},
author={Anselm Krainovic and Mahdi Soltanolkotabi and Reinhard Heckel},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gUlcyeHzw1}
}
|
Deep neural networks provide excellent performance for inverse problems such as denoising. However, neural networks can be sensitive to adversarial or worst-case perturbations. This raises the question of whether such networks can be trained efficiently to be worst-case robust. In this paper, we investigate whether jittering, a simple regularization technique that adds isotropic Gaussian noise during training, is effective for learning worst-case robust estimators for inverse problems. While well studied for prediction in classification tasks, the effectiveness of jittering for inverse problems has not been systematically investigated. Here, we present a novel analytical characterization of the optimal $\ell_2$-worst-case robust estimator for linear denoising and show that jittering yields optimal robust denoisers. Furthermore, we examine jittering empirically via training deep neural networks (U-nets) for natural image denoising, deconvolution, and accelerated magnetic resonance imaging (MRI). The results show that jittering significantly enhances worst-case robustness, but can be suboptimal for inverse problems beyond denoising. Moreover, our results imply that training on real data, which often contains slight noise, somewhat enhances robustness.
|
Learning Provably Robust Estimators for Inverse Problems via Jittering
|
[
"Anselm Krainovic",
"Mahdi Soltanolkotabi",
"Reinhard Heckel"
] |
Conference
|
poster
|
2307.12822
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
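Jittering as described above amounts to adding extra isotropic Gaussian noise to the network input at every training step; a minimal sketch on a toy denoiser (the architecture, noise levels, and `jitter_std` are illustrative choices):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
jitter_std = 0.1  # hypothetical jitter level

for step in range(100):
    clean = torch.randn(32, 64)                     # stand-in for clean signals
    noisy = clean + 0.05 * torch.randn_like(clean)  # measurement noise
    jittered = noisy + jitter_std * torch.randn_like(noisy)  # jittering
    loss = ((net(jittered) - clean) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```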
null |
https://openreview.net/forum?id=gUTVpByfVX
|
@inproceedings{
prabhudesai2023testtime,
title={Test-time Adaptation of Discriminative Models via Diffusion Generative Feedback},
author={Mihir Prabhudesai and Tsung-Wei Ke and Alexander Cong Li and Deepak Pathak and Katerina Fragkiadaki},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gUTVpByfVX}
}
|
The advancements in generative modeling, particularly the advent of diffusion models, have sparked a fundamental question: how can these models be effectively used for discriminative tasks? In this work, we find that generative models can be great test-time adapters for discriminative models. Our method, Diffusion-TTA, adapts pre-trained discriminative models such as image classifiers, segmenters and depth predictors, to each unlabelled example in the test set using generative feedback from a diffusion model. We achieve this by modulating the conditioning of the diffusion model using the output of the discriminative model. We then maximize the image likelihood objective by backpropagating the gradients to the discriminative model's parameters. We show Diffusion-TTA significantly enhances the accuracy of various large-scale pre-trained discriminative models, such as ImageNet classifiers, CLIP models, image pixel labellers and image depth predictors. Diffusion-TTA outperforms existing test-time adaptation methods, including TTT-MAE and TENT, and particularly shines in online adaptation setups, where the discriminative model is continually adapted to each example in the test set. We provide access to code, results, and visualizations on our website: diffusion-tta.github.io/
|
Diffusion-TTA: Test-time Adaptation of Discriminative Models via Generative Feedback
|
[
"Mihir Prabhudesai",
"Tsung-Wei Ke",
"Alexander Cong Li",
"Deepak Pathak",
"Katerina Fragkiadaki"
] |
Conference
|
poster
|
2311.16102
|
[
""
] |
https://huggingface.co/papers/2311.16102
| 0 | 0 | 0 | 5 | 1 |
[] |
[] |
[] |
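A toy sketch of the generative-feedback loop described above, with simplified shapes: the classifier's softmax mixes per-class condition embeddings, a conditional denoiser predicts the injected noise, and the diffusion loss is backpropagated into the classifier for one unlabeled test input (all modules here are stand-ins, not the paper's pre-trained models):

```python
import torch
import torch.nn as nn

n_classes, d = 10, 64
classifier = nn.Linear(d, n_classes)
class_embed = nn.Embedding(n_classes, d)
denoiser = nn.Sequential(nn.Linear(2 * d, 128), nn.ReLU(), nn.Linear(128, d))
opt = torch.optim.SGD(classifier.parameters(), lr=1e-3)  # adapt classifier only

x = torch.randn(1, d)  # unlabeled test "image"
for _ in range(10):
    probs = classifier(x).softmax(dim=-1)   # (1, n_classes)
    cond = probs @ class_embed.weight       # probability-weighted conditioning
    eps = torch.randn_like(x)
    x_t = 0.7 * x + 0.3 * eps               # toy fixed noise level
    loss = ((denoiser(torch.cat([x_t, cond], dim=-1)) - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```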
null |
https://openreview.net/forum?id=gUEekxYr6D
|
@inproceedings{
fan2023bislssps,
title={Bi{SLS}/{SPS}: Auto-tune Step Sizes for Stable Bi-level Optimization},
author={Chen Fan and Gaspard Chon{\'e}-Ducasse and Mark Schmidt and Christos Thrampoulidis},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gUEekxYr6D}
}
|
The popularity of bi-level optimization (BO) in deep learning has spurred a growing interest in studying gradient-based BO algorithms.
However, existing algorithms involve two coupled learning rates that can be affected by approximation errors when computing hypergradients, making careful fine-tuning necessary to ensure fast convergence. To alleviate this issue, we investigate the use of recently proposed adaptive step-size methods, namely stochastic line search (SLS) and stochastic Polyak step size (SPS), for computing both the upper- and lower-level learning rates. First, we revisit the use of SLS and SPS in single-level optimization without the additional interpolation condition that is typically assumed in prior works. For such settings, we investigate new variants of SLS and SPS that improve upon existing suggestions in the literature and are simpler to implement. Importantly, these two variants can be seen as special instances of a general family of methods with an envelope-type step size. This unified envelope strategy allows for the extension of the algorithms and their convergence guarantees to BO settings. Finally, our extensive experiments demonstrate that the new algorithms, which are available in both SGD and Adam versions, can find large learning rates with minimal tuning and converge faster than the corresponding vanilla SGD or Adam BO algorithms that require fine-tuning.
|
BiSLS/SPS: Auto-tune Step Sizes for Stable Bi-level Optimization
|
[
"Chen Fan",
"Gaspard Choné-Ducasse",
"Mark Schmidt",
"Christos Thrampoulidis"
] |
Conference
|
poster
|
2305.18666
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
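The stochastic Polyak step size referenced above has a simple closed form per sample; a sketch on a least-squares problem, using an envelope-style cap `gamma_max` and assuming the per-sample optimum f_i* = 0:

```python
import numpy as np

def sps_step(x, a_i, b_i, c=0.5, gamma_max=1.0):
    """One SPS update on a single least-squares sample:
    gamma = min(gamma_max, (f_i(x) - f_i*) / (c * ||grad f_i(x)||^2))."""
    r = a_i @ x - b_i
    f_i = 0.5 * r ** 2
    g = r * a_i
    gamma = min(gamma_max, f_i / (c * (g @ g) + 1e-12))
    return x - gamma * g

rng = np.random.default_rng(0)
A, x_true = rng.standard_normal((200, 10)), rng.standard_normal(10)
b = A @ x_true
x = np.zeros(10)
for t in range(2000):
    i = rng.integers(200)
    x = sps_step(x, A[i], b[i])
print("error:", np.linalg.norm(x - x_true))
```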
null |
https://openreview.net/forum?id=gThGBHhqcU
|
@inproceedings{
dinh2023rethinking,
title={Rethinking Conditional Diffusion Sampling with Progressive Guidance},
author={Anh-Dung Dinh and Daochang Liu and Chang Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gThGBHhqcU}
}
|
This paper tackles two critical challenges encountered in classifier guidance for diffusion generative models, i.e., the lack of diversity and the presence of adversarial effects. These issues often result in a scarcity of diverse samples or the generation of non-robust features. The underlying cause lies in the mechanism of classifier guidance, where discriminative gradients aggressively push samples toward being recognized as the given condition. This inadvertently suppresses information with common features among relevant classes, resulting in a limited pool of features with less diversity or the absence of robust features for image construction. We propose a generalized classifier guidance method called Progressive Guidance, which mitigates these problems by allowing relevant classes' gradients to contribute to shared information construction when the image is noisy in early sampling steps. In the later sampling stage, we progressively enhance gradients to refine the details in the image toward the primary condition. This helps to attain a high level of diversity and robustness compared to the vanilla classifier guidance. Experimental results demonstrate that our proposed method further improves the image quality while offering a significant level of diversity as well as robust features.
|
Rethinking Conditional Diffusion Sampling with Progressive Guidance
|
[
"Anh-Dung Dinh",
"Daochang Liu",
"Chang Xu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=gQ4h6WvME0
|
@inproceedings{
watkins2023optimistic,
title={Optimistic Rates for Multi-Task Representation Learning},
author={Austin Watkins and Enayat Ullah and Thanh Nguyen-Tang and Raman Arora},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gQ4h6WvME0}
}
|
We study the problem of transfer learning via Multi-Task Representation Learning (MTRL), wherein multiple source tasks are used to learn a good common representation, and a predictor is trained on top of it for the target task. Under standard regularity assumptions on the loss function and task diversity, we provide new statistical rates on the excess risk of the target task, which demonstrate the benefit of representation learning. Importantly, our rates are optimistic, i.e., they interpolate between the standard $O(m^{-1/2})$ rate and the fast $O(m^{-1})$ rate, depending on the difficulty of the learning task, where $m$ is the number of samples for the target task. Besides the main result, we make several new contributions, including giving optimistic rates for excess risk of source tasks (multi-task learning (MTL)), a local Rademacher complexity theorem for MTRL and MTL, as well as a chain rule for local Rademacher complexity for composite predictor classes.
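For intuition, the generic shape of an optimistic rate is sketched below; the complexity term $C$ and constants are placeholders, not the paper's exact bound.

```latex
% Generic optimistic-rate shape (illustrative; the paper's bounds carry
% their own complexity term and constants):
\mathcal{E}(\hat{f}) \;\lesssim\; \sqrt{\frac{C\,L^{*}}{m}} \;+\; \frac{C}{m}.
% When the best achievable risk L^{*} is zero (an easy task), the first
% term vanishes and the fast O(m^{-1}) rate dominates; when L^{*} > 0,
% the bound reduces to the standard O(m^{-1/2}) rate.
```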
|
Optimistic Rates for Multi-Task Representation Learning
|
[
"Austin Watkins",
"Enayat Ullah",
"Thanh Nguyen-Tang",
"Raman Arora"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=gPylY8sCbw
|
@inproceedings{
hazan2023partial,
title={Partial Matrix Completion},
author={Elad Hazan and Adam Tauman Kalai and Varun Kanade and Clara Mohri and Y. Jennifer Sun},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gPylY8sCbw}
}
|
The matrix completion problem involves reconstructing a low-rank matrix by using a given set of revealed (and potentially noisy) entries. Although existing methods address the completion of the entire matrix, the accuracy of the completed entries can vary significantly across the matrix, due to differences in the sampling distribution. For instance, users may rate movies primarily from their country or favorite genres, leading to inaccurate predictions for the majority of completed entries.
We propose a novel formulation of the problem as Partial Matrix Completion, where the objective is to complete a substantial subset of the entries with high confidence. Our algorithm efficiently handles the unknown and arbitrarily complex nature of the sampling distribution, ensuring high accuracy for all completed entries and sufficient coverage across the matrix. Additionally, we introduce an online version of the problem and present a low-regret efficient algorithm based on iterative gradient updates. Finally, we conduct a preliminary empirical evaluation of our methods.
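As a rough illustration of the online, iterative-gradient flavor, here is a generic low-rank update on one revealed entry; this is a stand-in for exposition, not the paper's low-regret algorithm, and it omits the confidence and coverage machinery entirely.

```python
import numpy as np

def online_entry_update(U, V, i, j, value, lr=0.05):
    """One gradient step on the squared error of a newly revealed entry
    (i, j), with low-rank factors U (n x r) and V (m x r). Generic online
    matrix-factorization update for illustration only."""
    err = float(U[i] @ V[j]) - value
    U[i], V[j] = U[i] - lr * err * V[j], V[j] - lr * err * U[i]
    return err ** 2
```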
|
Partial Matrix Completion
|
[
"Elad Hazan",
"Adam Tauman Kalai",
"Varun Kanade",
"Clara Mohri",
"Y. Jennifer Sun"
] |
Conference
|
poster
|
2208.12063
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gO60SSGOMy
|
@inproceedings{
chen2023contentbased,
title={Content-based Unrestricted Adversarial Attack},
author={Zhaoyu Chen and Bo Li and Shuang Wu and Kaixun Jiang and Shouhong Ding and Wenqiang Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gO60SSGOMy}
}
|
Unrestricted adversarial attacks typically manipulate the semantic content of an image (e.g., color or texture) to create adversarial examples that are both effective and photorealistic, demonstrating their ability to deceive human perception and deep neural networks with stealth and success. However, current works usually sacrifice unrestricted degrees of freedom and subjectively select some image content to guarantee the photorealism of unrestricted adversarial examples, which limits their attack performance. To ensure the photorealism of adversarial examples and boost attack performance, we propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack. By leveraging a low-dimensional manifold that represents natural images, we map the images onto the manifold and optimize them along its adversarial direction. Within this framework, we implement the Adversarial Content Attack (ACA) based on Stable Diffusion, which can generate highly transferable unrestricted adversarial examples with various adversarial contents. Extensive experimentation and visualization demonstrate the efficacy of ACA, which surpasses state-of-the-art attacks by an average of 13.3-50.4\% and 16.8-48.0\% on normally trained models and defense methods, respectively.
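The manifold-optimization idea can be sketched generically: encode the image into a low-dimensional latent, ascend the classifier loss in latent space, and decode. The `encoder`/`decoder` below are placeholders standing in for ACA's Stable Diffusion machinery.

```python
import torch
import torch.nn.functional as F

def manifold_attack(encoder, decoder, classifier, x, y, steps=10, lr=0.05):
    """Optimize on a learned natural-image manifold so the decoded image
    fools the classifier while staying photorealistic. A hedged sketch of
    the framework's idea, not the ACA implementation."""
    z = encoder(x).detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = -F.cross_entropy(classifier(decoder(z)), y)  # untargeted attack
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z).detach()
```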
|
Content-based Unrestricted Adversarial Attack
|
[
"Zhaoyu Chen",
"Bo Li",
"Shuang Wu",
"Kaixun Jiang",
"Shouhong Ding",
"Wenqiang Zhang"
] |
Conference
|
poster
|
2305.10665
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gMjIUZBKH8
|
@inproceedings{
regmi2023adavae,
title={Ada{VAE}: Bayesian Structural Adaptation for Variational Autoencoders},
author={Paribesh Regmi and Rui Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gMjIUZBKH8}
}
|
The neural network structures of generative models and their corresponding inference models paired in variational autoencoders (VAEs) play a critical role in the models' generative performance. However, powerful VAE network structures are hand-crafted and fixed prior to training, resulting in a one-size-fits-all approach that requires heavy computation to tune for given data. Moreover, existing VAE regularization methods largely overlook the importance of network structures and fail to prevent overfitting in deep VAE models with cascades of hidden layers. To address these issues, we propose a Bayesian inference framework that automatically adapts VAE network structures to data and prevents overfitting as the networks grow deeper. We model the number of hidden layers with a beta process to infer the most plausible encoding/decoding network depths warranted by data and perform layer-wise dropout regularization with a conjugate Bernoulli process. We develop a scalable estimator that performs joint inference on both VAE network structures and latent variables. Our experiments show that the inference framework effectively prevents overfitting in both shallow and deep VAE models, yielding state-of-the-art performance. We demonstrate that our framework is compatible with different types of VAE backbone networks and can be applied to various VAE variants, further improving their performance.
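A rough picture of depth adaptation: give each hidden layer a learned keep-probability with a differentiable (binary concrete) gate so that unneeded layers are effectively skipped. This is a generic stand-in for the beta-Bernoulli construction, not the paper's estimator.

```python
import torch
import torch.nn as nn

class GatedDepthEncoder(nn.Module):
    """Each layer carries a learned keep-logit; a relaxed Bernoulli gate
    decides whether the layer transforms the input or is skipped, so the
    effective depth is inferred during training. Illustrative only."""

    def __init__(self, dim, n_layers):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))
        self.keep_logits = nn.Parameter(torch.full((n_layers,), 2.0))

    def forward(self, x, temp=0.5):
        for layer, logit in zip(self.layers, self.keep_logits):
            u = torch.rand(())  # binary-concrete relaxation of a Bernoulli gate
            gate = torch.sigmoid((logit + torch.log(u) - torch.log1p(-u)) / temp)
            x = gate * torch.relu(layer(x)) + (1 - gate) * x  # gated residual skip
        return x
```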
|
AdaVAE: Bayesian Structural Adaptation for Variational Autoencoders
|
[
"Paribesh Regmi",
"Rui Li"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=gMS6FVZvmF
|
@inproceedings{
zhou2023one,
title={One Fits All: Power General Time Series Analysis by Pretrained {LM}},
author={Tian Zhou and Peisong Niu and Xue Wang and Liang Sun and Rong Jin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gMS6FVZvmF}
}
|
Although we have witnessed great success of pre-trained models in natural language processing (NLP) and computer vision (CV), limited progress has been made for general time series analysis. Unlike NLP and CV, where a unified model can be used to perform different tasks, specially designed approaches still dominate in each time series analysis task such as classification, anomaly detection, forecasting, and few-shot learning. The main challenge that blocks the development of pre-trained models for time series analysis is the lack of a large amount of data for training. In this work, we address this challenge by leveraging language or CV models, pre-trained from billions of tokens, for time series analysis. Specifically, we refrain from altering the self-attention and feedforward layers of the residual blocks in the pre-trained language or image model. This model, known as the Frozen Pretrained Transformer (FPT), is evaluated through fine-tuning on all major types of tasks involving time series. Our results demonstrate that pre-trained models on natural language or images can lead to comparable or state-of-the-art performance in all main time series analysis tasks, as illustrated in Figure 1. We also find, both theoretically and empirically, that the self-attention module behaves similarly to principal component analysis (PCA), an observation that helps explain how the transformer bridges the domain gap and is a crucial step towards understanding the universality of a pre-trained transformer.
The code is publicly available at https://anonymous.4open.science/r/Pretrained-LM-for-TSForcasting-C561.
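Following the recipe the abstract describes (self-attention and feed-forward blocks untouched, everything else fine-tuned), a minimal sketch with Hugging Face GPT-2 might look as follows; the 6-layer truncation is an assumption of this sketch.

```python
from transformers import GPT2Model

def build_fpt(n_layers=6):
    """Frozen Pretrained Transformer sketch: freeze the pretrained
    self-attention and feed-forward weights, leaving embeddings and layer
    norms trainable for the downstream time-series task."""
    lm = GPT2Model.from_pretrained("gpt2")
    lm.h = lm.h[:n_layers]  # keep only the first few blocks (assumption)
    for name, param in lm.named_parameters():
        if "attn" in name or "mlp" in name:
            param.requires_grad = False  # do not alter attention/FFN layers
    return lm
```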
|
One Fits All: Power General Time Series Analysis by Pretrained LM
|
[
"Tian Zhou",
"Peisong Niu",
"Xue Wang",
"Liang Sun",
"Rong Jin"
] |
Conference
|
spotlight
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=gLwjBDsE3G
|
@inproceedings{
zhao2023triangulation,
title={Triangulation Residual Loss for Data-efficient 3D Pose Estimation},
author={Jiachen Zhao and Tao Yu and Liang An and Yipeng Huang and Fang Deng and Qionghai Dai},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gLwjBDsE3G}
}
|
This paper presents Triangulation Residual loss (TR loss) for multiview 3D pose estimation in a data-efficient manner. Existing 3D supervised models usually require large-scale 3D annotated datasets, but the amount of existing data is still insufficient to train supervised models to achieve ideal performance, especially for animal pose estimation. To employ unlabeled multiview data for training, previous epipolar-based consistency provides a self-supervised loss that considers only the local consistency in pairwise views, resulting in limited performance and heavy calculations. In contrast, TR loss enables self-supervision with global multiview geometric consistency. Starting from initial 2D keypoint estimates, the TR loss can fine-tune the corresponding 2D detector without 3D supervision by simply minimizing the smallest singular value of the triangulation matrix in an end-to-end fashion. Our method achieves the state-of-the-art 25.8mm MPJPE and competitive 28.7mm MPJPE with only 5\% 2D labeled training data on the Human3.6M dataset. Experiments on animals such as mice demonstrate our TR loss's data-efficient training ability.
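Since the abstract defines the loss as the smallest singular value of the triangulation matrix, a compact and differentiable PyTorch sketch follows; the DLT construction of the matrix is standard, and the tensor shapes are assumptions of this sketch.

```python
import torch

def tr_loss(projs, points2d):
    """Triangulation Residual loss for one keypoint seen in N views.
    projs: (N, 3, 4) camera projection matrices; points2d: (N, 2) 2D
    detections. Builds the standard DLT triangulation matrix and returns
    its smallest singular value, differentiable w.r.t. the detections."""
    rows = []
    for P, (u, v) in zip(projs, points2d):
        rows.append(u * P[2] - P[0])  # DLT constraint rows for this view
        rows.append(v * P[2] - P[1])
    A = torch.stack(rows)               # (2N, 4) triangulation matrix
    return torch.linalg.svdvals(A)[-1]  # smallest singular value
```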
|
Triangulation Residual Loss for Data-efficient 3D Pose Estimation
|
[
"Jiachen Zhao",
"Tao Yu",
"Liang An",
"Yipeng Huang",
"Fang Deng",
"Qionghai Dai"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=gLfgyIWiWW
|
@inproceedings{
bykov2023labeling,
title={Labeling Neural Representations with Inverse Recognition},
author={Kirill Bykov and Laura Kopf and Shinichi Nakajima and Marius Kloft and Marina MC H{\"o}hne},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gLfgyIWiWW}
}
|
Deep Neural Networks (DNNs) demonstrate remarkable capabilities in learning complex hierarchical data representations, but the nature of these representations remains largely unknown. Existing global explainability methods, such as Network Dissection, face limitations such as reliance on segmentation masks, lack of statistical significance testing, and high computational demands. We propose Inverse Recognition (INVERT), a scalable approach for connecting learned representations with human-understandable concepts by leveraging their capacity to discriminate between these concepts. In contrast to prior work, INVERT is capable of handling diverse types of neurons, has lower computational complexity, and does not rely on the availability of segmentation masks. Moreover, INVERT provides an interpretable metric that assesses the alignment between a representation and its corresponding explanation and delivers a measure of statistical significance. We demonstrate the applicability of INVERT in various scenarios, including the identification of representations affected by spurious correlations, and the interpretation of the hierarchical structure of decision-making within the models.
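The discrimination-based idea lends itself to a simple sketch: score a neuron by how well its activations separate concept from non-concept inputs, paired with a nonparametric significance test. The AUC/Mann-Whitney pairing below is an illustrative choice, not necessarily the exact INVERT metric.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

def neuron_concept_alignment(activations, concept_labels):
    """activations: (n,) float array of one neuron's responses;
    concept_labels: (n,) binary array marking concept presence.
    Returns a discriminability score plus a p-value for significance."""
    auc = roc_auc_score(concept_labels, activations)
    _, p_value = mannwhitneyu(
        activations[concept_labels == 1],
        activations[concept_labels == 0],
        alternative="greater",
    )
    return auc, p_value
```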
|
Labeling Neural Representations with Inverse Recognition
|
[
"Kirill Bykov",
"Laura Kopf",
"Shinichi Nakajima",
"Marius Kloft",
"Marina MC Höhne"
] |
Conference
|
poster
|
2311.13594
|
[
"https://github.com/lapalap/invert"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gJewjFjfN2
|
@inproceedings{
xiao2023fedgrab,
title={Fed-GraB: Federated Long-tailed Learning with Self-Adjusting Gradient Balancer},
author={Zikai Xiao and Zihan Chen and Songshang Liu and Hualiang Wang and YANG FENG and Jin Hao and Joey Tianyi Zhou and Jian Wu and Howard Hao Yang and Zuozhu Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gJewjFjfN2}
}
|
Data privacy and long-tailed distribution are the norms rather than the exception in many real-world tasks. This paper investigates a federated long-tailed learning (Fed-LT) task in which each client holds a locally heterogeneous dataset; if the datasets can be globally aggregated, they jointly exhibit a long-tailed distribution. Under such a setting, existing federated optimization and/or centralized long-tailed learning methods hardly apply due to challenges in (a) characterizing the global long-tailed distribution under privacy constraints and (b) adjusting the local learning strategy to cope with the head-tail imbalance. In response, we propose a method termed $\texttt{Fed-GraB}$, comprised of a Self-adjusting Gradient Balancer (SGB) module that re-weights clients' gradients in a closed-loop manner, based on the feedback of global long-tailed distribution evaluated by a Direct Prior Analyzer (DPA) module. Using $\texttt{Fed-GraB}$, clients can effectively alleviate the distribution drift caused by data heterogeneity during the model training process and obtain a global model with better performance on the minority classes while maintaining the performance of the majority classes. Extensive experiments demonstrate that $\texttt{Fed-GraB}$ achieves state-of-the-art performance on representative datasets such as CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, and iNaturalist.
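The closed-loop re-weighting can be caricatured as a simple proportional controller over per-class gradient weights; this illustrates the feedback idea only and is not the SGB update rule.

```python
import numpy as np

def closed_loop_reweight(weights, per_class_signal, target, gain=0.1):
    """Raise the gradient weight of classes whose feedback signal (e.g.,
    running tail-class loss) exceeds the target, lower it otherwise,
    then renormalize. A plain proportional controller for illustration."""
    weights = weights * np.exp(gain * (per_class_signal - target))
    return weights * len(weights) / weights.sum()
```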
|
Fed-GraB: Federated Long-tailed Learning with Self-Adjusting Gradient Balancer
|
[
"Zikai Xiao",
"Zihan Chen",
"Songshang Liu",
"Hualiang Wang",
"YANG FENG",
"Jin Hao",
"Joey Tianyi Zhou",
"Jian Wu",
"Howard Hao Yang",
"Zuozhu Liu"
] |
Conference
|
poster
|
2310.07587
|
[
"https://github.com/zackzikaixiao/fedgrab"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gJLAfO4KUq
|
@inproceedings{
deshmukh2023pengi,
title={Pengi: An Audio Language Model for Audio Tasks},
author={Soham Deshmukh and Benjamin Elizalde and Rita Singh and Huaming Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gJLAfO4KUq}
}
|
In the domain of audio processing, Transfer Learning has facilitated the rise of Self-Supervised Learning and Zero-Shot Learning techniques. These approaches have led to the development of versatile models capable of tackling a wide array of tasks, while delivering state-of-the-art performance. However, current models inherently lack the capacity to produce the requisite language for open-ended tasks, such as Audio Captioning or Audio Question Answering. We introduce Pengi, a novel Audio Language Model that leverages Transfer Learning by framing all audio tasks as text-generation tasks. It takes an audio recording and text as input and generates free-form text as output. The input audio is represented as a sequence of continuous embeddings by an audio encoder. A text encoder does the same for the corresponding text input. Both sequences are combined as a prefix to prompt a pre-trained frozen language model. The unified architecture of Pengi enables both open-ended and closed-ended tasks without any additional fine-tuning or task-specific extensions. When evaluated on 21 downstream tasks, our approach yields state-of-the-art performance on several of them. Our results show that connecting language models with audio models is a major step towards general-purpose audio understanding.
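The prefix mechanism reduces to a concatenation of embedding sequences fed to a frozen causal LM; the sketch below assumes a Hugging Face LM and matching hidden sizes, both of which are assumptions of this illustration.

```python
import torch

@torch.no_grad()
def pengi_forward(frozen_lm, audio_emb, text_emb):
    """Concatenate the audio encoder's and text encoder's continuous
    embedding sequences (B, Na, D) and (B, Nt, D) into a prefix and feed
    it to a frozen causal LM via inputs_embeds; decoding from the returned
    logits yields the free-form text output."""
    prefix = torch.cat([audio_emb, text_emb], dim=1)
    return frozen_lm(inputs_embeds=prefix).logits
```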
|
Pengi: An Audio Language Model for Audio Tasks
|
[
"Soham Deshmukh",
"Benjamin Elizalde",
"Rita Singh",
"Huaming Wang"
] |
Conference
|
poster
|
2305.11834
|
[
"https://github.com/microsoft/pengi"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gJHAT79cZU
|
@inproceedings{
uy2023nerf,
title={Ne{RF} Revisited: Fixing Quadrature Instability in Volume Rendering},
author={Mikaela Angelina Uy and Kiyohiro Nakayama and Guandao Yang and Rahul Krishna Thomas and Leonidas Guibas and Ke Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gJHAT79cZU}
}
|
Neural radiance fields (NeRF) rely on volume rendering to synthesize novel views. Volume rendering requires evaluating an integral along each ray, which is numerically approximated with a finite sum that corresponds to the exact integral along the ray under piecewise constant volume density. As a consequence, the rendered result is unstable w.r.t. the choice of samples along the ray, a phenomenon that we dub quadrature instability. We propose a mathematically principled solution by reformulating the sample-based rendering equation so that it corresponds to the exact integral under piecewise linear volume density. This simultaneously resolves multiple issues: conflicts between samples along different rays, imprecise hierarchical sampling, and non-differentiability of quantiles of ray termination distances w.r.t. model parameters. We demonstrate several benefits over the classical sample-based rendering equation, such as sharper textures, better geometric reconstruction, and stronger depth supervision. Our proposed formulation can also be used as a drop-in replacement for the volume rendering equation of existing NeRF-based methods. Our project page can be found at pl-nerf.github.io.
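For context, the piecewise-constant quadrature the paper revisits is reproduced below; these are the classic NeRF weights whose dependence on sample placement constitutes the quadrature instability, and the paper's contribution (not shown here) re-derives them under piecewise-linear density.

```python
import torch

def piecewise_constant_weights(sigma, delta):
    """Classic NeRF quadrature: w_i = T_i * (1 - exp(-sigma_i * delta_i))
    with transmittance T_i = prod_{j<i} exp(-sigma_j * delta_j).
    sigma, delta: (n_rays, n_samples); rendered color = sum_i w_i * c_i."""
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[..., :1]), trans[..., :-1]], dim=-1)
    return trans * alpha
```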
|
NeRF Revisited: Fixing Quadrature Instability in Volume Rendering
|
[
"Mikaela Angelina Uy",
"Kiyohiro Nakayama",
"Guandao Yang",
"Rahul Krishna Thomas",
"Leonidas Guibas",
"Ke Li"
] |
Conference
|
poster
|
2310.20685
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gIG8LvTLuc
|
@inproceedings{
jiang2023how,
title={How Does Adaptive Optimization Impact Local Neural Network Geometry?},
author={Kaiqi Jiang and Dhruv Malik and Yuanzhi Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gIG8LvTLuc}
}
|
Adaptive optimization methods are well known to achieve superior convergence relative to vanilla gradient methods. The traditional viewpoint in optimization, particularly in convex optimization, explains this improved performance by arguing that, unlike vanilla gradient schemes, adaptive algorithms mimic the behavior of a second-order method by adapting to the *global* geometry of the loss function. We argue that in the context of neural network optimization, this traditional viewpoint is insufficient. Instead, we advocate for a *local* trajectory analysis. For iterate trajectories produced by running a generic optimization algorithm OPT, we introduce $R^{\text{OPT}}\_{\text{med}}$, a statistic that is analogous to the condition number of the loss Hessian evaluated at the iterates. Through extensive experiments on language models where adaptive algorithms converge faster than vanilla gradient methods like SGD, we show that adaptive methods such as Adam bias the trajectories towards regions where $R^{\text{Adam}}_{\text{med}}$ is small, where one might expect faster optimization. By contrast, SGD (with momentum) biases the trajectories towards regions where $R^{\text{SGD}}\_{\text{med}}$ is comparatively large. We complement these empirical observations with a theoretical result that provably demonstrates this phenomenon in the simplified setting of a two-layer linear network. We view our findings as evidence for the need of a new explanation of the success of adaptive methods, one that is different than the conventional wisdom.
|
How Does Adaptive Optimization Impact Local Neural Network Geometry?
|
[
"Kaiqi Jiang",
"Dhruv Malik",
"Yuanzhi Li"
] |
Conference
|
poster
|
2211.02254
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gI1SOgW3kw
|
@inproceedings{
zheng2023generalizing,
title={Generalizing Nonlinear {ICA} Beyond Structural Sparsity},
author={Yujia Zheng and Kun Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gI1SOgW3kw}
}
|
Nonlinear independent component analysis (ICA) aims to uncover the true latent sources from their observable nonlinear mixtures. Despite its significance, the identifiability of nonlinear ICA is known to be impossible without additional assumptions. Recent advances have proposed conditions on the connective structure from sources to observed variables, known as Structural Sparsity, to achieve identifiability in an unsupervised manner. However, the sparsity constraint may not hold universally for all sources in practice. Furthermore, the assumptions of bijectivity of the mixing process and independence among all sources, which arise from the setting of ICA, may also be violated in many real-world scenarios. To address these limitations and generalize nonlinear ICA, we propose a set of new identifiability results in the general settings of undercompleteness, partial sparsity and source dependence, and flexible grouping structures. Specifically, we prove identifiability when there are more observed variables than sources (undercomplete), and when certain sparsity and/or source independence assumptions are not met for some changing sources. Moreover, we show that even in cases with flexible grouping structures (e.g., part of the sources can be divided into irreducible independent groups with various sizes), appropriate identifiability results can also be established. Theoretical claims are supported empirically on both synthetic and real-world datasets.
|
Generalizing Nonlinear ICA Beyond Structural Sparsity
|
[
"Yujia Zheng",
"Kun Zhang"
] |
Conference
|
oral
|
2311.00866
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=gGl0n7Onug
|
@inproceedings{
schioppa2023theoretical,
title={Theoretical and Practical Perspectives on what Influence Functions Do},
author={Andrea Schioppa and Katja Filippova and Ivan Titov and Polina Zablotskaia},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gGl0n7Onug}
}
|
Influence functions (IF) have been seen as a technique for explaining model predictions through the lens of the training data. Their utility is assumed to be in identifying training examples "responsible" for a prediction so that, for example, correcting a prediction is possible by intervening on those examples (removing or editing them) and retraining the model. However, recent empirical studies have shown that the existing methods of estimating IF predict the leave-one-out-and-retrain effect poorly.
In order to understand the mismatch between the theoretical promise and the practical results, we analyse five assumptions made by IF methods which are problematic for modern-scale deep neural networks and which concern convexity, numeric stability, training trajectory and parameter divergence. This allows us to clarify what can be expected theoretically from IF. We show that while most assumptions can be addressed successfully, the parameter divergence poses a clear limitation on the predictive power of IF: influence fades over training time even with deterministic training. We illustrate this theoretical result with BERT and ResNet models.
Another conclusion from the theoretical analysis is that IF are still useful for model debugging and correcting even though some of the assumptions made in prior work do not hold: using natural language processing and computer vision tasks, we verify that mis-predictions can be successfully corrected by taking only a few fine-tuning steps on influential examples.
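For reference, the classic influence-function estimate whose assumptions are being dissected is:

```latex
% Koh & Liang (2017) influence of training point z on a test point:
\mathcal{I}(z, z_{\mathrm{test}})
  = -\,\nabla_\theta L(z_{\mathrm{test}}, \hat{\theta})^{\top}
      H_{\hat{\theta}}^{-1}\,
      \nabla_\theta L(z, \hat{\theta}),
% where \hat{\theta} are the trained parameters and H_{\hat{\theta}} is
% the Hessian of the empirical training loss; convexity and numeric
% stability of H are among the assumptions analysed above.
```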
|
Theoretical and Practical Perspectives on what Influence Functions Do
|
[
"Andrea Schioppa",
"Katja Filippova",
"Ivan Titov",
"Polina Zablotskaia"
] |
Conference
|
spotlight
|
2305.16971
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |