title | url | authors | detail_url | tags | Bibtex | Paper | Reviews And Public Comment » | Supplemental | abstract | Supplemental Errata |
---|---|---|---|---|---|---|---|---|---|---|
When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work
|
https://papers.nips.cc/paper_files/paper/2021/hash/4c7a167bb329bd92580a99ce422d6fa6-Abstract.html
|
Jiawei Zhang, Yushun Zhang, Mingyi Hong, Ruoyu Sun, Zhi-Quan Luo
|
https://papers.nips.cc/paper_files/paper/2021/hash/4c7a167bb329bd92580a99ce422d6fa6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12324-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4c7a167bb329bd92580a99ce422d6fa6-Paper.pdf
|
https://openreview.net/forum?id=K_MD-PMTLtA
|
https://papers.nips.cc/paper_files/paper/2021/file/4c7a167bb329bd92580a99ce422d6fa6-Supplemental.pdf
|
Modern neural networks are often quite wide, causing large memory and computation costs. It is thus of great interest to train a narrower network. However, training narrow neural nets remains a challenging task. We ask two theoretical questions: Can narrow networks have as strong expressivity as wide ones? If so, does the loss function exhibit a benign optimization landscape? In this work, we provide partially affirmative answers to both questions for 1-hidden-layer networks with fewer than $n$ (sample size) neurons when the activation is smooth. First, we prove that as long as the width $m \geq 2n/d$ (where $d$ is the input dimension), its expressivity is strong, i.e., there exists at least one global minimizer with zero training loss. Second, we identify a nice local region with no local-min or saddle points. Nevertheless, it is not clear whether gradient descent can stay in this nice region. Third, we consider a constrained optimization formulation where the feasible region is the nice local region, and prove that every KKT point is a nearly global minimizer. It is expected that projected gradient methods converge to KKT points under mild technical conditions, but we leave the rigorous convergence analysis to future work. Thorough numerical results show that projected gradient methods on this constrained formulation significantly outperform SGD for training narrow neural nets.
| null |
Analyzing the Confidentiality of Undistillable Teachers in Knowledge Distillation
|
https://papers.nips.cc/paper_files/paper/2021/hash/4ca82782c5372a547c104929f03fe7a9-Abstract.html
|
Souvik Kundu, Qirui Sun, Yao Fu, Massoud Pedram, Peter Beerel
|
https://papers.nips.cc/paper_files/paper/2021/hash/4ca82782c5372a547c104929f03fe7a9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12325-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4ca82782c5372a547c104929f03fe7a9-Paper.pdf
|
https://openreview.net/forum?id=Laz0L5tjml
|
https://papers.nips.cc/paper_files/paper/2021/file/4ca82782c5372a547c104929f03fe7a9-Supplemental.pdf
|
Knowledge distillation (KD) has recently been identified as a method that can unintentionally leak private information regarding the details of a teacher model to an unauthorized student. Recent research in developing undistillable nasty teachers that can protect model confidentiality has gained significant attention. However, the level of protection these nasty models offer has been largely untested. In this paper, we show that transferring knowledge to a shallow sub-section of a student can largely reduce a teacher’s influence. By exploring the depth of the shallow subsection, we then present a distillation technique that enables a skeptical student model to learn even from a nasty teacher. To evaluate the efficacy of our skeptical students, we conducted KD experiments with several models in both training data-available and data-free scenarios for various datasets. While distilling from nasty teachers, compared to the normal student models, skeptical students consistently provide superior classification performance of up to ∼59.5%. Moreover, similar to normal students, skeptical students maintain high classification accuracy when distilled from a normal teacher, showing their efficacy irrespective of the teacher being nasty or not. We believe the ability of skeptical students to largely diminish the KD-immunity of potentially nasty teachers will motivate the research community to create more robust mechanisms for model confidentiality. We have open-sourced the code at https://github.com/ksouvik52/Skeptical2021
| null |
High Probability Complexity Bounds for Line Search Based on Stochastic Oracles
|
https://papers.nips.cc/paper_files/paper/2021/hash/4cb811134b9d39fc3104bd06ce75abad-Abstract.html
|
Billy Jin, Katya Scheinberg, Miaolan Xie
|
https://papers.nips.cc/paper_files/paper/2021/hash/4cb811134b9d39fc3104bd06ce75abad-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12326-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4cb811134b9d39fc3104bd06ce75abad-Paper.pdf
|
https://openreview.net/forum?id=RSdCxeaOF1
|
https://papers.nips.cc/paper_files/paper/2021/file/4cb811134b9d39fc3104bd06ce75abad-Supplemental.pdf
|
We consider a line-search method for continuous optimization under a stochastic setting where the function values and gradients are available only through inexact probabilistic zeroth and first-order oracles. These oracles capture multiple standard settings including expected loss minimization and zeroth-order optimization. Moreover, our framework is very general and allows the function and gradient estimates to be biased. The proposed algorithm is simple to describe, easy to implement, and uses these oracles in a similar way as the standard deterministic line search uses exact function and gradient values. Under fairly general conditions on the oracles, we derive a high probability tail bound on the iteration complexity of the algorithm when applied to non-convex smooth functions. These results are stronger than those for other existing stochastic line search methods and apply in more general settings.
| null |
Pay Attention to MLPs
|
https://papers.nips.cc/paper_files/paper/2021/hash/4cc05b35c2f937c5bd9e7d41d3686fff-Abstract.html
|
Hanxiao Liu, Zihang Dai, David So, Quoc V Le
|
https://papers.nips.cc/paper_files/paper/2021/hash/4cc05b35c2f937c5bd9e7d41d3686fff-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12327-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4cc05b35c2f937c5bd9e7d41d3686fff-Paper.pdf
|
https://openreview.net/forum?id=KBnXrODoBW
|
https://papers.nips.cc/paper_files/paper/2021/file/4cc05b35c2f937c5bd9e7d41d3686fff-Supplemental.pdf
|
Transformers have become one of the most important architectural innovations in deep learning and have enabled many breakthroughs over the past few years. Here we propose a simple network architecture, gMLP, based solely on MLPs with gating, and show that it can perform as well as Transformers in key language and vision applications. Our comparisons show that self-attention is not critical for Vision Transformers, as gMLP can achieve the same accuracy. For BERT, our model achieves parity with Transformers on pretraining perplexity and is better on some downstream NLP tasks. On finetuning tasks where gMLP performs worse, making the gMLP model substantially larger can close the gap with Transformers. In general, our experiments show that gMLP can scale as well as Transformers over increased data and compute.
| null |
An Image is Worth More Than a Thousand Words: Towards Disentanglement in The Wild
|
https://papers.nips.cc/paper_files/paper/2021/hash/4cef5b5e6ff1b3445db4c013f1d452e0-Abstract.html
|
Aviv Gabbay, Niv Cohen, Yedid Hoshen
|
https://papers.nips.cc/paper_files/paper/2021/hash/4cef5b5e6ff1b3445db4c013f1d452e0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12328-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4cef5b5e6ff1b3445db4c013f1d452e0-Paper.pdf
|
https://openreview.net/forum?id=cY8bNhXEB1
|
https://papers.nips.cc/paper_files/paper/2021/file/4cef5b5e6ff1b3445db4c013f1d452e0-Supplemental.pdf
|
Unsupervised disentanglement has been shown to be theoretically impossible without inductive biases on the models and the data. As an alternative approach, recent methods rely on limited supervision to disentangle the factors of variation and allow their identifiability. While annotating the true generative factors is only required for a limited number of observations, we argue that it is infeasible to enumerate all the factors of variation that describe a real-world image distribution. To this end, we propose a method for disentangling a set of factors which are only partially labeled, as well as separating the complementary set of residual factors that are never explicitly specified. Our success in this challenging setting, demonstrated on synthetic benchmarks, gives rise to leveraging off-the-shelf image descriptors to partially annotate a subset of attributes in real image domains (e.g. of human faces) with minimal manual effort. Specifically, we use a recent language-image embedding model (CLIP) to annotate a set of attributes of interest in a zero-shot manner and demonstrate state-of-the-art disentangled image manipulation results.
| null |
Dynamics of Stochastic Momentum Methods on Large-scale, Quadratic Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/4cf0ed8641cfcbbf46784e620a0316fb-Abstract.html
|
Courtney Paquette, Elliot Paquette
|
https://papers.nips.cc/paper_files/paper/2021/hash/4cf0ed8641cfcbbf46784e620a0316fb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12329-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4cf0ed8641cfcbbf46784e620a0316fb-Paper.pdf
|
https://openreview.net/forum?id=WSykyaty6Q
|
https://papers.nips.cc/paper_files/paper/2021/file/4cf0ed8641cfcbbf46784e620a0316fb-Supplemental.pdf
|
We analyze a class of stochastic gradient algorithms with momentum on a high-dimensional random least squares problem. Our framework, inspired by random matrix theory, provides an exact (deterministic) characterization for the sequence of function values produced by these algorithms which is expressed only in terms of the eigenvalues of the Hessian. This leads to simple expressions for nearly-optimal hyperparameters, a description of the limiting neighborhood, and average-case complexity. As a consequence, we show that (small-batch) stochastic heavy-ball momentum with a fixed momentum parameter provides no actual performance improvement over SGD when step sizes are adjusted correctly. For contrast, in the non-strongly convex setting, it is possible to get a large improvement over SGD using momentum. By introducing hyperparameters that depend on the number of samples, we propose a new algorithm sDANA (stochastic dimension adjusted Nesterov acceleration) which obtains an asymptotically optimal average-case complexity while remaining linearly convergent in the strongly convex setting without adjusting parameters.
| null |
Adversarial Examples in Multi-Layer Random ReLU Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/4d19b37a2c399deace9082d464930022-Abstract.html
|
Peter Bartlett, Sebastien Bubeck, Yeshwanth Cherapanamjeri
|
https://papers.nips.cc/paper_files/paper/2021/hash/4d19b37a2c399deace9082d464930022-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12330-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4d19b37a2c399deace9082d464930022-Paper.pdf
|
https://openreview.net/forum?id=BKpNqR19JgD
|
https://papers.nips.cc/paper_files/paper/2021/file/4d19b37a2c399deace9082d464930022-Supplemental.pdf
|
We consider the phenomenon of adversarial examples in ReLU networks with independent Gaussian parameters. For networks of constant depth and with a large range of widths (for instance, it suffices if the width of each layer is polynomial in that of any other layer), small perturbations of input vectors lead to large changes of outputs. This generalizes results of Daniely and Schacham (2020) for networks of rapidly decreasing width and of Bubeck et al (2021) for two-layer networks. Our proof shows that adversarial examples arise in these networks because the functions they compute are \emph{locally} very similar to random linear functions. Bottleneck layers play a key role: the minimal width up to some point in the network determines scales and sensitivities of mappings computed up to that point. The main result is for networks with constant depth, but we also show that some constraint on depth is necessary for a result of this kind, because there are suitably deep networks that, with constant probability, compute a function that is close to constant.
| null |
Efficient Statistical Assessment of Neural Network Corruption Robustness
|
https://papers.nips.cc/paper_files/paper/2021/hash/4d215ab7508a3e089af43fb605dd27d1-Abstract.html
|
Karim TIT, Teddy Furon, Mathias ROUSSET
|
https://papers.nips.cc/paper_files/paper/2021/hash/4d215ab7508a3e089af43fb605dd27d1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12331-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4d215ab7508a3e089af43fb605dd27d1-Paper.pdf
|
https://openreview.net/forum?id=IBHP61avv0R
|
https://papers.nips.cc/paper_files/paper/2021/file/4d215ab7508a3e089af43fb605dd27d1-Supplemental.pdf
|
We quantify the robustness of a trained network to input uncertainties with a stochastic simulation inspired by the field of Statistical Reliability Engineering. The robustness assessment is cast as a statistical hypothesis test: the network is deemed locally robust if the estimated probability of failure is lower than a critical level. The procedure is based on an Importance Splitting simulation generating samples of rare events. We derive theoretical guarantees that are non-asymptotic w.r.t. sample size. Experiments tackling large-scale networks demonstrate the efficiency of our method, which makes a low number of calls to the network function.
| null |
A Highly-Efficient Group Elastic Net Algorithm with an Application to Function-On-Scalar Regression
|
https://papers.nips.cc/paper_files/paper/2021/hash/4d410063822cd9be28f86701c0bc3a31-Abstract.html
|
Tobia Boschi, Matthew Reimherr, Francesca Chiaromonte
|
https://papers.nips.cc/paper_files/paper/2021/hash/4d410063822cd9be28f86701c0bc3a31-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12332-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4d410063822cd9be28f86701c0bc3a31-Paper.pdf
|
https://openreview.net/forum?id=KrAVI2AhNJh
|
https://papers.nips.cc/paper_files/paper/2021/file/4d410063822cd9be28f86701c0bc3a31-Supplemental.pdf
|
Feature Selection and Functional Data Analysis are two dynamic areas of research, with important applications in the analysis of large and complex data sets. Straddling these two areas, we propose a new highly efficient algorithm to perform Group Elastic Net with application to function-on-scalar feature selection, where a functional response is modeled against a very large number of potential scalar predictors. First, we introduce a new algorithm to solve Group Elastic Net in ultra-high dimensional settings, which exploits the sparsity structure of the Augmented Lagrangian to greatly reduce the computational burden. Next, taking advantage of the properties of Functional Principal Components, we extend our algorithm to the function-on-scalar regression framework. We use simulations to demonstrate the CPU time gains afforded by our approach compared to its best existing competitors, and present an application to data from a Genome-Wide Association Study on childhood obesity.
| null |
Hierarchical Clustering: $O(1)$-Approximation for Well-Clustered Graphs
|
https://papers.nips.cc/paper_files/paper/2021/hash/4d68e143defa221fead61c84de7527a3-Abstract.html
|
Bogdan-Adrian Manghiuc, He Sun
|
https://papers.nips.cc/paper_files/paper/2021/hash/4d68e143defa221fead61c84de7527a3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12333-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4d68e143defa221fead61c84de7527a3-Paper.pdf
|
https://openreview.net/forum?id=zjJyjQj1W7U
| null |
Hierarchical clustering studies a recursive partition of a data set into clusters of successively smaller size, and is a fundamental problem in data analysis. In this work we study the cost function for hierarchical clustering introduced by Dasgupta, and present two polynomial-time approximation algorithms: Our first result is an $O(1)$-approximation algorithm for graphs of high conductance. Our simple construction bypasses complicated recursive routines of finding sparse cuts known in the literature. Our second and main result is an $O(1)$-approximation algorithm for a wide family of graphs that exhibit a well-defined structure of clusters. This result generalises the previous state-of-the-art, which holds only for graphs generated from stochastic models. The significance of our work is demonstrated by the empirical analysis on both synthetic and real-world data sets, on which our presented algorithm outperforms the previously proposed algorithm for graphs with a well-defined cluster structure.
| null |
Realistic evaluation of transductive few-shot learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/4d7a968bb636e25818ff2a3941db08c1-Abstract.html
|
Olivier Veilleux, Malik Boudiaf, Pablo Piantanida, Ismail Ben Ayed
|
https://papers.nips.cc/paper_files/paper/2021/hash/4d7a968bb636e25818ff2a3941db08c1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12334-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4d7a968bb636e25818ff2a3941db08c1-Paper.pdf
|
https://openreview.net/forum?id=6ns5QTPQ_d
|
https://papers.nips.cc/paper_files/paper/2021/file/4d7a968bb636e25818ff2a3941db08c1-Supplemental.pdf
|
Transductive inference is widely used in few-shot learning, as it leverages the statistics of the unlabeled query set of a few-shot task, typically yielding substantially better performances than its inductive counterpart. The current few-shot benchmarks use perfectly class-balanced tasks at inference. We argue that such an artificial regularity is unrealistic, as it assumes that the marginal label probability of the testing samples is known and fixed to the uniform distribution. In fact, in realistic scenarios, the unlabeled query sets come with arbitrary and unknown label marginals. We introduce and study the effect of arbitrary class distributions within the query sets of few-shot tasks at inference, removing the class-balance artefact. Specifically, we model the marginal probabilities of the classes as Dirichlet-distributed random variables, which yields a principled and realistic sampling within the simplex. This leverages the current few-shot benchmarks, building testing tasks with arbitrary class distributions. We evaluate experimentally state-of-the-art transductive methods over 3 widely used data sets, and observe, surprisingly, substantial performance drops, even below inductive methods in some cases. Furthermore, we propose a generalization of the mutual-information loss, based on α-divergences, which can handle effectively class-distribution variations. Empirically, we show that our transductive α-divergence optimization outperforms state-of-the-art methods across several data sets, models and few-shot settings.
| null |
Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes
|
https://papers.nips.cc/paper_files/paper/2021/hash/4d8bd3f7351f4fee76ba17594f070ddd-Abstract.html
|
Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yigitcan Kaya, Tudor Dumitras
|
https://papers.nips.cc/paper_files/paper/2021/hash/4d8bd3f7351f4fee76ba17594f070ddd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12335-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4d8bd3f7351f4fee76ba17594f070ddd-Paper.pdf
|
https://openreview.net/forum?id=0kCxbBQknN
|
https://papers.nips.cc/paper_files/paper/2021/file/4d8bd3f7351f4fee76ba17594f070ddd-Supplemental.pdf
|
Quantization is a popular technique that transforms the parameter representation of a neural network from floating-point numbers into lower-precision ones (e.g., 8-bit integers). It reduces the memory footprint and the computational cost at inference, facilitating the deployment of resource-hungry models. However, the parameter perturbations caused by this transformation result in behavioral disparities between the model before and after quantization. For example, a quantized model can misclassify some test-time samples that are otherwise classified correctly. It is not known whether such differences lead to a new security vulnerability. We hypothesize that an adversary may control this disparity to introduce specific behaviors that activate upon quantization. To study this hypothesis, we weaponize quantization-aware training and propose a new training framework to implement adversarial quantization outcomes. Following this framework, we present three attacks we carry out with quantization: (i) an indiscriminate attack for significant accuracy loss; (ii) a targeted attack against specific samples; and (iii) a backdoor attack for controlling the model with an input trigger. We further show that a single compromised model defeats multiple quantization schemes, including robust quantization techniques. Moreover, in a federated learning scenario, we demonstrate that a set of malicious participants who conspire can inject our quantization-activated backdoor. Lastly, we discuss potential counter-measures and show that only re-training consistently removes the attack artifacts. Our code is available at https://github.com/Secure-AI-Systems-Group/Qu-ANTI-zation
| null |
Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings
|
https://papers.nips.cc/paper_files/paper/2021/hash/4ddb5b8d603f88e9de689f3230234b47-Abstract.html
|
Raef Bassily, Cristóbal Guzmán, Michael Menart
|
https://papers.nips.cc/paper_files/paper/2021/hash/4ddb5b8d603f88e9de689f3230234b47-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12336-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4ddb5b8d603f88e9de689f3230234b47-Paper.pdf
|
https://openreview.net/forum?id=Ra-2OvXr7UU
|
https://papers.nips.cc/paper_files/paper/2021/file/4ddb5b8d603f88e9de689f3230234b47-Supplemental.pdf
|
We study differentially private stochastic optimization in convex and non-convex settings. For the convex case, we focus on the family of non-smooth generalized linear losses (GLLs). Our algorithm for the $\ell_2$ setting achieves optimal excess population risk in near-linear time, while the best known differentially private algorithms for general convex losses run in super-linear time. Our algorithm for the $\ell_1$ setting has nearly-optimal excess population risk $\tilde{O}\big(\sqrt{\frac{\log{d}}{n}}\big)$, and circumvents the dimension dependent lower bound of \cite{Asi:2021} for general non-smooth convex losses. In the differentially private non-convex setting, we provide several new algorithms for approximating stationary points of the population risk. For the $\ell_1$-case with smooth losses and polyhedral constraint, we provide the first nearly dimension independent rate, $\tilde O\big(\frac{\log^{2/3}{d}}{{n^{1/3}}}\big)$ in linear time. For the constrained $\ell_2$-case, with smooth losses, we obtain a linear-time algorithm with rate $\tilde O\big(\frac{1}{n^{3/10}d^{1/10}}+\big(\frac{d}{n^2}\big)^{1/5}\big)$. Finally, for the $\ell_2$-case we provide the first method for {\em non-smooth weakly convex} stochastic optimization with rate $\tilde O\big(\frac{1}{n^{1/4}}+\big(\frac{d}{n^2}\big)^{1/6}\big)$ which matches the best existing non-private algorithm when $d= O(\sqrt{n})$. We also extend all our results above for the non-convex $\ell_2$ setting to the $\ell_p$ setting, where $1 < p \leq 2$, with only polylogarithmic (in the dimension) overhead in the rates.
| null |
TacticZero: Learning to Prove Theorems from Scratch with Deep Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/4dea382d82666332fb564f2e711cbc71-Abstract.html
|
Minchao Wu, Michael Norrish, Christian Walder, Amir Dezfouli
|
https://papers.nips.cc/paper_files/paper/2021/hash/4dea382d82666332fb564f2e711cbc71-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12337-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4dea382d82666332fb564f2e711cbc71-Paper.pdf
|
https://openreview.net/forum?id=edmYVRkYZv
|
https://papers.nips.cc/paper_files/paper/2021/file/4dea382d82666332fb564f2e711cbc71-Supplemental.zip
|
We propose a novel approach to interactive theorem-proving (ITP) using deep reinforcement learning. The proposed framework is able to learn proof search strategies as well as tactic and arguments prediction in an end-to-end manner. We formulate the process of ITP as a Markov decision process (MDP) in which each state represents a set of potential derivation paths. This structure allows us to introduce a novel backtracking mechanism which enables the agent to efficiently discard (predicted) dead-end derivations and restart the derivation from promising alternatives. We implement the framework in the HOL theorem prover. Experimental results show that the framework using learned search strategies outperforms existing automated theorem provers (i.e., hammers) available in HOL when evaluated on unseen problems. We further elaborate the role of key components of the framework using ablation studies.
| null |
Integrating Tree Path in Transformer for Code Representation
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e0223a87610176ef0d24ef6d2dcde3a-Abstract.html
|
Han Peng, Ge Li, Wenhan Wang, YunFei Zhao, Zhi Jin
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e0223a87610176ef0d24ef6d2dcde3a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12338-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4e0223a87610176ef0d24ef6d2dcde3a-Paper.pdf
|
https://openreview.net/forum?id=70Q_NeHImB3
| null |
Learning distributed representation of source code requires modelling its syntax and semantics. Recent state-of-the-art models leverage highly structured source code representations, such as the syntax trees and paths therein. In this paper, we investigate two representative path encoding methods shown in previous research work and integrate them into the attention module of Transformer. We draw inspiration from the ideas of positional encoding and modify them to incorporate these path encodings. Specifically, we encode both the pairwise path between tokens of source code and the path from the leaf node to the tree root for each token in the syntax tree. We explore the interaction between these two kinds of paths by integrating them into the unified Transformer framework. The detailed empirical study of path encoding methods also leads to our novel state-of-the-art representation model TPTrans, which outperforms strong baselines. Extensive experiments and ablation studies on code summarization across four different languages demonstrate the effectiveness of our approaches. We release our code at \url{https://github.com/AwdHanPeng/TPTrans}.
| null |
Twins: Revisiting the Design of Spatial Attention in Vision Transformers
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e0928de075538c593fbdabb0c5ef2c3-Abstract.html
|
Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, Chunhua Shen
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e0928de075538c593fbdabb0c5ef2c3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12339-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4e0928de075538c593fbdabb0c5ef2c3-Paper.pdf
|
https://openreview.net/forum?id=5kTlVBkzSRx
|
https://papers.nips.cc/paper_files/paper/2021/file/4e0928de075538c593fbdabb0c5ef2c3-Supplemental.pdf
|
Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully devised yet simple spatial attention mechanism performs favorably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures are highly efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks including image-level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks.
| null |
Evaluating State-of-the-Art Classification Models Against Bayes Optimality
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e0ccd2b894f717df5ebc12f4282ee70-Abstract.html
|
Ryan Theisen, Huan Wang, Lav R. Varshney, Caiming Xiong, Richard Socher
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e0ccd2b894f717df5ebc12f4282ee70-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12340-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4e0ccd2b894f717df5ebc12f4282ee70-Paper.pdf
|
https://openreview.net/forum?id=K9WlOVPEpnM
|
https://papers.nips.cc/paper_files/paper/2021/file/4e0ccd2b894f717df5ebc12f4282ee70-Supplemental.pdf
|
Evaluating the inherent difficulty of a given data-driven classification problem is important for establishing absolute benchmarks and evaluating progress in the field. To this end, a natural quantity to consider is the \emph{Bayes error}, which measures the optimal classification error theoretically achievable for a given data distribution. While generally an intractable quantity, we show that we can compute the exact Bayes error of generative models learned using normalizing flows. Our technique relies on a fundamental result, which states that the Bayes error is invariant under invertible transformation. Therefore, we can compute the exact Bayes error of the learned flow models by computing it for Gaussian base distributions, which can be done efficiently using Holmes-Diaconis-Ross integration. Moreover, we show that by varying the temperature of the learned flow models, we can generate synthetic datasets that closely resemble standard benchmark datasets, but with almost any desired Bayes error. We use our approach to conduct a thorough investigation of state-of-the-art classification models, and find that in some --- but not all --- cases, these models are capable of obtaining accuracy very near optimal. Finally, we use our method to evaluate the intrinsic "hardness" of standard benchmark datasets.
| null |
Data-Efficient Instance Generation from Instance Discrimination
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e0d67e54ad6626e957d15b08ae128a6-Abstract.html
|
Ceyuan Yang, Yujun Shen, Yinghao Xu, Bolei Zhou
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e0d67e54ad6626e957d15b08ae128a6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12341-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4e0d67e54ad6626e957d15b08ae128a6-Paper.pdf
|
https://openreview.net/forum?id=9BpjtPMyDQ
|
https://papers.nips.cc/paper_files/paper/2021/file/4e0d67e54ad6626e957d15b08ae128a6-Supplemental.pdf
|
Generative Adversarial Networks (GANs) have significantly advanced image synthesis; however, the synthesis quality drops significantly given a limited amount of training data. To improve the data efficiency of GAN training, prior work typically employs data augmentation to mitigate the overfitting of the discriminator yet still learns the discriminator with a bi-classification ($\textit{i.e.}$, real $\textit{vs.}$ fake) task. In this work, we propose a data-efficient Instance Generation ($\textit{InsGen}$) method based on instance discrimination. Concretely, besides differentiating the real domain from the fake domain, the discriminator is required to distinguish every individual image, no matter whether it comes from the training set or from the generator. In this way, the discriminator can benefit from the infinite synthesized samples for training, alleviating the overfitting problem caused by insufficient training data. A noise perturbation strategy is further introduced to improve its discriminative power. Meanwhile, the learned instance discrimination capability from the discriminator is in turn exploited to encourage the generator for diverse generation. Extensive experiments demonstrate the effectiveness of our method on a variety of datasets and training settings. Noticeably, on the setting of $2K$ training images from the FFHQ dataset, we outperform the state-of-the-art approach with 23.5\% FID improvement.
| null |
Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e246a381baf2ce038b3b0f82c7d6fb4-Abstract.html
|
Dylan Slack, Anna Hilgard, Sameer Singh, Himabindu Lakkaraju
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e246a381baf2ce038b3b0f82c7d6fb4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12342-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4e246a381baf2ce038b3b0f82c7d6fb4-Paper.pdf
|
https://openreview.net/forum?id=rqfq0CYIekd
|
https://papers.nips.cc/paper_files/paper/2021/file/4e246a381baf2ce038b3b0f82c7d6fb4-Supplemental.pdf
|
As black box explanations are increasingly being employed to establish model credibility in high stakes settings, it is important to ensure that these explanations are accurate and reliable. However, prior work demonstrates that explanations generated by state-of-the-art techniques are inconsistent, unstable, and provide very little insight into their correctness and reliability. In addition, these methods are also computationally inefficient, and require significant hyper-parameter tuning. In this paper, we address the aforementioned challenges by developing a novel Bayesian framework for generating local explanations along with their associated uncertainty. We instantiate this framework to obtain Bayesian versions of LIME and KernelSHAP which output credible intervals for the feature importances, capturing the associated uncertainty. The resulting explanations not only enable us to make concrete inferences about their quality (e.g., there is a 95% chance that the feature importance lies within the given range), but are also highly consistent and stable. We carry out a detailed theoretical analysis that leverages the aforementioned uncertainty to estimate how many perturbations to sample, and how to sample for faster convergence. This work makes the first attempt at addressing several critical issues with popular explanation methods in one shot, thereby generating consistent, stable, and reliable explanations with guarantees in a computationally efficient manner. Experimental evaluation with multiple real-world datasets and user studies demonstrate the efficacy of the proposed framework.
| null |
Learning Graph Models for Retrosynthesis Prediction
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e2a6330465c8ffcaa696a5a16639176-Abstract.html
|
Vignesh Ram Somnath, Charlotte Bunne, Connor Coley, Andreas Krause, Regina Barzilay
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e2a6330465c8ffcaa696a5a16639176-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12343-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4e2a6330465c8ffcaa696a5a16639176-Paper.pdf
|
https://openreview.net/forum?id=SnONpXZ_uQ_
| null |
Retrosynthesis prediction is a fundamental problem in organic synthesis, where the task is to identify precursor molecules that can be used to synthesize a target molecule. A key consideration in building neural models for this task is aligning model design with strategies adopted by chemists. Building on this viewpoint, this paper introduces a graph-based approach that capitalizes on the idea that the graph topology of precursor molecules is largely unaltered during a chemical reaction. The model first predicts the set of graph edits transforming the target into incomplete molecules called synthons. Next, the model learns to expand synthons into complete molecules by attaching relevant leaving groups. This decomposition simplifies the architecture, making its predictions more interpretable, and also amenable to manual correction. Our model achieves a top-1 accuracy of 53.7%, outperforming previous template-free and semi-template-based methods.
| null |
Differentiable Equilibrium Computation with Decision Diagrams for Stackelberg Models of Combinatorial Congestion Games
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e4b5fbbbb602b6d35bea8460aa8f8e5-Abstract.html
|
Shinsaku Sakaue, Kengo Nakamura
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e4b5fbbbb602b6d35bea8460aa8f8e5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12344-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4e4b5fbbbb602b6d35bea8460aa8f8e5-Paper.pdf
|
https://openreview.net/forum?id=umuW_b77q9A
|
https://papers.nips.cc/paper_files/paper/2021/file/4e4b5fbbbb602b6d35bea8460aa8f8e5-Supplemental.pdf
|
We address Stackelberg models of combinatorial congestion games (CCGs); we aim to optimize the parameters of CCGs so that the selfish behavior of non-atomic players attains desirable equilibria. This model is essential for designing such social infrastructures as traffic and communication networks. Nevertheless, computational approaches to the model have not been thoroughly studied due to two difficulties: (I) bilevel-programming structures and (II) the combinatorial nature of CCGs. We tackle them by carefully combining (I) the idea of \textit{differentiable} optimization and (II) data structures called \textit{zero-suppressed binary decision diagrams} (ZDDs), which can compactly represent sets of combinatorial strategies. Our algorithm numerically approximates the equilibria of CCGs, which we can differentiate with respect to parameters of CCGs by automatic differentiation. With the resulting derivatives, we can apply gradient-based methods to Stackelberg models of CCGs. Our method is tailored to induce Nesterov's acceleration and can fully utilize the empirical compactness of ZDDs. These technical advantages enable us to deal with CCGs with a vast number of combinatorial strategies. Experiments on real-world network design instances demonstrate the practicality of our method.
| null |
Inverse Optimal Control Adapted to the Noise Characteristics of the Human Sensorimotor System
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e55139e019a58e0084f194f758ffdea-Abstract.html
|
Matthias Schultheis, Dominik Straub, Constantin A. Rothkopf
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e55139e019a58e0084f194f758ffdea-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12345-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4e55139e019a58e0084f194f758ffdea-Paper.pdf
|
https://openreview.net/forum?id=x3RPoH3bCQ-
|
https://papers.nips.cc/paper_files/paper/2021/file/4e55139e019a58e0084f194f758ffdea-Supplemental.pdf
|
Computational level explanations based on optimal feedback control with signal-dependent noise have been able to account for a vast array of phenomena in human sensorimotor behavior. However, commonly a cost function needs to be assumed for a task and the optimality of human behavior is evaluated by comparing observed and predicted trajectories. Here, we introduce inverse optimal control with signal-dependent noise, which allows inferring the cost function from observed behavior. To do so, we formalize the problem as a partially observable Markov decision process and distinguish between the agent’s and the experimenter’s inference problems. Specifically, we derive a probabilistic formulation of the evolution of states and belief states and an approximation to the propagation equation in the linear-quadratic Gaussian problem with signal-dependent noise. We extend the model to the case of partial observability of state variables from the point of view of the experimenter. We show the feasibility of the approach through validation on synthetic data and application to experimental data. Our approach enables recovering the costs and benefits implicit in human sequential sensorimotor behavior, thereby reconciling normative and descriptive approaches in a computational framework.
| null |
Deep Neural Networks as Point Estimates for Deep Gaussian Processes
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e6cd95227cb0c280e99a195be5f6615-Abstract.html
|
Vincent Dutordoir, James Hensman, Mark van der Wilk, Carl Henrik Ek, Zoubin Ghahramani, Nicolas Durrande
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e6cd95227cb0c280e99a195be5f6615-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12346-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4e6cd95227cb0c280e99a195be5f6615-Paper.pdf
|
https://openreview.net/forum?id=svlanLvYsTd
|
https://papers.nips.cc/paper_files/paper/2021/file/4e6cd95227cb0c280e99a195be5f6615-Supplemental.pdf
|
Neural networks and Gaussian processes are complementary in their strengths and weaknesses. Having a better understanding of their relationship comes with the promise to make each method benefit from the strengths of the other. In this work, we establish an equivalence between the forward passes of neural networks and (deep) sparse Gaussian process models. The theory we develop is based on interpreting activation functions as interdomain inducing features through a rigorous analysis of the interplay between activation functions and kernels. This results in models that can either be seen as neural networks with improved uncertainty prediction or deep Gaussian processes with increased prediction accuracy. These claims are supported by experimental results on regression and classification datasets.
| null |
Locality defeats the curse of dimensionality in convolutional teacher-student scenarios
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e8eaf897c638d519710b1691121f8cb-Abstract.html
|
Alessandro Favero, Francesco Cagnetta, Matthieu Wyart
|
https://papers.nips.cc/paper_files/paper/2021/hash/4e8eaf897c638d519710b1691121f8cb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12347-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4e8eaf897c638d519710b1691121f8cb-Paper.pdf
|
https://openreview.net/forum?id=sBBnfOFtPc
|
https://papers.nips.cc/paper_files/paper/2021/file/4e8eaf897c638d519710b1691121f8cb-Supplemental.pdf
|
Convolutional neural networks perform a local and translationally-invariant treatment of the data: quantifying which of these two aspects is central to their success remains a challenge. We study this problem within a teacher-student framework for kernel regression, using 'convolutional' kernels inspired by the neural tangent kernel of simple convolutional architectures of given filter size. Using heuristic methods from physics, we find in the ridgeless case that locality is key in determining the learning curve exponent $\beta$ (that relates the test error $\epsilon_t\sim P^{-\beta}$ to the size of the training set $P$), whereas translational invariance is not. In particular, if the filter size of the teacher $t$ is smaller than that of the student $s$, $\beta$ is a function of $s$ only and does not depend on the input dimension. We confirm our predictions on $\beta$ empirically. We conclude by proving, under a natural universality assumption, that performing kernel regression with a ridge that decreases with the size of the training set leads to similar learning curve exponents to those we obtain in the ridgeless case.
| null |
Causal Identification with Matrix Equations
|
https://papers.nips.cc/paper_files/paper/2021/hash/4ea06fbc83cdd0a06020c35d50e1e89a-Abstract.html
|
Sanghack Lee, Elias Bareinboim
|
https://papers.nips.cc/paper_files/paper/2021/hash/4ea06fbc83cdd0a06020c35d50e1e89a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12348-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4ea06fbc83cdd0a06020c35d50e1e89a-Paper.pdf
|
https://openreview.net/forum?id=wfJCeMS-jH
| null |
Causal effect identification is concerned with determining whether a causal effect is computable from a combination of qualitative assumptions about the underlying system (e.g., a causal graph) and distributions collected from this system. Many identification algorithms exclusively rely on graphical criteria made of a non-trivial combination of probability axioms, do-calculus, and refined c-factorization (e.g., Lee & Bareinboim, 2020). In a sequence of increasingly sophisticated results, it has been shown how proxy variables can be used to identify certain effects that would not be otherwise recoverable in challenging scenarios through solving matrix equations (e.g., Kuroki & Pearl, 2014; Miao et al., 2018). In this paper, we develop a new causal identification algorithm which utilizes both graphical criteria and matrix equations. Specifically, we first characterize the relationships between certain graphically-driven formulae and matrix multiplications. With such characterizations, we broaden the spectrum of proxy variable based identification conditions and further propose novel intermediary criteria based on the pseudoinverse of a matrix. Finally, we devise a causal effect identification algorithm, which accepts as input a collection of marginal, conditional, and interventional distributions, integrating enriched matrix-based criteria into a graphical identification approach.
| null |
Private and Non-private Uniformity Testing for Ranking Data
|
https://papers.nips.cc/paper_files/paper/2021/hash/4eb0194ddf4d6c7a72dca4fd3149e92e-Abstract.html
|
Róbert Busa-Fekete, Dimitris Fotakis, Emmanouil Zampetakis
|
https://papers.nips.cc/paper_files/paper/2021/hash/4eb0194ddf4d6c7a72dca4fd3149e92e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12349-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4eb0194ddf4d6c7a72dca4fd3149e92e-Paper.pdf
|
https://openreview.net/forum?id=lMrwT4C93eT
|
https://papers.nips.cc/paper_files/paper/2021/file/4eb0194ddf4d6c7a72dca4fd3149e92e-Supplemental.zip
|
We study the problem of uniformity testing for statistical data that consists of rankings over $m$ items, where the alternative class is restricted to Mallows models with a single parameter. Testing ranking data is challenging because the domain size is factorial in $m$, so the tester needs to take advantage of some structure of the alternative class. We show that the uniform distribution can be distinguished from a Mallows model with $O(m^{-1/2})$ samples based on simple pairwise statistics, which allows us to test uniformity using only two samples if $m$ is large enough. We also consider uniformity testing under central and local differential privacy (DP) constraints. We present a central DP algorithm that requires $O\left(\max \{ 1/\epsilon_0, 1/\sqrt{m} \} \right)$ samples, where $\epsilon_0$ is the privacy budget parameter. Interestingly, our uniformity testing algorithm is straightforward to apply in the local DP scenario by its nature, since it works with binary statistics extracted from the ranking data. We carry out large-scale experiments, including $m=10000$, to show that these testing algorithms scale very gracefully with the number of items.
| null |
Model-Based Reinforcement Learning via Imagination with Derived Memory
|
https://papers.nips.cc/paper_files/paper/2021/hash/4ebccfb3e317c7789f04f7a558df4537-Abstract.html
|
Yao Mu, Yuzheng Zhuang, Bin Wang, Guangxiang Zhu, Wulong Liu, Jianyu Chen, Ping Luo, Shengbo Li, Chongjie Zhang, Jianye Hao
|
https://papers.nips.cc/paper_files/paper/2021/hash/4ebccfb3e317c7789f04f7a558df4537-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12350-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4ebccfb3e317c7789f04f7a558df4537-Paper.pdf
|
https://openreview.net/forum?id=jeATherHHGj
|
https://papers.nips.cc/paper_files/paper/2021/file/4ebccfb3e317c7789f04f7a558df4537-Supplemental.pdf
|
Model-based reinforcement learning aims to improve the sample efficiency of policy learning by modeling the dynamics of the environment. Recently, the latent dynamics model is further developed to enable fast planning in a compact space. It summarizes the high-dimensional experiences of an agent, which mimics the memory function of humans. Learning policies via imagination with the latent model shows great potential for solving complex tasks. However, only considering memories from the true experiences in the process of imagination could limit its advantages. Inspired by the memory prosthesis proposed by neuroscientists, we present a novel model-based reinforcement learning framework called Imagining with Derived Memory (IDM). It enables the agent to learn policy from enriched diverse imagination with prediction-reliability weight, thus improving sample efficiency and policy robustness. Experiments on various high-dimensional visual control tasks in the DMControl benchmark demonstrate that IDM outperforms previous state-of-the-art methods in terms of policy robustness and further improves the sample efficiency of the model-based method.
| null |
Compositional Transformers for Scene Generation
|
https://papers.nips.cc/paper_files/paper/2021/hash/4eff0720836a198b6174eecf02cbfdbf-Abstract.html
|
Dor Arad Hudson, Larry Zitnick
|
https://papers.nips.cc/paper_files/paper/2021/hash/4eff0720836a198b6174eecf02cbfdbf-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12351-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4eff0720836a198b6174eecf02cbfdbf-Paper.pdf
|
https://openreview.net/forum?id=YQeWoRnwTnE
|
https://papers.nips.cc/paper_files/paper/2021/file/4eff0720836a198b6174eecf02cbfdbf-Supplemental.pdf
|
We introduce the GANformer2 model, an iterative object-oriented transformer, explored for the task of generative modeling. The network incorporates strong and explicit structural priors, to reflect the compositional nature of visual scenes, and synthesizes images through a sequential process. It operates in two stages: a fast and lightweight planning phase, where we draft a high-level scene layout, followed by an attention-based execution phase, where the layout is being refined, evolving into a rich and detailed picture. Our model moves away from conventional black-box GAN architectures that feature a flat and monolithic latent space towards a transparent design that encourages efficiency, controllability and interpretability. We demonstrate GANformer2's strengths and qualities through a careful evaluation over a range of datasets, from multi-object CLEVR scenes to the challenging COCO images, showing it successfully achieves state-of-the-art performance in terms of visual quality, diversity and consistency. Further experiments demonstrate the model's disentanglement and provide a deeper insight into its generative process, as it proceeds step-by-step from a rough initial sketch, to a detailed layout that accounts for objects' depths and dependencies, and up to the final high-resolution depiction of vibrant and intricate real-world scenes. See https://github.com/dorarad/gansformer for model implementation.
| null |
An Exponential Lower Bound for Linearly Realizable MDP with Constant Suboptimality Gap
|
https://papers.nips.cc/paper_files/paper/2021/hash/4f00921114932db3f8662a41b44ee68f-Abstract.html
|
Yuanhao Wang, Ruosong Wang, Sham Kakade
|
https://papers.nips.cc/paper_files/paper/2021/hash/4f00921114932db3f8662a41b44ee68f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12352-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4f00921114932db3f8662a41b44ee68f-Paper.pdf
|
https://openreview.net/forum?id=WnJXcebN7hX
|
https://papers.nips.cc/paper_files/paper/2021/file/4f00921114932db3f8662a41b44ee68f-Supplemental.pdf
|
A fundamental question in the theory of reinforcement learning is: suppose the optimal $Q$-function lies in the linear span of a given $d$ dimensional feature mapping, is sample-efficient reinforcement learning (RL) possible? The recent and remarkable result of Weisz et al. (2020) resolves this question in the negative, providing an exponential (in $d$) sample size lower bound, which holds even if the agent has access to a generative model of the environment. One may hope that such a lower bound can be circumvented with an even stronger assumption that there is a \emph{constant gap} between the optimal $Q$-value of the best action and that of the second-best action (for all states); indeed, the construction in Weisz et al. (2020) relies on having an exponentially small gap. This work resolves this subsequent question, showing that an exponential sample complexity lower bound still holds even if a constant gap is assumed. Perhaps surprisingly, this result implies an exponential separation between the online RL setting and the generative model setting, where sample-efficient RL is in fact possible in the latter setting with a constant gap. Complementing our negative hardness result, we give two positive results showing that provably sample-efficient RL is possible either under an additional low-variance assumption or under a novel hypercontractivity assumption.
| null |
Combating Noise: Semi-supervised Learning by Region Uncertainty Quantification
|
https://papers.nips.cc/paper_files/paper/2021/hash/4f16c818875d9fcb6867c7bdc89be7eb-Abstract.html
|
Zhenyu Wang, Ya-Li Li, Ye Guo, Shengjin Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/4f16c818875d9fcb6867c7bdc89be7eb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12353-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4f16c818875d9fcb6867c7bdc89be7eb-Paper.pdf
|
https://openreview.net/forum?id=m7XHyicfGTq
|
https://papers.nips.cc/paper_files/paper/2021/file/4f16c818875d9fcb6867c7bdc89be7eb-Supplemental.pdf
|
Semi-supervised learning aims to leverage a large amount of unlabeled data for performance boosting. Existing works primarily focus on image classification. In this paper, we delve into semi-supervised learning for object detection, where labeled data are more labor-intensive to collect. Current methods are easily distracted by noisy regions generated by pseudo labels. To combat the noisy labeling, we propose noise-resistant semi-supervised learning by quantifying the region uncertainty. We first investigate the adverse effects brought by different forms of noise associated with pseudo labels. Then we propose to quantify the uncertainty of regions by identifying the noise-resistant properties of regions over different strengths. By importing the region uncertainty quantification and promoting multi-peak probability distribution output, we introduce uncertainty into training and further achieve noise-resistant learning. Experiments on both PASCAL VOC and MS COCO demonstrate the extraordinary performance of our method.
| null |
Reducing the Covariate Shift by Mirror Samples in Cross Domain Alignment
|
https://papers.nips.cc/paper_files/paper/2021/hash/4f284803bd0966cc24fa8683a34afc6e-Abstract.html
|
Yin Zhao, minquan wang, Longjun Cai
|
https://papers.nips.cc/paper_files/paper/2021/hash/4f284803bd0966cc24fa8683a34afc6e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12354-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4f284803bd0966cc24fa8683a34afc6e-Paper.pdf
|
https://openreview.net/forum?id=Ur2B8gSfZm3
|
https://papers.nips.cc/paper_files/paper/2021/file/4f284803bd0966cc24fa8683a34afc6e-Supplemental.zip
|
Eliminating the covariate shift across domains is one of the common methods to deal with the issue of domain shift in visual unsupervised domain adaptation. However, current alignment methods, especially the prototype based or sample-level based methods, neglect the structural properties of the underlying distribution and even break the condition of covariate shift. To relieve the limitations and conflicts, we introduce a novel concept named (virtual) mirror, which represents the equivalent sample in another domain. The equivalent sample pairs, named mirror pairs, reflect the natural correspondence of the empirical distributions. Then a mirror loss, which aligns the mirror pairs across domains, is constructed to enhance the alignment of the domains. The proposed method does not distort the internal structure of the underlying distribution. We also provide theoretical proof that the mirror samples and mirror loss have better asymptotic properties in reducing the domain shift. By applying the virtual mirror and mirror loss to the generic unsupervised domain adaptation model, we achieved consistently superior performance on several mainstream benchmarks.
| null |
Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/4f3d7d38d24b740c95da2b03dc3a2333-Abstract.html
|
Robin Winter, Frank Noe, Djork-Arné Clevert
|
https://papers.nips.cc/paper_files/paper/2021/hash/4f3d7d38d24b740c95da2b03dc3a2333-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12355-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4f3d7d38d24b740c95da2b03dc3a2333-Paper.pdf
|
https://openreview.net/forum?id=pTmYjQadg9
|
https://papers.nips.cc/paper_files/paper/2021/file/4f3d7d38d24b740c95da2b03dc3a2333-Supplemental.pdf
|
Recently, there has been great success in applying deep neural networks on graph structured data. Most work, however, focuses on either node- or graph-level supervised learning, such as node, link or graph classification or node-level unsupervised learning (e.g. node clustering). Despite its wide range of possible applications, graph-level unsupervised learning has not received much attention yet. This might be mainly attributed to the high representation complexity of graphs, which can be represented by $n!$ equivalent adjacency matrices, where $n$ is the number of nodes. In this work we address this issue by proposing a permutation-invariant variational autoencoder for graph structured data. Our proposed model indirectly learns to match the node ordering of input and output graph, without imposing a particular node ordering or performing expensive graph matching. We demonstrate the effectiveness of our proposed model for graph reconstruction, generation and interpolation and evaluate the expressive power of extracted representations for downstream graph-level classification and regression.
| null |
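The representation issue raised in the abstract above (a graph with $n$ nodes admits $n!$ equivalent adjacency matrices) can be seen in a few lines of NumPy. The sketch below only illustrates the invariance a graph-level encoder must respect, namely that relabeling the nodes conjugates the adjacency matrix by a permutation matrix; it is not part of the paper's model, and the example graph is made up.

```python
# Minimal illustration (not the paper's model): the same undirected graph under
# two node orderings yields different adjacency matrices related by A2 = P A1 P^T.
import numpy as np

# A triangle (nodes 0-1-2) plus a pendant node 3 attached to node 2.
A1 = np.array([[0, 1, 1, 0],
               [1, 0, 1, 0],
               [1, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
perm = rng.permutation(4)          # a random relabeling of the nodes
P = np.eye(4)[perm]                # corresponding permutation matrix

A2 = P @ A1 @ P.T                  # adjacency of the identical graph, relabeled
print(perm)
print(A2)

# Any graph-level representation f should satisfy f(A1) == f(A2). Simple
# permutation-invariant statistics do, e.g. the sorted degree sequence:
print(np.sort(A1.sum(axis=0)), np.sort(A2.sum(axis=0)))
```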
Causal Abstractions of Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/4f5c422f4d49a5a807eda27434231040-Abstract.html
|
Atticus Geiger, Hanson Lu, Thomas Icard, Christopher Potts
|
https://papers.nips.cc/paper_files/paper/2021/hash/4f5c422f4d49a5a807eda27434231040-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12356-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4f5c422f4d49a5a807eda27434231040-Paper.pdf
|
https://openreview.net/forum?id=RmuXDtjDhG
|
https://papers.nips.cc/paper_files/paper/2021/file/4f5c422f4d49a5a807eda27434231040-Supplemental.pdf
|
Structural analysis methods (e.g., probing and feature attribution) are increasingly important tools for neural network analysis. We propose a new structural analysis method grounded in a formal theory of causal abstraction that provides rich characterizations of model-internal representations and their roles in input/output behavior. In this method, neural representations are aligned with variables in interpretable causal models, and then interchange interventions are used to experimentally verify that the neural representations have the causal properties of their aligned variables. We apply this method in a case study to analyze neural models trained on Multiply Quantified Natural Language Inference (MQNLI) corpus, a highly complex NLI dataset that was constructed with a tree-structured natural logic causal model. We discover that a BERT-based model with state-of-the-art performance successfully realizes parts of the natural logic model’s causal structure, whereas a simpler baseline model fails to show any such structure, demonstrating that neural representations encode the compositional structure of MQNLI examples.
| null |
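The abstract above rests on "interchange interventions": taking the value a hidden representation computes on one (source) input and splicing it into the forward pass on another (base) input. The toy sketch below shows that operation on a hypothetical two-layer NumPy network with random weights; it is not the paper's BERT analysis or the MQNLI task, just the mechanics of the intervention.

```python
# Toy interchange intervention on a hypothetical 2-layer network: record part of
# the hidden layer on a "source" input and patch it into the "base" input's pass.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # input -> hidden
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)   # hidden -> output

def forward(x, patch=None):
    """patch = (unit_indices, values) overrides part of the hidden layer."""
    h = np.tanh(W1 @ x + b1)
    if patch is not None:
        idx, vals = patch
        h = h.copy()
        h[idx] = vals
    return W2 @ h + b2

base   = np.array([1.0, 0.0, -1.0])
source = np.array([-1.0, 2.0, 0.5])

# Record the activations of hidden units {0, 1} on the source input ...
h_source = np.tanh(W1 @ source + b1)
units = [0, 1]

# ... and splice them into the base input's computation.
y_base       = forward(base)
y_intervened = forward(base, patch=(units, h_source[units]))
print(y_base, y_intervened)
# If units {0,1} causally encode some high-level variable, y_intervened should
# match what an interpretable causal model predicts when that variable takes
# its source value -- this is the check the paper's method performs at scale.
```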
Conic Blackwell Algorithm: Parameter-Free Convex-Concave Saddle-Point Solving
|
https://papers.nips.cc/paper_files/paper/2021/hash/4f87658ef0de194413056248a00ce009-Abstract.html
|
Julien Grand-Clément, Christian Kroer
|
https://papers.nips.cc/paper_files/paper/2021/hash/4f87658ef0de194413056248a00ce009-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12357-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4f87658ef0de194413056248a00ce009-Paper.pdf
|
https://openreview.net/forum?id=5BVsfC0goqI
|
https://papers.nips.cc/paper_files/paper/2021/file/4f87658ef0de194413056248a00ce009-Supplemental.pdf
|
We develop new parameter-free and scale-free algorithms for solving convex-concave saddle-point problems. Our results are based on a new simple regret minimizer, the Conic Blackwell Algorithm$^+$ (CBA$^+$), which attains $O(1/\sqrt{T})$ average regret. Intuitively, our approach generalizes to other decision sets of interest ideas from the Counterfactual Regret minimization (CFR$^+$) algorithm, which has very strong practical performance for solving sequential games on simplexes. We show how to implement CBA$^+$ for the simplex, $\ell_{p}$ norm balls, and ellipsoidal confidence regions in the simplex, and we present numerical experiments for solving matrix games and distributionally robust optimization problems. Our empirical results show that CBA$^+$ is a simple algorithm that outperforms state-of-the-art methods on synthetic data and real data instances, without the need for any choice of step sizes or other algorithmic parameters.
| null |
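For orientation, the abstract above positions CBA$^+$ as a generalization, beyond the simplex, of CFR$^+$-style regret minimization. The sketch below runs the simplex special case it references, regret matching$^+$ self-play on a small zero-sum matrix game, so the $O(1/\sqrt{T})$ average-regret behaviour can be seen concretely. It is not an implementation of CBA$^+$; the game, the uniform averaging, and the simultaneous updates are illustrative choices.

```python
# Regret matching+ self-play on a zero-sum matrix game -- the simplex special case
# the abstract uses as a reference point, not an implementation of CBA+ itself.
import numpy as np

A = np.array([[ 0.0, -1.0,  1.0],   # row player's payoff (rock-paper-scissors)
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])

def normalize(r):
    s = r.sum()
    return r / s if s > 0 else np.full_like(r, 1.0 / len(r))

Rx = np.zeros(3)                    # cumulative positive regrets, row player
Ry = np.zeros(3)                    # cumulative positive regrets, column player
x_avg, y_avg = np.zeros(3), np.zeros(3)

T = 20000
for t in range(T):
    x, y = normalize(Rx), normalize(Ry)
    ux = A @ y                      # row player's per-action values against y
    uy = -A.T @ x                   # column player's per-action values against x
    Rx = np.maximum(Rx + ux - x @ ux, 0.0)   # the "+" truncation of RM+/CFR+
    Ry = np.maximum(Ry + uy - y @ uy, 0.0)
    x_avg += x
    y_avg += y

x_avg /= T
y_avg /= T
# The duality gap (exploitability) of the average strategies shrinks roughly as O(1/sqrt(T)).
gap = (A @ y_avg).max() - (A.T @ x_avg).min()
print(x_avg, y_avg, gap)
```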
3DP3: 3D Scene Perception via Probabilistic Programming
|
https://papers.nips.cc/paper_files/paper/2021/hash/4fc66104f8ada6257fa55f29a2a567c7-Abstract.html
|
Nishad Gothoskar, Marco Cusumano-Towner, Ben Zinberg, Matin Ghavamizadeh, Falk Pollok, Austin Garrett, Josh Tenenbaum, Dan Gutfreund, Vikash Mansinghka
|
https://papers.nips.cc/paper_files/paper/2021/hash/4fc66104f8ada6257fa55f29a2a567c7-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12358-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4fc66104f8ada6257fa55f29a2a567c7-Paper.pdf
|
https://openreview.net/forum?id=ZBhZDNaiww
|
https://papers.nips.cc/paper_files/paper/2021/file/4fc66104f8ada6257fa55f29a2a567c7-Supplemental.pdf
|
We present 3DP3, a framework for inverse graphics that uses inference in a structured generative model of objects, scenes, and images. 3DP3 uses (i) voxel models to represent the 3D shape of objects, (ii) hierarchical scene graphs to decompose scenes into objects and the contacts between them, and (iii) depth image likelihoods based on real-time graphics. Given an observed RGB-D image, 3DP3's inference algorithm infers the underlying latent 3D scene, including the object poses and a parsimonious joint parametrization of these poses, using fast bottom-up pose proposals, novel involutive MCMC updates of the scene graph structure, and, optionally, neural object detectors and pose estimators. We show that 3DP3 enables scene understanding that is aware of 3D shape, occlusion, and contact structure. Our results demonstrate that 3DP3 is more accurate at 6DoF object pose estimation from real images than deep learning baselines and shows better generalization to challenging scenes with novel viewpoints, contact, and partial observability.
| null |
Novel Upper Bounds for the Constrained Most Probable Explanation Task
|
https://papers.nips.cc/paper_files/paper/2021/hash/4fc7e9c4df30aafd8b7e1ab324f27712-Abstract.html
|
Tahrima Rahman, Sara Rouhani, Vibhav Gogate
|
https://papers.nips.cc/paper_files/paper/2021/hash/4fc7e9c4df30aafd8b7e1ab324f27712-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12359-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4fc7e9c4df30aafd8b7e1ab324f27712-Paper.pdf
|
https://openreview.net/forum?id=-_D-ss8su3
|
https://papers.nips.cc/paper_files/paper/2021/file/4fc7e9c4df30aafd8b7e1ab324f27712-Supplemental.pdf
|
We propose several schemes for upper bounding the optimal value of the constrained most probable explanation (CMPE) problem. Given a set of discrete random variables, two probabilistic graphical models defined over them and a real number $q$, this problem involves finding an assignment of values to all the variables such that the probability of the assignment is maximized according to the first model and is bounded by $q$ w.r.t. the second model. In prior work, it was shown that CMPE is a unifying problem with several applications and special cases including the nearest assignment problem, the decision preserving most probable explanation task and robust estimation. It was also shown that CMPE is NP-hard even on tractable models such as bounded treewidth networks and is hard for integer linear programming methods because it includes a dense global constraint. The main idea in our approach is to simplify the problem via Lagrange relaxation and decomposition to yield either a knapsack problem or the unconstrained most probable explanation (MPE) problem, and then solving the two problems, respectively using specialized knapsack algorithms and mini-buckets based upper bounding schemes. We evaluate our proposed scheme along several dimensions including quality of the bounds and computation time required on various benchmark graphical models and how it can be used to find heuristic, near-optimal feasible solutions in an example application pertaining to robust estimation and adversarial attacks on classifiers.
| null |
Why Spectral Normalization Stabilizes GANs: Analysis and Improvements
|
https://papers.nips.cc/paper_files/paper/2021/hash/4ffb0d2ba92f664c2281970110a2e071-Abstract.html
|
Zinan Lin, Vyas Sekar, Giulia Fanti
|
https://papers.nips.cc/paper_files/paper/2021/hash/4ffb0d2ba92f664c2281970110a2e071-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12360-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4ffb0d2ba92f664c2281970110a2e071-Paper.pdf
|
https://openreview.net/forum?id=MLT9wFYMlJ9
|
https://papers.nips.cc/paper_files/paper/2021/file/4ffb0d2ba92f664c2281970110a2e071-Supplemental.pdf
|
Spectral normalization (SN) is a widely-used technique for improving the stability and sample quality of Generative Adversarial Networks (GANs). However, current understanding of SN's efficacy is limited. In this work, we show that SN controls two important failure modes of GAN training: exploding and vanishing gradients. Our proofs illustrate a (perhaps unintentional) connection with the successful LeCun initialization. This connection helps to explain why the most popular implementation of SN for GANs requires no hyper-parameter tuning, whereas stricter implementations of SN have poor empirical performance out-of-the-box. Unlike LeCun initialization which only controls gradient vanishing at the beginning of training, SN preserves this property throughout training. Building on this theoretical understanding, we propose a new spectral normalization technique: Bidirectional Scaled Spectral Normalization (BSSN), which incorporates insights from later improvements to LeCun initialization: Xavier initialization and Kaiming initialization. Theoretically, we show that BSSN gives better gradient control than SN. Empirically, we demonstrate that it outperforms SN in sample quality and training stability on several benchmark datasets.
| null |
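As background for the abstract above, the sketch below shows what standard spectral normalization computes: a power-iteration estimate of a weight matrix's largest singular value, used to rescale the matrix to spectral norm roughly one. It reflects the common SN recipe, not the paper's proposed BSSN variant, and the matrix sizes are arbitrary.

```python
# Minimal power-iteration spectral normalization of a weight matrix, as commonly
# used in GAN discriminators; this shows what SN computes, not the paper's BSSN.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))
u = rng.normal(size=64)             # persistent estimate of the left singular vector

def spectral_normalize(W, u, n_iters=1, eps=1e-12):
    for _ in range(n_iters):        # usually one power-iteration step per training step
        v = W.T @ u
        v /= (np.linalg.norm(v) + eps)
        u = W @ v
        u /= (np.linalg.norm(u) + eps)
    sigma = u @ W @ v               # estimate of the largest singular value
    return W / sigma, u

W_sn, u = spectral_normalize(W, u, n_iters=50)
print(np.linalg.svd(W, compute_uv=False)[0])     # original spectral norm
print(np.linalg.svd(W_sn, compute_uv=False)[0])  # approximately 1 after normalization
```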
$(\textrm{Implicit})^2$: Implicit Layers for Implicit Representations
|
https://papers.nips.cc/paper_files/paper/2021/hash/4ffbd5c8221d7c147f8363ccdc9a2a37-Abstract.html
|
Zhichun Huang, Shaojie Bai, J. Zico Kolter
|
https://papers.nips.cc/paper_files/paper/2021/hash/4ffbd5c8221d7c147f8363ccdc9a2a37-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12361-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/4ffbd5c8221d7c147f8363ccdc9a2a37-Paper.pdf
|
https://openreview.net/forum?id=AcoMwAU5c0s
|
https://papers.nips.cc/paper_files/paper/2021/file/4ffbd5c8221d7c147f8363ccdc9a2a37-Supplemental.pdf
|
Recent research in deep learning has investigated two very different forms of ''implicitness'': implicit representations model high-frequency data such as images or 3D shapes directly via a low-dimensional neural network (often using e.g., sinusoidal bases or nonlinearities); implicit layers, in contrast, refer to techniques where the forward pass of a network is computed via non-linear dynamical systems, such as fixed-point or differential equation solutions, with the backward pass computed via the implicit function theorem. In this work, we demonstrate that these two seemingly orthogonal concepts are remarkably well-suited for each other. In particular, we show that by exploiting a fixed-point implicit layer to model implicit representations, we can substantially improve upon the performance of the conventional explicit-layer-based approach. Additionally, as implicit representation networks are typically trained in large-batch settings, we propose to leverage the property of implicit layers to amortize the cost of fixed-point forward/backward passes over training steps -- thereby addressing one of the primary challenges with implicit layers (that many iterations are required for the black-box fixed-point solvers). We empirically evaluated our method on learning multiple implicit representations for images, videos and audio, showing that our $(\textrm{Implicit})^2$ approach substantially improves upon existing models while being both faster to train and much more memory efficient.
| null |
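A minimal picture of the "implicit layer" half of the abstract above: the forward pass solves a fixed-point equation $z = \tanh(Wz + Ux + b)$ by iteration. The sketch below performs only that forward solve on a toy contraction (the weights are rescaled to spectral norm 0.9, an assumption made so the iteration provably converges); the implicit-function-theorem backward pass and the paper's actual architecture are not shown.

```python
# Toy fixed-point layer: the forward pass iterates z <- tanh(W z + U x + b) to a
# fixed point. A full implicit layer would differentiate through the fixed point
# via the implicit function theorem rather than unrolling the iterations.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 2, 16                   # e.g. x could be a pixel-coordinate pair
W = rng.normal(size=(d_hid, d_hid))
W *= 0.9 / np.linalg.svd(W, compute_uv=False)[0]   # spectral norm 0.9 -> contraction
U = rng.normal(size=(d_hid, d_in))
b = rng.normal(size=d_hid)

def fixed_point_layer(x, tol=1e-8, max_iter=500):
    z = np.zeros(d_hid)
    for it in range(max_iter):
        z_new = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_new - z) < tol:
            return z_new, it + 1
        z = z_new
    return z, max_iter

x = np.array([0.3, -0.7])
z_star, iters = fixed_point_layer(x)
residual = np.linalg.norm(z_star - np.tanh(W @ z_star + U @ x + b))
print(iters, residual)                # small residual: z_star solves the equation
```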
Mean-based Best Arm Identification in Stochastic Bandits under Reward Contamination
|
https://papers.nips.cc/paper_files/paper/2021/hash/500ee9106e0e4d8f769fadfdf9f2837e-Abstract.html
|
Arpan Mukherjee, Ali Tajer, Pin-Yu Chen, Payel Das
|
https://papers.nips.cc/paper_files/paper/2021/hash/500ee9106e0e4d8f769fadfdf9f2837e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12362-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/500ee9106e0e4d8f769fadfdf9f2837e-Paper.pdf
|
https://openreview.net/forum?id=LKntyz8tp1S
|
https://papers.nips.cc/paper_files/paper/2021/file/500ee9106e0e4d8f769fadfdf9f2837e-Supplemental.zip
|
This paper investigates the problem of best arm identification in {\sl contaminated} stochastic multi-arm bandits. In this setting, the rewards obtained from any arm are replaced by samples from an adversarial model with probability $\varepsilon$. A fixed confidence (infinite-horizon) setting is considered, where the goal of the learner is to identify the arm with the largest mean. Owing to the adversarial contamination of the rewards, each arm's mean is only partially identifiable. This paper proposes two algorithms, a gap-based algorithm and one based on the successive elimination, for best arm identification in sub-Gaussian bandits. These algorithms involve mean estimates that achieve the optimal error guarantee on the deviation of the true mean from the estimate asymptotically. Furthermore, these algorithms asymptotically achieve the optimal sample complexity. Specifically, for the gap-based algorithm, the sample complexity is asymptotically optimal up to constant factors, while for the successive elimination-based algorithm, it is optimal up to logarithmic factors. Finally, numerical experiments are provided to illustrate the gains of the algorithms compared to the existing baselines.
| null |
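The algorithms in the abstract above depend on mean estimates that remain accurate when a fraction $\varepsilon$ of rewards is adversarially replaced. The sketch below contrasts the empirical mean with a generic trimmed mean on $\varepsilon$-contaminated samples; the trimmed mean is used here purely as an illustration of robust mean estimation and is not claimed to be the estimator analyzed in the paper.

```python
# Robust mean estimation under epsilon-contamination -- illustrative only; the
# trimmed mean below is a generic robust estimator, not necessarily the paper's.
import numpy as np

rng = np.random.default_rng(0)
eps, n, true_mean = 0.05, 2000, 0.3

clean = rng.normal(loc=true_mean, scale=1.0, size=n)
outliers = rng.normal(loc=50.0, scale=1.0, size=n)        # adversarial reward model
contaminated = np.where(rng.random(n) < eps, outliers, clean)

def trimmed_mean(x, trim):
    """Drop the `trim` fraction of smallest and largest samples, then average."""
    k = int(np.ceil(trim * len(x)))
    xs = np.sort(x)
    return xs[k:len(x) - k].mean()

print("empirical mean :", contaminated.mean())            # dragged toward the outliers
print("trimmed mean   :", trimmed_mean(contaminated, 2 * eps))
print("true mean      :", true_mean)
```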
MADE: Exploration via Maximizing Deviation from Explored Regions
|
https://papers.nips.cc/paper_files/paper/2021/hash/5011bf6d8a37692913fce3a15a51f070-Abstract.html
|
Tianjun Zhang, Paria Rashidinejad, Jiantao Jiao, Yuandong Tian, Joseph E. Gonzalez, Stuart Russell
|
https://papers.nips.cc/paper_files/paper/2021/hash/5011bf6d8a37692913fce3a15a51f070-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12363-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5011bf6d8a37692913fce3a15a51f070-Paper.pdf
|
https://openreview.net/forum?id=DTVfEJIL3DB
|
https://papers.nips.cc/paper_files/paper/2021/file/5011bf6d8a37692913fce3a15a51f070-Supplemental.pdf
|
In online reinforcement learning (RL), efficient exploration remains particularly challenging in high-dimensional environments with sparse rewards. In low-dimensional environments, where tabular parameterization is possible, count-based upper confidence bound (UCB) exploration methods achieve minimax near-optimal rates. However, it remains unclear how to efficiently implement UCB in realistic RL tasks that involve non-linear function approximation. To address this, we propose a new exploration approach via maximizing the deviation of the occupancy of the next policy from the explored regions. We add this term as an adaptive regularizer to the standard RL objective to balance exploration vs. exploitation. We pair the new objective with a provably convergent algorithm, giving rise to a new intrinsic reward that adjusts existing bonuses. The proposed intrinsic reward is easy to implement and combine with other existing RL algorithms to conduct exploration. As a proof of concept, we evaluate the new intrinsic reward on tabular examples across a variety of model-based and model-free algorithms, showing improvements over count-only exploration strategies. When tested on navigation and locomotion tasks from MiniGrid and DeepMind Control Suite benchmarks, our approach significantly improves sample efficiency over state-of-the-art methods.
| null |
Variational Automatic Curriculum Learning for Sparse-Reward Cooperative Multi-Agent Problems
|
https://papers.nips.cc/paper_files/paper/2021/hash/503e7dbbd6217b9a591f3322f39b5a6c-Abstract.html
|
Jiayu Chen, Yuanxin Zhang, Yuanfan Xu, Huimin Ma, Huazhong Yang, Jiaming Song, Yu Wang, Yi Wu
|
https://papers.nips.cc/paper_files/paper/2021/hash/503e7dbbd6217b9a591f3322f39b5a6c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12364-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/503e7dbbd6217b9a591f3322f39b5a6c-Paper.pdf
|
https://openreview.net/forum?id=zlhpIYub2d0
|
https://papers.nips.cc/paper_files/paper/2021/file/503e7dbbd6217b9a591f3322f39b5a6c-Supplemental.pdf
|
We introduce an automatic curriculum algorithm, Variational Automatic Curriculum Learning (VACL), for solving challenging goal-conditioned cooperative multi-agent reinforcement learning problems. We motivate our curriculum learning paradigm through a variational perspective, where the learning objective can be decomposed into two terms: task learning on the current curriculum, and curriculum update to a new task distribution. Local optimization over the second term suggests that the curriculum should gradually expand the training tasks from easy to hard. Our VACL algorithm implements this variational paradigm with two practical components, task expansion and entity curriculum, which produces a series of training tasks over both the task configurations as well as the number of entities in the task. Experiment results show that VACL solves a collection of sparse-reward problems with a large number of agents. Particularly, using a single desktop machine, VACL achieves 98% coverage rate with 100 agents in the simple-spread benchmark and reproduces the ramp-use behavior originally shown in OpenAI’s hide-and-seek project.
| null |
Align before Fuse: Vision and Language Representation Learning with Momentum Distillation
|
https://papers.nips.cc/paper_files/paper/2021/hash/505259756244493872b7709a8a01b536-Abstract.html
|
Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, Steven Chu Hong Hoi
|
https://papers.nips.cc/paper_files/paper/2021/hash/505259756244493872b7709a8a01b536-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12365-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/505259756244493872b7709a8a01b536-Paper.pdf
|
https://openreview.net/forum?id=OJLaKwiXSbx
|
https://papers.nips.cc/paper_files/paper/2021/file/505259756244493872b7709a8a01b536-Supplemental.pdf
|
Large-scale vision and language representation learning has shown promising improvements on various vision-language tasks. Most existing methods employ a transformer-based multimodal encoder to jointly model visual tokens (region-based image features) and word tokens. Because the visual tokens and word tokens are unaligned, it is challenging for the multimodal encoder to learn image-text interactions. In this paper, we introduce a contrastive loss to ALign the image and text representations BEfore Fusing (ALBEF) them through cross-modal attention, which enables more grounded vision and language representation learning. Unlike most existing methods, our method does not require bounding box annotations nor high-resolution images. In order to improve learning from noisy web data, we propose momentum distillation, a self-training method which learns from pseudo-targets produced by a momentum model. We provide a theoretical analysis of ALBEF from a mutual information maximization perspective, showing that different training tasks can be interpreted as different ways to generate views for an image-text pair. ALBEF achieves state-of-the-art performance on multiple downstream vision-language tasks. On image-text retrieval, ALBEF outperforms methods that are pre-trained on orders of magnitude larger datasets. On VQA and NLVR$^2$, ALBEF achieves absolute improvements of 2.37% and 3.84% compared to the state-of-the-art, while enjoying faster inference speed. Code and models are available at https://github.com/salesforce/ALBEF.
| null |
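The "align before fuse" idea in the abstract above is an image-text contrastive objective applied to unimodal embeddings before cross-modal fusion. The sketch below computes a generic InfoNCE-style loss on a batch of randomly generated paired image and text embeddings; the temperature, batch size, and features are placeholders, and ALBEF's momentum distillation and fusion encoder are not represented.

```python
# Generic image-text contrastive (InfoNCE-style) loss on a batch of paired
# embeddings -- the kind of alignment objective applied before fusion. Toy NumPy
# sketch with random features; momentum distillation is not shown.
import numpy as np

rng = np.random.default_rng(0)
batch, dim, tau = 8, 32, 0.07

def l2_normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

img = l2_normalize(rng.normal(size=(batch, dim)))    # image embeddings
txt = l2_normalize(rng.normal(size=(batch, dim)))    # matching text embeddings

def log_softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

sim = img @ txt.T / tau                               # pairwise similarities
labels = np.arange(batch)                             # i-th image pairs with i-th text
loss_i2t = -log_softmax(sim)[labels, labels].mean()   # image-to-text direction
loss_t2i = -log_softmax(sim.T)[labels, labels].mean() # text-to-image direction
print(0.5 * (loss_i2t + loss_t2i))
```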
Variational Model Inversion Attacks
|
https://papers.nips.cc/paper_files/paper/2021/hash/50a074e6a8da4662ae0a29edde722179-Abstract.html
|
Kuan-Chieh Wang, Yan Fu, Ke Li, Ashish Khisti, Richard Zemel, Alireza Makhzani
|
https://papers.nips.cc/paper_files/paper/2021/hash/50a074e6a8da4662ae0a29edde722179-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12366-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/50a074e6a8da4662ae0a29edde722179-Paper.pdf
|
https://openreview.net/forum?id=c0O9vBVSvIl
|
https://papers.nips.cc/paper_files/paper/2021/file/50a074e6a8da4662ae0a29edde722179-Supplemental.pdf
|
Given the ubiquity of deep neural networks, it is important that these models do not reveal information about sensitive data that they have been trained on. In model inversion attacks, a malicious user attempts to recover the private dataset used to train a supervised neural network. A successful model inversion attack should generate realistic and diverse samples that accurately describe each of the classes in the private dataset. In this work, we provide a probabilistic interpretation of model inversion attacks, and formulate a variational objective that accounts for both diversity and accuracy. In order to optimize this variational objective, we choose a variational family defined in the code space of a deep generative model, trained on a public auxiliary dataset that shares some structural similarity with the target dataset. Empirically, our method substantially improves performance in terms of target attack accuracy, sample realism, and diversity on datasets of faces and chest X-ray images.
| null |
Graph Neural Networks with Adaptive Residual
|
https://papers.nips.cc/paper_files/paper/2021/hash/50abc3e730e36b387ca8e02c26dc0a22-Abstract.html
|
Xiaorui Liu, Jiayuan Ding, Wei Jin, Han Xu, Yao Ma, Zitao Liu, Jiliang Tang
|
https://papers.nips.cc/paper_files/paper/2021/hash/50abc3e730e36b387ca8e02c26dc0a22-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12367-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/50abc3e730e36b387ca8e02c26dc0a22-Paper.pdf
|
https://openreview.net/forum?id=hfkER_KJiNw
|
https://papers.nips.cc/paper_files/paper/2021/file/50abc3e730e36b387ca8e02c26dc0a22-Supplemental.pdf
|
Graph neural networks (GNNs) have shown the power in graph representation learning for numerous tasks. In this work, we discover an interesting phenomenon that although residual connections in the message passing of GNNs help improve the performance, they immensely amplify GNNs' vulnerability against abnormal node features. This is undesirable because in real-world applications, node features in graphs could often be abnormal such as being naturally noisy or adversarially manipulated. We analyze possible reasons to understand this phenomenon and aim to design GNNs with stronger resilience to abnormal features. Our understandings motivate us to propose and derive a simple, efficient, interpretable, and adaptive message passing scheme, leading to a novel GNN with Adaptive Residual, AirGNN. Extensive experiments under various abnormal feature scenarios demonstrate the effectiveness of the proposed algorithm.
| null |
Efficient Active Learning for Gaussian Process Classification by Error Reduction
|
https://papers.nips.cc/paper_files/paper/2021/hash/50d2e70cdf7dd05be85e1b8df3f8ced4-Abstract.html
|
Guang Zhao, Edward Dougherty, Byung-Jun Yoon, Francis Alexander, Xiaoning Qian
|
https://papers.nips.cc/paper_files/paper/2021/hash/50d2e70cdf7dd05be85e1b8df3f8ced4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12368-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/50d2e70cdf7dd05be85e1b8df3f8ced4-Paper.pdf
|
https://openreview.net/forum?id=B9NOIHl2Z6K
|
https://papers.nips.cc/paper_files/paper/2021/file/50d2e70cdf7dd05be85e1b8df3f8ced4-Supplemental.pdf
|
Active learning sequentially selects the best instance for labeling by optimizing an acquisition function to enhance data/label efficiency. The selection can be either from a discrete instance set (pool-based scenario) or a continuous instance space (query synthesis scenario). In this work, we study both active learning scenarios for Gaussian Process Classification (GPC). The existing active learning strategies that maximize the Estimated Error Reduction (EER) aim at reducing the classification error after training with the new acquired instance in a one-step-look-ahead manner. The computation of EER-based acquisition functions is typically prohibitive as it requires retraining the GPC with every new query. Moreover, as the EER is not smooth, it can not be combined with gradient-based optimization techniques to efficiently explore the continuous instance space for query synthesis. To overcome these critical limitations, we develop computationally efficient algorithms for EER-based active learning with GPC. We derive the joint predictive distribution of label pairs as a one-dimensional integral, as a result of which the computation of the acquisition function avoids retraining the GPC for each query, remarkably reducing the computational overhead. We also derive the gradient chain rule to efficiently calculate the gradient of the acquisition function, which leads to the first query synthesis active learning algorithm implementing EER-based strategies. Our experiments clearly demonstrate the computational efficiency of the proposed algorithms. We also benchmark our algorithms on both synthetic and real-world datasets, which show superior performance in terms of sampling efficiency compared to the existing state-of-the-art algorithms.
| null |
Non-Asymptotic Analysis for Two Time-scale TDC with General Smooth Function Approximation
|
https://papers.nips.cc/paper_files/paper/2021/hash/50e207ab6946b5d78b377ae0144b9e07-Abstract.html
|
Yue Wang, Shaofeng Zou, Yi Zhou
|
https://papers.nips.cc/paper_files/paper/2021/hash/50e207ab6946b5d78b377ae0144b9e07-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12369-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/50e207ab6946b5d78b377ae0144b9e07-Paper.pdf
|
https://openreview.net/forum?id=SBNs7EULzqq
|
https://papers.nips.cc/paper_files/paper/2021/file/50e207ab6946b5d78b377ae0144b9e07-Supplemental.pdf
|
Temporal-difference learning with gradient correction (TDC) is a two time-scale algorithm for policy evaluation in reinforcement learning. This algorithm was initially proposed with linear function approximation, and was later extended to the one with general smooth function approximation. The asymptotic convergence for the on-policy setting with general smooth function approximation was established in [Bhatnagar et al., 2009]; however, the non-asymptotic convergence analysis remains unsolved due to challenges in the non-linear and two-time-scale update structure, non-convex objective function and the projection onto a time-varying tangent plane. In this paper, we develop novel techniques to address the above challenges and explicitly characterize the non-asymptotic error bound for the general off-policy setting with i.i.d. or Markovian samples, and show that it converges as fast as $\mathcal O(1/\sqrt T)$ (up to a factor of $\mathcal O(\log T)$). Our approach can be applied to a wide range of value-based reinforcement learning algorithms with general smooth function approximation.
| null |
A Little Robustness Goes a Long Way: Leveraging Robust Features for Targeted Transfer Attacks
|
https://papers.nips.cc/paper_files/paper/2021/hash/50f3f8c42b998a48057e9d33f4144b8b-Abstract.html
|
Jacob Springer, Melanie Mitchell, Garrett Kenyon
|
https://papers.nips.cc/paper_files/paper/2021/hash/50f3f8c42b998a48057e9d33f4144b8b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12370-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/50f3f8c42b998a48057e9d33f4144b8b-Paper.pdf
|
https://openreview.net/forum?id=XXxoCgHsiRv
|
https://papers.nips.cc/paper_files/paper/2021/file/50f3f8c42b998a48057e9d33f4144b8b-Supplemental.pdf
|
Adversarial examples for neural network image classifiers are known to be transferable: examples optimized to be misclassified by a source classifier are often misclassified as well by classifiers with different architectures. However, targeted adversarial examples—optimized to be classified as a chosen target class—tend to be less transferable between architectures. While prior research on constructing transferable targeted attacks has focused on improving the optimization procedure, in this work we examine the role of the source classifier. Here, we show that training the source classifier to be "slightly robust"—that is, robust to small-magnitude adversarial examples—substantially improves the transferability of class-targeted and representation-targeted adversarial attacks, even between architectures as different as convolutional neural networks and transformers. The results we present provide insight into the nature of adversarial examples as well as the mechanisms underlying so-called "robust" classifiers.
| null |
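To make "optimized to be classified as a chosen target class" concrete, the sketch below crafts a targeted perturbation with signed projected gradient steps against a hypothetical linear softmax classifier, where the input gradient has the closed form $W^\top(\mathrm{softmax}(Wx+b) - e_{\text{target}})$. It only illustrates the attack objective; the paper's experiments use deep source models, and the "slightly robust" training of the source classifier is not shown.

```python
# Targeted PGD against a hypothetical linear softmax classifier, where the input
# gradient of the targeted cross-entropy is analytic. Illustrates the attack
# objective only, not the paper's deep source models or robust training.
import numpy as np

rng = np.random.default_rng(0)
d, num_classes = 32, 5
W = rng.normal(size=(num_classes, d))
b = rng.normal(size=num_classes)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def targeted_pgd(x0, target, eps=0.5, alpha=0.05, steps=40):
    x = x0.copy()
    onehot = np.eye(num_classes)[target]
    for _ in range(steps):
        p = softmax(W @ x + b)
        grad = W.T @ (p - onehot)              # grad of CE(target) w.r.t. the input
        x = x - alpha * np.sign(grad)          # descend the targeted loss
        x = x0 + np.clip(x - x0, -eps, eps)    # project back into the eps-ball
    return x

x0 = rng.normal(size=d)
target = 3
x_adv = targeted_pgd(x0, target)
print("clean prediction   :", np.argmax(W @ x0 + b))
print("targeted prediction:", np.argmax(W @ x_adv + b))   # typically equals `target`
# for a sufficiently large eps; transferability asks whether this also holds for a
# *different* classifier than the one the attack was optimized against.
```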
TriBERT: Human-centric Audio-visual Representation Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/51200d29d1fc15f5a71c1dab4bb54f7c-Abstract.html
|
Tanzila Rahman, Mengyu Yang, Leonid Sigal
|
https://papers.nips.cc/paper_files/paper/2021/hash/51200d29d1fc15f5a71c1dab4bb54f7c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12371-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/51200d29d1fc15f5a71c1dab4bb54f7c-Paper.pdf
|
https://openreview.net/forum?id=YP1ham75vml
|
https://papers.nips.cc/paper_files/paper/2021/file/51200d29d1fc15f5a71c1dab4bb54f7c-Supplemental.pdf
|
The recent success of transformer models in language, such as BERT, has motivated the use of such architectures for multi-modal feature learning and tasks. However, most multi-modal variants (e.g., ViLBERT) have limited themselves to visual-linguistic data. Relatively few have explored its use in audio-visual modalities, and none, to our knowledge, illustrate them in the context of granular audio-visual detection or segmentation tasks such as sound source separation and localization. In this work, we introduce TriBERT -- a transformer-based architecture, inspired by ViLBERT, which enables contextual feature learning across three modalities: vision, pose, and audio, with the use of flexible co-attention. The use of pose keypoints is inspired by recent works that illustrate that such representations can significantly boost performance in many audio-visual scenarios where often one or more persons are responsible for the sound explicitly (e.g., talking) or implicitly (e.g., sound produced as a function of human manipulating an object). From a technical perspective, as part of the TriBERT architecture, we introduce a learned visual tokenization scheme based on spatial attention and leverage weak-supervision to allow granular cross-modal interactions for visual and pose modalities. Further, we supplement learning with sound-source separation loss formulated across all three streams. We pre-train our model on the large MUSIC21 dataset and demonstrate improved performance in audio-visual sound source separation on that dataset as well as other datasets through fine-tuning. In addition, we show that the learned TriBERT representations are generic and significantly improve performance on other audio-visual tasks such as cross-modal audio-visual-pose retrieval by as much as 66.7% in top-1 accuracy.
| null |
How does a Neural Network's Architecture Impact its Robustness to Noisy Labels?
|
https://papers.nips.cc/paper_files/paper/2021/hash/51311013e51adebc3c34d2cc591fefee-Abstract.html
|
Jingling Li, Mozhi Zhang, Keyulu Xu, John Dickerson, Jimmy Ba
|
https://papers.nips.cc/paper_files/paper/2021/hash/51311013e51adebc3c34d2cc591fefee-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12372-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/51311013e51adebc3c34d2cc591fefee-Paper.pdf
|
https://openreview.net/forum?id=Ir-WwGboFN-
|
https://papers.nips.cc/paper_files/paper/2021/file/51311013e51adebc3c34d2cc591fefee-Supplemental.pdf
|
Noisy labels are inevitable in large real-world datasets. In this work, we explore an area understudied by previous works --- how the network's architecture impacts its robustness to noisy labels. We provide a formal framework connecting the robustness of a network to the alignments between its architecture and target/noise functions. Our framework measures a network's robustness via the predictive power in its representations --- the test performance of a linear model trained on the learned representations using a small set of clean labels. We hypothesize that a network is more robust to noisy labels if its architecture is more aligned with the target function than the noise. To support our hypothesis, we provide both theoretical and empirical evidence across various neural network architectures and different domains. We also find that when the network is well-aligned with the target function, its predictive power in representations could improve upon state-of-the-art (SOTA) noisy-label-training methods in terms of test accuracy and even outperform sophisticated methods that use clean labels.
| null |
Calibration and Consistency of Adversarial Surrogate Losses
|
https://papers.nips.cc/paper_files/paper/2021/hash/514a70448c235ccb8b6842ef5e02ad3b-Abstract.html
|
Pranjal Awasthi, Natalie Frank, Anqi Mao, Mehryar Mohri, Yutao Zhong
|
https://papers.nips.cc/paper_files/paper/2021/hash/514a70448c235ccb8b6842ef5e02ad3b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12373-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/514a70448c235ccb8b6842ef5e02ad3b-Paper.pdf
|
https://openreview.net/forum?id=sNw3VBPL7rg
|
https://papers.nips.cc/paper_files/paper/2021/file/514a70448c235ccb8b6842ef5e02ad3b-Supplemental.pdf
|
Adversarial robustness is an increasingly critical property of classifiers in applications. The design of robust algorithms relies on surrogate losses since the optimization of the adversarial loss with most hypothesis sets is NP-hard. But, which surrogate losses should be used and when do they benefit from theoretical guarantees? We present an extensive study of this question, including a detailed analysis of the $\mathcal{H}$-calibration and $\mathcal{H}$-consistency of adversarial surrogate losses. We show that convex loss functions, or the supremum-based convex losses often used in applications, are not $\mathcal{H}$-calibrated for common hypothesis sets used in machine learning. We then give a characterization of $\mathcal{H}$-calibration and prove that some surrogate losses are indeed $\mathcal{H}$-calibrated for the adversarial zero-one loss, with common hypothesis sets. In particular, we fix some calibration results presented in prior work for a family of linear models and significantly generalize the results to the nonlinear hypothesis sets. Next, we show that $\mathcal{H}$-calibration is not sufficient to guarantee consistency and prove that, in the absence of any distributional assumption, no continuous surrogate loss is consistent in the adversarial setting. This, in particular, proves that a claim made in prior work is inaccurate. Next, we identify natural conditions under which some surrogate losses that we describe in detail are $\mathcal{H}$-consistent. We also report a series of empirical results which show that many $\mathcal{H}$-calibrated surrogate losses are indeed not $\mathcal{H}$-consistent, and validate our theoretical assumptions. Our adversarial $\mathcal{H}$-consistency results are novel, even for the case where $\mathcal{H}$ is the family of all measurable functions.
| null |
The Value of Information When Deciding What to Learn
|
https://papers.nips.cc/paper_files/paper/2021/hash/517da335fd0ec2f4a25ea139d5494163-Abstract.html
|
Dilip Arumugam, Benjamin Van Roy
|
https://papers.nips.cc/paper_files/paper/2021/hash/517da335fd0ec2f4a25ea139d5494163-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12374-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/517da335fd0ec2f4a25ea139d5494163-Paper.pdf
|
https://openreview.net/forum?id=BR5bZGhzrel
|
https://papers.nips.cc/paper_files/paper/2021/file/517da335fd0ec2f4a25ea139d5494163-Supplemental.pdf
|
All sequential decision-making agents explore so as to acquire knowledge about a particular target. It is often the responsibility of the agent designer to construct this target which, in rich and complex environments, constitutes an onerous burden; without full knowledge of the environment itself, a designer may forge a sub-optimal learning target that poorly balances the amount of information an agent must acquire to identify the target against the target's associated performance shortfall. While recent work has developed a connection between learning targets and rate-distortion theory to address this challenge and empower agents that decide what to learn in an automated fashion, the proposed algorithm does not optimally tackle the equally important challenge of efficient information acquisition. In this work, building upon the seminal design principle of information-directed sampling (Russo & Van Roy, 2014), we address this shortcoming directly to couple optimal information acquisition with the optimal design of learning targets. Along the way, we offer new insights into learning targets from the literature on rate-distortion theory before turning to empirical results that confirm the value of information when deciding what to learn.
| null |
Co-Adaptation of Algorithmic and Implementational Innovations in Inference-based Deep Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/517f24c02e620d5a4dac1db388664a63-Abstract.html
|
Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, Yutaka Matsuo, Shixiang (Shane) Gu
|
https://papers.nips.cc/paper_files/paper/2021/hash/517f24c02e620d5a4dac1db388664a63-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12375-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/517f24c02e620d5a4dac1db388664a63-Paper.pdf
|
https://openreview.net/forum?id=vLyI__SoeAe
|
https://papers.nips.cc/paper_files/paper/2021/file/517f24c02e620d5a4dac1db388664a63-Supplemental.pdf
|
Recently many algorithms were devised for reinforcement learning (RL) with function approximation. While they have clear algorithmic distinctions, they also have many implementation differences that are algorithm-independent and sometimes under-emphasized. Such mixing of algorithmic novelty and implementation craftsmanship makes rigorous analyses of the sources of performance improvements across algorithms difficult. In this work, we focus on a series of off-policy inference-based actor-critic algorithms -- MPO, AWR, and SAC -- to decouple their algorithmic innovations and implementation decisions. We present unified derivations through a single control-as-inference objective, where we can categorize each algorithm as based on either Expectation-Maximization (EM) or direct Kullback-Leibler (KL) divergence minimization and treat the rest of specifications as implementation details. We performed extensive ablation studies, and identified substantial performance drops whenever implementation details are mismatched for algorithmic choices. These results show which implementation or code details are co-adapted and co-evolved with algorithms, and which are transferable across algorithms: as examples, we identified that tanh Gaussian policy and network sizes are highly adapted to algorithmic types, while layer normalization and ELU are critical for MPO's performances but also transfer to noticeable gains in SAC. We hope our work can inspire future work to further demystify sources of performance improvements across multiple algorithms and allow researchers to build on one another's both algorithmic and implementational innovations.
| null |
Can fMRI reveal the representation of syntactic structure in the brain?
|
https://papers.nips.cc/paper_files/paper/2021/hash/51a472c08e21aef54ed749806e3e6490-Abstract.html
|
Aniketh Janardhan Reddy, Leila Wehbe
|
https://papers.nips.cc/paper_files/paper/2021/hash/51a472c08e21aef54ed749806e3e6490-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12376-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/51a472c08e21aef54ed749806e3e6490-Paper.pdf
|
https://openreview.net/forum?id=fCjd2bXG5iI
|
https://papers.nips.cc/paper_files/paper/2021/file/51a472c08e21aef54ed749806e3e6490-Supplemental.zip
|
While studying semantics in the brain, neuroscientists use two approaches. One is to identify areas that are correlated with semantic processing load. Another is to find areas that are predicted by the semantic representation of the stimulus words. However, most studies of syntax have focused only on identifying areas correlated with syntactic processing load. One possible reason for this discrepancy is that representing syntactic structure in an embedding space such that it can be used to model brain activity is a non-trivial computational problem. Another possible reason is that it is unclear if the low signal-to-noise ratio of neuroimaging tools such as functional Magnetic Resonance Imaging (fMRI) can allow us to reveal the correlates of complex (and perhaps subtle) syntactic representations. In this study, we propose novel multi-dimensional features that encode information about the syntactic structure of sentences. Using these features and fMRI recordings of participants reading a natural text, we model the brain representation of syntax. First, we find that our syntactic structure-based features explain additional variance in the brain activity of various parts of the language system, even after controlling for complexity metrics that capture processing load. At the same time, we see that regions well-predicted by syntactic features are distributed in the language system and are not distinguishable from those processing semantics. Our code and data will be available at https://github.com/anikethjr/brainsyntacticrepresentations.
| null |
Robust Implicit Networks via Non-Euclidean Contractions
|
https://papers.nips.cc/paper_files/paper/2021/hash/51a6ce0252d8fa6e913524bdce8db490-Abstract.html
|
Saber Jafarpour, Alexander Davydov, Anton Proskurnikov, Francesco Bullo
|
https://papers.nips.cc/paper_files/paper/2021/hash/51a6ce0252d8fa6e913524bdce8db490-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12377-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/51a6ce0252d8fa6e913524bdce8db490-Paper.pdf
|
https://openreview.net/forum?id=SwfsoPuGYku
|
https://papers.nips.cc/paper_files/paper/2021/file/51a6ce0252d8fa6e913524bdce8db490-Supplemental.zip
|
Implicit neural networks, a.k.a., deep equilibrium networks, are a class of implicit-depth learning models where function evaluation is performed by solving a fixed point equation. They generalize classic feedforward models and are equivalent to infinite-depth weight-tied feedforward networks. While implicit models show improved accuracy and significant reduction in memory consumption, they can suffer from ill-posedness and convergence instability. This paper provides a new framework, which we call Non-Euclidean Monotone Operator Network (NEMON), to design well-posed and robust implicit neural networks based upon contraction theory for the non-Euclidean norm $\ell_\infty$. Our framework includes (i) a novel condition for well-posedness based on one-sided Lipschitz constants, (ii) an average iteration for computing fixed-points, and (iii) explicit estimates on input-output Lipschitz constants. Additionally, we design a training problem with the well-posedness condition and the average iteration as constraints and, to achieve robust models, with the input-output Lipschitz constant as a regularizer. Our $\ell_\infty$ well-posedness condition leads to a larger polytopic training search space than existing conditions and our average iteration enjoys accelerated convergence. Finally, we evaluate our framework in image classification through the MNIST and the CIFAR-10 datasets. Our numerical results demonstrate improved accuracy and robustness of the implicit models with smaller input-output Lipschitz bounds. Code is available at https://github.com/davydovalexander/Non-Euclidean_Mon_Op_Net.
| null |
A Kernel-based Test of Independence for Cluster-correlated Data
|
https://papers.nips.cc/paper_files/paper/2021/hash/51be2fed6c55f5aa0c16ff14c140b187-Abstract.html
|
Hongjiao Liu, Anna Plantinga, Yunhua Xiang, Michael Wu
|
https://papers.nips.cc/paper_files/paper/2021/hash/51be2fed6c55f5aa0c16ff14c140b187-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12378-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/51be2fed6c55f5aa0c16ff14c140b187-Paper.pdf
|
https://openreview.net/forum?id=MOkjFCuLxlc
|
https://papers.nips.cc/paper_files/paper/2021/file/51be2fed6c55f5aa0c16ff14c140b187-Supplemental.pdf
|
The Hilbert-Schmidt Independence Criterion (HSIC) is a powerful kernel-based statistic for assessing the generalized dependence between two multivariate variables. However, independence testing based on the HSIC is not directly possible for cluster-correlated data. Such a correlation pattern among the observations arises in many practical situations, e.g., family-based and longitudinal data, and requires proper accommodation. Therefore, we propose a novel HSIC-based independence test to evaluate the dependence between two multivariate variables based on cluster-correlated data. Using the previously proposed empirical HSIC as our test statistic, we derive its asymptotic distribution under the null hypothesis of independence between the two variables but in the presence of sample correlation. Based on both simulation studies and real data analysis, we show that, with clustered data, our approach effectively controls type I error and has a higher statistical power than competing methods.
| null |
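For reference alongside the abstract above, the sketch below computes the standard biased empirical HSIC, $\widehat{\mathrm{HSIC}} = \tfrac{1}{n^2}\operatorname{tr}(KHLH)$ with Gaussian kernels and centering matrix $H = I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^\top$. This is the i.i.d. statistic the paper builds on; its null distribution under cluster-correlated observations, the paper's contribution, is not implemented here, and the bandwidth and data are placeholders.

```python
# Standard biased empirical HSIC with Gaussian kernels: HSIC = (1/n^2) tr(K H L H).
# This is the usual i.i.d. statistic; the cluster-correlated null calibration from
# the paper is not shown.
import numpy as np

def gaussian_gram(X, bandwidth=1.0):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * bandwidth**2))

def hsic(X, Y, bandwidth=1.0):
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K, L = gaussian_gram(X, bandwidth), gaussian_gram(Y, bandwidth)
    return np.trace(K @ H @ L @ H) / n**2

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 2))
Y_indep = rng.normal(size=(n, 2))
Y_dep = X + 0.3 * rng.normal(size=(n, 2))
print(hsic(X, Y_indep), hsic(X, Y_dep))          # dependent pair gives a larger value
```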
Efficient methods for Gaussian Markov random fields under sparse linear constraints
|
https://papers.nips.cc/paper_files/paper/2021/hash/51e6d6e679953c6311757004d8cbbba9-Abstract.html
|
David Bolin, Jonas Wallin
|
https://papers.nips.cc/paper_files/paper/2021/hash/51e6d6e679953c6311757004d8cbbba9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12379-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/51e6d6e679953c6311757004d8cbbba9-Paper.pdf
|
https://openreview.net/forum?id=d2CejHDZJh
|
https://papers.nips.cc/paper_files/paper/2021/file/51e6d6e679953c6311757004d8cbbba9-Supplemental.pdf
|
Methods for inference and simulation of linearly constrained Gaussian Markov Random Fields (GMRF) are computationally prohibitive when the number of constraints is large. In some cases, such as for intrinsic GMRFs, they may even be unfeasible. We propose a new class of methods to overcome these challenges in the common case of sparse constraints, where one has a large number of constraints and each only involves a few elements. Our methods rely on a basis transformation into blocks of constrained versus non-constrained subspaces, and we show that the methods greatly outperform existing alternatives in terms of computational cost. By combining the proposed methods with the stochastic partial differential equation approach for Gaussian random fields, we also show how to formulate Gaussian process regression with linear constraints in a GMRF setting to reduce computational cost. This is illustrated in two applications with simulated data.
| null |
Sparse is Enough in Scaling Transformers
|
https://papers.nips.cc/paper_files/paper/2021/hash/51f15efdd170e6043fa02a74882f0470-Abstract.html
|
Sebastian Jaszczur, Aakanksha Chowdhery, Afroz Mohiuddin, Lukasz Kaiser, Wojciech Gajewski, Henryk Michalewski, Jonni Kanerva
|
https://papers.nips.cc/paper_files/paper/2021/hash/51f15efdd170e6043fa02a74882f0470-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12380-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/51f15efdd170e6043fa02a74882f0470-Paper.pdf
|
https://openreview.net/forum?id=-b5OSCydOMe
|
https://papers.nips.cc/paper_files/paper/2021/file/51f15efdd170e6043fa02a74882f0470-Supplemental.pdf
|
Large Transformer models yield impressive results on many tasks, but are expensive to train, or even fine-tune, and so slow at decoding that their use and study becomes out of reach. We address this problem by leveraging sparsity. We study sparse variants for all layers in the Transformer and propose Scaling Transformers, a family of next generation Transformer models that use sparse layers to scale efficiently and perform unbatched decoding much faster than the standard Transformer as we scale up the model size. Surprisingly, the sparse layers are enough to obtain the same perplexity as the standard Transformer with the same number of parameters. We also integrate with prior sparsity approaches to attention and enable fast inference on long sequences even with limited memory. This results in performance competitive to the state-of-the-art on long text summarization.
| null |
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
|
https://papers.nips.cc/paper_files/paper/2021/hash/5227b6aaf294f5f027273aebf16015f2-Abstract.html
|
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
|
https://papers.nips.cc/paper_files/paper/2021/hash/5227b6aaf294f5f027273aebf16015f2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12381-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5227b6aaf294f5f027273aebf16015f2-Paper.pdf
|
https://openreview.net/forum?id=MNVjrDpu6Yo
|
https://papers.nips.cc/paper_files/paper/2021/file/5227b6aaf294f5f027273aebf16015f2-Supplemental.pdf
|
Work on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) has recently drawn considerable attention to post-training pruning (iterative magnitude pruning) and before-training pruning (pruning at initialization). The former method suffers from an extremely large computation cost and the latter usually struggles with insufficient performance. In comparison, during-training pruning, a class of pruning methods that simultaneously enjoys the training/inference efficiency and the comparable performance, has so far been less explored. To better understand during-training pruning, we quantitatively study the effect of pruning throughout training from the perspective of pruning plasticity (the ability of the pruned networks to recover the original performance). Pruning plasticity can help explain several other empirical observations about neural network pruning in literature. We further find that pruning plasticity can be substantially improved by injecting a brain-inspired mechanism called neuroregeneration, i.e., to regenerate the same number of connections as pruned. We design a novel gradual magnitude pruning (GMP) method, named gradual pruning with zero-cost neuroregeneration (GraNet), that advances the state of the art. Perhaps most impressively, its sparse-to-sparse version for the first time boosts the sparse-to-sparse training performance over various dense-to-sparse methods with ResNet-50 on ImageNet without extending the training time. We release all code at https://github.com/Shiweiliuiiiiiii/GraNet.
| null |
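The core move in the abstract above is to pair magnitude pruning with "neuroregeneration": reactivate as many connections as were just pruned. The sketch below performs one such prune-and-regenerate step on a toy weight matrix at fixed sparsity; the random regeneration criterion and the 50% starting density are placeholders, and GraNet's actual regeneration rule and gradual (cubic) sparsity schedule are not reproduced.

```python
# One prune-and-regenerate step on a toy weight matrix: drop the smallest-magnitude
# active weights, then reactivate the same number of inactive connections. The
# random regeneration criterion is a placeholder, not GraNet's rule.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
mask = rng.random(W.shape) < 0.5            # start from a ~50%-dense sparse mask
W = W * mask

def prune_and_regenerate(W, mask, n_swap):
    flat_mask = mask.ravel()
    # Prune: deactivate the n_swap smallest-magnitude weights that are still active.
    active = np.flatnonzero(flat_mask)
    smallest = active[np.argsort(np.abs(W.ravel()[active]))[:n_swap]]
    flat_mask[smallest] = False
    # Regenerate: reactivate the same number of inactive connections (random here;
    # in this toy version the pool may include just-pruned entries).
    inactive = np.flatnonzero(~flat_mask)
    revived = rng.choice(inactive, size=n_swap, replace=False)
    flat_mask[revived] = True
    W.ravel()[revived] = 0.0                # revived connections restart from zero
    return W * mask, mask

W, mask = prune_and_regenerate(W, mask, n_swap=5)
print("active connections:", int(mask.sum()), "of", mask.size)   # sparsity unchanged
```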
Low-Fidelity Video Encoder Optimization for Temporal Action Localization
|
https://papers.nips.cc/paper_files/paper/2021/hash/522a9ae9a99880d39e5daec35375e999-Abstract.html
|
Mengmeng Xu, Juan Manuel Perez Rua, Xiatian Zhu, Bernard Ghanem, Brais Martinez
|
https://papers.nips.cc/paper_files/paper/2021/hash/522a9ae9a99880d39e5daec35375e999-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12382-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/522a9ae9a99880d39e5daec35375e999-Paper.pdf
|
https://openreview.net/forum?id=oqKC5A7iq_k
|
https://papers.nips.cc/paper_files/paper/2021/file/522a9ae9a99880d39e5daec35375e999-Supplemental.pdf
|
Most existing temporal action localization (TAL) methods rely on a transfer learning pipeline: by first optimizing a video encoder on a large action classification dataset (i.e., source domain), followed by freezing the encoder and training a TAL head on the action localization dataset (i.e., target domain). This results in a task discrepancy problem for the video encoder – trained for action classification, but used for TAL. Intuitively, joint optimization with both the video encoder and TAL head is a strong baseline solution to this discrepancy. However, this is not operable for TAL subject to the GPU memory constraints, due to the prohibitive computational cost in processing long untrimmed videos. In this paper, we resolve this challenge by introducing a novel low-fidelity (LoFi) video encoder optimization method. Instead of always using the full training configurations in TAL learning, we propose to reduce the mini-batch composition in terms of temporal, spatial, or spatio-temporal resolution so that jointly optimizing the video encoder and TAL head becomes operable under the same memory conditions of a mid-range hardware budget. Crucially, this enables the gradients to flow backwards through the video encoder conditioned on a TAL supervision loss, favourably solving the task discrepancy problem and providing more effective feature representations. Extensive experiments show that the proposed LoFi optimization approach can significantly enhance the performance of existing TAL methods. Encouragingly, even with a lightweight ResNet18 based video encoder in a single RGB stream, our method surpasses two-stream (RGB + optical-flow) ResNet50 based alternatives, often by a good margin. Our code is publicly available at https://github.com/saic-fi/lofiactionlocalization.
| null |
On Provable Benefits of Depth in Training Graph Convolutional Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/524265e8b942930fbbe8a5d979d29205-Abstract.html
|
Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi
|
https://papers.nips.cc/paper_files/paper/2021/hash/524265e8b942930fbbe8a5d979d29205-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12383-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/524265e8b942930fbbe8a5d979d29205-Paper.pdf
|
https://openreview.net/forum?id=r-oRRT-ElX
|
https://papers.nips.cc/paper_files/paper/2021/file/524265e8b942930fbbe8a5d979d29205-Supplemental.pdf
|
Graph Convolutional Networks (GCNs) are known to suffer from performance degradation as the number of layers increases, which is usually attributed to over-smoothing. Despite the apparent consensus, we observe that there exists a discrepancy between the theoretical understanding of over-smoothing and the practical capabilities of GCNs. Specifically, we argue that over-smoothing does not necessarily happen in practice: a deeper model is provably expressive, can converge to the global optimum at a linear convergence rate, and achieves very high training accuracy as long as it is properly trained. Despite being capable of achieving high training accuracy, deeper models are observed empirically to generalize poorly at test time, and the existing theoretical understanding of such behavior remains elusive. To achieve a better understanding, we carefully analyze the generalization capability of GCNs, and show that the training strategies used to achieve high training accuracy significantly deteriorate the generalization capability of GCNs. Motivated by these findings, we propose a decoupled structure for GCNs that detaches weight matrices from feature propagation to preserve the expressive power and ensure good generalization performance. We conduct empirical evaluations on various synthetic and real-world datasets to validate the correctness of our theory.
| null |
Practical Near Neighbor Search via Group Testing
|
https://papers.nips.cc/paper_files/paper/2021/hash/5248e5118c84beea359b6ea385393661-Abstract.html
|
Joshua Engels, Benjamin Coleman, Anshumali Shrivastava
|
https://papers.nips.cc/paper_files/paper/2021/hash/5248e5118c84beea359b6ea385393661-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12384-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5248e5118c84beea359b6ea385393661-Paper.pdf
|
https://openreview.net/forum?id=ebIORrYImx
|
https://papers.nips.cc/paper_files/paper/2021/file/5248e5118c84beea359b6ea385393661-Supplemental.pdf
|
We present a new algorithm for the approximate near neighbor problem that combines classical ideas from group testing with locality-sensitive hashing (LSH). We reduce the near neighbor search problem to a group testing problem by designating neighbors as "positives," non-neighbors as "negatives," and approximate membership queries as group tests. We instantiate this framework using distance-sensitive Bloom Filters to Identify Near-Neighbor Groups (FLINNG). We prove that FLINNG has sub-linear query time and show that our algorithm comes with a variety of practical advantages. For example, FLINNG can be constructed in a single pass through the data, consists entirely of efficient integer operations, and does not require any distance computations. We conduct large-scale experiments on high-dimensional search tasks such as genome search, URL similarity search, and embedding search over the massive YFCC100M dataset. In our comparison with leading algorithms such as HNSW and FAISS, we find that FLINNG can provide up to a 10x query speedup with substantially smaller indexing time and memory.
| null |
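To make the group-testing reduction in the FLINNG entry above concrete, here is a toy illustration (not the FLINNG library): points are hashed into groups over several independent repetitions, a query "tests" each group, and candidates are the points whose group tested positive in every repetition. The positive test below uses an exact distance check purely as a stand-in for the paper's distance-sensitive Bloom filters; the group counts, radius, and function names are arbitrary choices for this sketch.

```python
import numpy as np

def build_index(n_points, reps=4, groups_per_rep=16, seed=0):
    rng = np.random.default_rng(seed)
    # assignment[r][g] = indices of the points placed in group g of repetition r
    assignment = [[[] for _ in range(groups_per_rep)] for _ in range(reps)]
    for idx in range(n_points):
        for r in range(reps):
            assignment[r][rng.integers(groups_per_rep)].append(idx)
    return assignment

def query(points, assignment, q, radius):
    candidates = None
    for rep in assignment:
        positives = set()
        for group in rep:
            members = np.asarray(group, dtype=int)
            # The group "tests positive" if it contains at least one near neighbor
            # (exact check here, standing in for a distance-sensitive filter).
            if members.size and np.any(
                np.linalg.norm(points[members] - q, axis=1) <= radius
            ):
                positives.update(group)
        candidates = positives if candidates is None else candidates & positives
    return sorted(candidates or [])
```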
Baby Intuitions Benchmark (BIB): Discerning the goals, preferences, and actions of others
|
https://papers.nips.cc/paper_files/paper/2021/hash/525b8410cc8612283c9ecaf9a319f8ed-Abstract.html
|
Kanishk Gandhi, Gala Stojnic, Brenden M. Lake, Moira R Dillon
|
https://papers.nips.cc/paper_files/paper/2021/hash/525b8410cc8612283c9ecaf9a319f8ed-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12385-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/525b8410cc8612283c9ecaf9a319f8ed-Paper.pdf
|
https://openreview.net/forum?id=TFEFvU0ZV6Q
|
https://papers.nips.cc/paper_files/paper/2021/file/525b8410cc8612283c9ecaf9a319f8ed-Supplemental.pdf
|
To achieve human-like common sense about everyday life, machine learning systems must understand and reason about the goals, preferences, and actions of other agents in the environment. By the end of their first year of life, human infants intuitively achieve such common sense, and these cognitive achievements lay the foundation for humans' rich and complex understanding of the mental states of others. Can machines achieve generalizable, commonsense reasoning about other agents like human infants? The Baby Intuitions Benchmark (BIB) challenges machines to predict the plausibility of an agent's behavior based on the underlying causes of its actions. Because BIB's content and paradigm are adopted from developmental cognitive science, BIB allows for direct comparison between human and machine performance. Nevertheless, recently proposed, deep-learning-based agency reasoning models fail to show infant-like reasoning, leaving BIB an open challenge.
| null |
Neural Hybrid Automata: Learning Dynamics With Multiple Modes and Stochastic Transitions
|
https://papers.nips.cc/paper_files/paper/2021/hash/5291822d0636dc429e80e953c58b6a76-Abstract.html
|
Michael Poli, Stefano Massaroli, Luca Scimeca, Sanghyuk Chun, Seong Joon Oh, Atsushi Yamashita, Hajime Asama, Jinkyoo Park, Animesh Garg
|
https://papers.nips.cc/paper_files/paper/2021/hash/5291822d0636dc429e80e953c58b6a76-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12386-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5291822d0636dc429e80e953c58b6a76-Paper.pdf
|
https://openreview.net/forum?id=PZ3TnxaC-PT
|
https://papers.nips.cc/paper_files/paper/2021/file/5291822d0636dc429e80e953c58b6a76-Supplemental.pdf
|
Effective control and prediction of dynamical systems require appropriate handling of continuous-time and discrete, event-triggered processes. Stochastic hybrid systems (SHSs), common across engineering domains, provide a formalism for dynamical systems subject to discrete, possibly stochastic, state jumps and multi-modal continuous-time flows. Despite the versatility and importance of SHSs across applications, a general procedure for the explicit learning of both discrete events and multi-mode continuous dynamics remains an open problem. This work introduces Neural Hybrid Automata (NHAs), a recipe for learning SHS dynamics without a priori knowledge of the number of modes, the mode parameters, or the inter-modal transition dynamics. NHAs provide a systematic inference method based on normalizing flows, neural differential equations, and self-supervision. We showcase NHAs on several tasks, including mode recovery and flow learning in systems with stochastic transitions, and end-to-end learning of hierarchical robot controllers.
| null |
Fast Projection onto the Capped Simplex with Applications to Sparse Regression in Bioinformatics
|
https://papers.nips.cc/paper_files/paper/2021/hash/52aaa62e71f829d41d74892a18a11d59-Abstract.html
|
Man Shun Ang, Jianzhu Ma, Nianjun Liu, Kun Huang, Yijie Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/52aaa62e71f829d41d74892a18a11d59-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12387-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/52aaa62e71f829d41d74892a18a11d59-Paper.pdf
|
https://openreview.net/forum?id=5af9FHClUZu
|
https://papers.nips.cc/paper_files/paper/2021/file/52aaa62e71f829d41d74892a18a11d59-Supplemental.pdf
|
We consider the problem of projecting a vector onto the so-called k-capped simplex, which is a hyper-cube cut by a hyperplane. For an n-dimensional input vector with bounded elements, we found that a simple algorithm based on Newton's method is able to solve the projection problem to high precision with a complexity of roughly O(n), which has a much lower computational cost compared with the existing sorting-based methods proposed in the literature. We provide a theory for partial explanation and justification of the method. We demonstrate that the proposed algorithm can produce a solution of the projection problem with high precision on large scale datasets, and that it significantly outperforms the state-of-the-art methods in terms of runtime (about 6-8 times faster than a commercial software with respect to CPU time for input vectors with 1 million variables or more). We further illustrate the effectiveness of the proposed algorithm on solving sparse regression in a bioinformatics problem. Empirical results on the GWAS dataset (with 1,500,000 single-nucleotide polymorphisms) show that, when using the proposed method to accelerate the Projected Quasi-Newton (PQN) method, the accelerated PQN algorithm is able to handle huge-scale regression problems and is more efficient (about 3-6 times faster) than the current state-of-the-art methods.
| null |
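For the projection problem in the entry above, the KKT conditions give a solution of the form x_i = clip(y_i - tau, 0, 1) for a scalar threshold tau chosen so that the entries sum to k. The paper solves for tau with a Newton-type method; the minimal sketch below finds the same threshold by simple bisection instead, purely to illustrate the structure of the problem.

```python
import numpy as np

def project_capped_simplex(y, k, tol=1e-10, max_iter=200):
    """Project y onto {x : 0 <= x <= 1, sum(x) = k} via a 1-D threshold search."""
    y = np.asarray(y, dtype=float)
    assert 0 <= k <= y.size

    def mass(tau):
        # Total mass of the clipped vector minus the target k; decreasing in tau.
        return np.clip(y - tau, 0.0, 1.0).sum() - k

    lo, hi = y.min() - 1.0, y.max()   # mass(lo) >= 0 >= mass(hi)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if mass(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)
```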
The Many Faces of Adversarial Risk
|
https://papers.nips.cc/paper_files/paper/2021/hash/52c4608c2f126708211b9e0a60eaf050-Abstract.html
|
Muni Sreenivas Pydi, Varun Jog
|
https://papers.nips.cc/paper_files/paper/2021/hash/52c4608c2f126708211b9e0a60eaf050-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12388-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/52c4608c2f126708211b9e0a60eaf050-Paper.pdf
|
https://openreview.net/forum?id=-8QSntMuqBV
|
https://papers.nips.cc/paper_files/paper/2021/file/52c4608c2f126708211b9e0a60eaf050-Supplemental.pdf
|
Adversarial risk quantifies the performance of classifiers on adversarially perturbed data. Numerous definitions of adversarial risk---not all mathematically rigorous and differing subtly in the details---have appeared in the literature. In this paper, we revisit these definitions, make them rigorous, and critically examine their similarities and differences. Our technical tools derive from optimal transport, robust statistics, functional analysis, and game theory. Our contributions include the following: generalizing Strassen’s theorem to the unbalanced optimal transport setting with applications to adversarial classification with unequal priors; showing an equivalence between adversarial robustness and robust hypothesis testing with $\infty$-Wasserstein uncertainty sets; proving the existence of a pure Nash equilibrium in the two-player game between the adversary and the algorithm; and characterizing adversarial risk by the minimum Bayes error between distributions belonging to the $\infty$-Wasserstein uncertainty sets. Our results generalize and deepen recently discovered connections between optimal transport and adversarial robustness and reveal new connections to Choquet capacities and game theory.
| null |
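For context on the entry above: one commonly used notion of adversarial risk that such papers revisit, stated here informally and not necessarily in the paper's exact notation, is the expected worst-case loss over norm-bounded perturbations:

$$ R_\varepsilon(f) \;=\; \mathbb{E}_{(x,y)\sim P}\Big[\, \sup_{\|x'-x\|\le \varepsilon} \ell\big(f(x'),\, y\big) \Big]. $$

The paper's point is that several subtly different formalizations of this idea appear in the literature, and that their relationships can be made precise with tools from optimal transport and game theory.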
Meta-Adaptive Nonlinear Control: Theory and Algorithms
|
https://papers.nips.cc/paper_files/paper/2021/hash/52fc2aee802efbad698503d28ebd3a1f-Abstract.html
|
Guanya Shi, Kamyar Azizzadenesheli, Michael O'Connell, Soon-Jo Chung, Yisong Yue
|
https://papers.nips.cc/paper_files/paper/2021/hash/52fc2aee802efbad698503d28ebd3a1f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12389-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/52fc2aee802efbad698503d28ebd3a1f-Paper.pdf
|
https://openreview.net/forum?id=sGUet7yVVgn
| null |
We present an online multi-task learning approach for adaptive nonlinear control, which we call Online Meta-Adaptive Control (OMAC). The goal is to control a nonlinear system subject to adversarial disturbance and unknown \emph{environment-dependent} nonlinear dynamics, under the assumption that the environment-dependent dynamics can be well captured with some shared representation. Our approach is motivated by robot control, where a robotic system encounters a sequence of new environmental conditions that it must quickly adapt to. A key emphasis is to integrate online representation learning with established methods from control theory, in order to arrive at a unified framework that yields both control-theoretic and learning-theoretic guarantees. We provide instantiations of our approach under varying conditions, leading to the first non-asymptotic end-to-end convergence guarantee for multi-task nonlinear control. OMAC can also be integrated with deep representation learning. Experiments show that OMAC significantly outperforms conventional adaptive control approaches which do not learn the shared representation, in inverted pendulum and 6-DoF drone control tasks under varying wind conditions.
| null |
Compositional Reinforcement Learning from Logical Specifications
|
https://papers.nips.cc/paper_files/paper/2021/hash/531db99cb00833bcd414459069dc7387-Abstract.html
|
Kishor Jothimurugan, Suguman Bansal, Osbert Bastani, Rajeev Alur
|
https://papers.nips.cc/paper_files/paper/2021/hash/531db99cb00833bcd414459069dc7387-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12390-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/531db99cb00833bcd414459069dc7387-Paper.pdf
|
https://openreview.net/forum?id=ion6Lo5tKtJ
|
https://papers.nips.cc/paper_files/paper/2021/file/531db99cb00833bcd414459069dc7387-Supplemental.pdf
|
We study the problem of learning control policies for complex tasks given by logical specifications. Recent approaches automatically generate a reward function from a given specification and use a suitable reinforcement learning algorithm to learn a policy that maximizes the expected reward. These approaches, however, scale poorly to complex tasks that require high-level planning. In this work, we develop a compositional learning approach, called DIRL, that interleaves high-level planning and reinforcement learning. First, DIRL encodes the specification as an abstract graph; intuitively, vertices and edges of the graph correspond to regions of the state space and simpler sub-tasks, respectively. Our approach then incorporates reinforcement learning to learn neural network policies for each edge (sub-task) within a Dijkstra-style planning algorithm to compute a high-level plan in the graph. An evaluation of the proposed approach on a set of challenging control benchmarks with continuous state and action spaces demonstrates that it outperforms state-of-the-art baselines.
| null |
Differentiable Quality Diversity
|
https://papers.nips.cc/paper_files/paper/2021/hash/532923f11ac97d3e7cb0130315b067dc-Abstract.html
|
Matthew Fontaine, Stefanos Nikolaidis
|
https://papers.nips.cc/paper_files/paper/2021/hash/532923f11ac97d3e7cb0130315b067dc-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12391-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/532923f11ac97d3e7cb0130315b067dc-Paper.pdf
|
https://openreview.net/forum?id=uJGObgFU0lU
|
https://papers.nips.cc/paper_files/paper/2021/file/532923f11ac97d3e7cb0130315b067dc-Supplemental.zip
|
Quality diversity (QD) is a growing branch of stochastic optimization research that studies the problem of generating an archive of solutions that maximize a given objective function but are also diverse with respect to a set of specified measure functions. However, even when these functions are differentiable, QD algorithms treat them as "black boxes", ignoring gradient information. We present the differentiable quality diversity (DQD) problem, a special case of QD, where both the objective and measure functions are first order differentiable. We then present MAP-Elites via a Gradient Arborescence (MEGA), a DQD algorithm that leverages gradient information to efficiently explore the joint range of the objective and measure functions. Results in two QD benchmark domains and in searching the latent space of a StyleGAN show that MEGA significantly outperforms state-of-the-art QD algorithms, highlighting DQD's promise for efficient quality diversity optimization when gradient information is available. Source code is available at https://github.com/icaros-usc/dqd.
| null |
Credit Assignment Through Broadcasting a Global Error Vector
|
https://papers.nips.cc/paper_files/paper/2021/hash/532b81fa223a1b1ec74139a5b8151d12-Abstract.html
|
David Clark, L F Abbott, Sueyeon Chung
|
https://papers.nips.cc/paper_files/paper/2021/hash/532b81fa223a1b1ec74139a5b8151d12-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12392-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/532b81fa223a1b1ec74139a5b8151d12-Paper.pdf
|
https://openreview.net/forum?id=2JwLvfKR8AI
|
https://papers.nips.cc/paper_files/paper/2021/file/532b81fa223a1b1ec74139a5b8151d12-Supplemental.pdf
|
Backpropagation (BP) uses detailed, unit-specific feedback to train deep neural networks (DNNs) with remarkable success. That biological neural circuits appear to perform credit assignment, but cannot implement BP, implies the existence of other powerful learning algorithms. Here, we explore the extent to which a globally broadcast learning signal, coupled with local weight updates, enables training of DNNs. We present both a learning rule, called global error-vector broadcasting (GEVB), and a class of DNNs, called vectorized nonnegative networks (VNNs), in which this learning rule operates. VNNs have vector-valued units and nonnegative weights past the first layer. The GEVB learning rule generalizes three-factor Hebbian learning, updating each weight by an amount proportional to the inner product of the presynaptic activation and a globally broadcast error vector when the postsynaptic unit is active. We prove that these weight updates are matched in sign to the gradient, enabling accurate credit assignment. Moreover, at initialization, these updates are exactly proportional to the gradient in the limit of infinite network width. GEVB matches the performance of BP in VNNs, and in some cases outperforms direct feedback alignment (DFA) applied in conventional networks. Unlike DFA, GEVB successfully trains convolutional layers. Altogether, our theoretical and empirical results point to a surprisingly powerful role for a global learning signal in training DNNs.
| null |
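The GEVB entry above describes updating each weight in proportion to the inner product of the presynaptic activation (a vector, since units are vector-valued) and a single globally broadcast error vector, gated by whether the postsynaptic unit is active. The sketch below is a literal reading of that sentence for one layer; the shapes, the sign convention, and the gating rule are assumptions for illustration only, not the authors' code.

```python
import numpy as np

def gevb_update(W, pre_acts, post_active, error_vec, lr=1e-2):
    """
    W           : (n_post, n_pre) nonnegative weight matrix
    pre_acts    : (n_pre, m) activation vector of each presynaptic unit
    post_active : (n_post,) boolean, whether each postsynaptic unit is active
    error_vec   : (m,) globally broadcast error vector
    """
    local = pre_acts @ error_vec                     # (n_pre,) inner products
    dW = np.outer(post_active.astype(float), local)  # gate by postsynaptic activity
    # Keep weights nonnegative, as in the vectorized nonnegative networks (VNNs).
    return np.maximum(W - lr * dW, 0.0)
```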
An Online Method for A Class of Distributionally Robust Optimization with Non-convex Objectives
|
https://papers.nips.cc/paper_files/paper/2021/hash/533fa796b43291fc61a9e812a50c3fb6-Abstract.html
|
Qi Qi, Zhishuai Guo, Yi Xu, Rong Jin, Tianbao Yang
|
https://papers.nips.cc/paper_files/paper/2021/hash/533fa796b43291fc61a9e812a50c3fb6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12393-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/533fa796b43291fc61a9e812a50c3fb6-Paper.pdf
|
https://openreview.net/forum?id=VlQNa6n479n
|
https://papers.nips.cc/paper_files/paper/2021/file/533fa796b43291fc61a9e812a50c3fb6-Supplemental.pdf
|
In this paper, we propose a practical online method for solving a class of distributionally robust optimization (DRO) problems with non-convex objectives, which has important applications in machine learning for improving the robustness of neural networks. In the literature, most methods for solving DRO are based on stochastic primal-dual methods. However, primal-dual methods for DRO suffer from several drawbacks: (1) manipulating a high-dimensional dual variable whose dimension matches the size of the data is time-consuming; (2) they are not friendly to online learning where data arrive sequentially. To address these issues, we consider a class of DRO with a KL divergence regularization on the dual variables, transform the min-max problem into a compositional minimization problem, and propose practical duality-free online stochastic methods without requiring a large mini-batch size. We establish the state-of-the-art complexities of the proposed methods with and without a Polyak-Łojasiewicz (PL) condition of the objective. Empirical studies on large-scale deep learning tasks (i) demonstrate that our method can speed up training by more than 2 times over baseline methods and save days of training time on a large-scale dataset with ∼ 265K images, and (ii) verify the superior performance of DRO over Empirical Risk Minimization (ERM) on imbalanced datasets. Of independent interest, the proposed method can also be used for solving a family of stochastic compositional problems with state-of-the-art complexities.
| null |
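As background for the entry above: with a KL divergence regularizer (weight $\lambda$) on the dual variables, the inner maximization of the min-max DRO problem has the well-known log-sum-exp closed form, leaving a compositional minimization over the model parameters $w$ with per-sample losses $\ell_i$:

$$ \min_{w}\; \lambda \log\!\Big( \frac{1}{n} \sum_{i=1}^{n} \exp\big( \ell_i(w) / \lambda \big) \Big). $$

The objective is a log of an expectation of exponentiated losses, which is what makes the problem compositional and amenable to the duality-free online stochastic methods the paper proposes; this display is standard background, not the paper's exact formulation.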
A single gradient step finds adversarial examples on random two-layers neural networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/536eecee295b92db6b32194e269541f8-Abstract.html
|
Sebastien Bubeck, Yeshwanth Cherapanamjeri, Gauthier Gidel, Remi Tachet des Combes
|
https://papers.nips.cc/paper_files/paper/2021/hash/536eecee295b92db6b32194e269541f8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12394-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/536eecee295b92db6b32194e269541f8-Paper.pdf
|
https://openreview.net/forum?id=NE0YlkgRo9x
|
https://papers.nips.cc/paper_files/paper/2021/file/536eecee295b92db6b32194e269541f8-Supplemental.pdf
|
Daniely and Schacham recently showed that gradient descent finds adversarial examples on random undercomplete two-layers ReLU neural networks. The term “undercomplete” refers to the fact that their proof only holds when the number of neurons is a vanishing fraction of the ambient dimension. We extend their result to the overcomplete case, where the number of neurons is larger than the dimension (yet also subexponential in the dimension). In fact we prove that a single step of gradient descent suffices. We also show this result for any subexponential width random neural network with smooth activation function.
| null |
Parameterized Knowledge Transfer for Personalized Federated Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/5383c7318a3158b9bc261d0b6996f7c2-Abstract.html
|
Jie Zhang, Song Guo, Xiaosong Ma, Haozhao Wang, Wenchao Xu, Feijie Wu
|
https://papers.nips.cc/paper_files/paper/2021/hash/5383c7318a3158b9bc261d0b6996f7c2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12395-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5383c7318a3158b9bc261d0b6996f7c2-Paper.pdf
|
https://openreview.net/forum?id=_89s8ViNwwj
|
https://papers.nips.cc/paper_files/paper/2021/file/5383c7318a3158b9bc261d0b6996f7c2-Supplemental.pdf
|
In recent years, personalized federated learning (pFL) has attracted increasing attention for its potential in dealing with statistical heterogeneity among clients. However, state-of-the-art pFL methods rely on model parameter aggregation at the server side, which requires all models to have the same structure and size, and thus limits their application to more heterogeneous scenarios. To deal with such model constraints, we exploit the potential of heterogeneous model settings and propose a novel training framework that employs personalized models for different clients. Specifically, we formulate the aggregation procedure of original pFL into a personalized group knowledge transfer training algorithm, namely KT-pFL, which enables each client to maintain a personalized soft prediction at the server side to guide the others' local training. KT-pFL updates the personalized soft prediction of each client by a linear combination of all local soft predictions using a knowledge coefficient matrix, which can adaptively reinforce the collaboration among clients with similar data distributions. Furthermore, to quantify the contribution of each client to others' personalized training, the knowledge coefficient matrix is parameterized so that it can be trained simultaneously with the models. The knowledge coefficient matrix and the model parameters are alternately updated in each round via gradient descent. Extensive experiments on various datasets (EMNIST, Fashion_MNIST, CIFAR-10) are conducted under different settings (heterogeneous models and data distributions). They demonstrate that the proposed framework is the first federated learning paradigm that realizes personalized model training via parameterized group knowledge transfer, while achieving significant performance gains compared with state-of-the-art algorithms.
| null |
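A minimal sketch of the aggregation step described in the KT-pFL entry above: the server forms each client's personalized soft prediction as a linear combination of all clients' local soft predictions, weighted by a learnable knowledge coefficient matrix. The row normalization, tensor shapes, and the idea of evaluating on a shared proxy batch are assumptions made for this illustration.

```python
import numpy as np

def personalized_soft_predictions(local_soft_preds, coeff_matrix):
    """
    local_soft_preds : (n_clients, n_samples, n_classes) soft predictions on a
                       shared proxy batch
    coeff_matrix     : (n_clients, n_clients) knowledge coefficients c[i, j]
    returns          : (n_clients, n_samples, n_classes) personalized targets
    """
    # Row-normalise so each client's coefficients sum to one (an assumption).
    c = coeff_matrix / coeff_matrix.sum(axis=1, keepdims=True)
    # Client i's target is sum_j c[i, j] * predictions of client j.
    return np.einsum("ij,jsk->isk", c, local_soft_preds)
```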
Contrastively Disentangled Sequential Variational Autoencoder
|
https://papers.nips.cc/paper_files/paper/2021/hash/53c5b2affa12eed84dfec9bfd83550b1-Abstract.html
|
Junwen Bai, Weiran Wang, Carla P. Gomes
|
https://papers.nips.cc/paper_files/paper/2021/hash/53c5b2affa12eed84dfec9bfd83550b1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12396-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/53c5b2affa12eed84dfec9bfd83550b1-Paper.pdf
|
https://openreview.net/forum?id=rWPxhfz2_S
|
https://papers.nips.cc/paper_files/paper/2021/file/53c5b2affa12eed84dfec9bfd83550b1-Supplemental.pdf
|
Self-supervised disentangled representation learning is a critical task in sequence modeling. The learnt representations contribute to better model interpretability as well as data generation, and improve sample efficiency for downstream tasks. We propose a novel sequence representation learning method, named Contrastively Disentangled Sequential Variational Autoencoder (C-DSVAE), to extract and separate the static (time-invariant) and dynamic (time-variant) factors in the latent space. Different from previous sequential variational autoencoder methods, we use a novel evidence lower bound which maximizes the mutual information between the input and the latent factors, while penalizing the mutual information between the static and dynamic factors. We leverage contrastive estimations of the mutual information terms in training, together with simple yet effective augmentation techniques, to introduce additional inductive biases. Our experiments show that C-DSVAE significantly outperforms the previous state-of-the-art methods on multiple metrics.
| null |
Recursive Causal Structure Learning in the Presence of Latent Variables and Selection Bias
|
https://papers.nips.cc/paper_files/paper/2021/hash/53edebc543333dfbf7c5933af792c9c4-Abstract.html
|
Sina Akbari, Ehsan Mokhtarian, AmirEmad Ghassami, Negar Kiyavash
|
https://papers.nips.cc/paper_files/paper/2021/hash/53edebc543333dfbf7c5933af792c9c4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12397-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/53edebc543333dfbf7c5933af792c9c4-Paper.pdf
|
https://openreview.net/forum?id=eElERAwRbo
|
https://papers.nips.cc/paper_files/paper/2021/file/53edebc543333dfbf7c5933af792c9c4-Supplemental.pdf
|
We consider the problem of learning the causal MAG of a system from observational data in the presence of latent variables and selection bias. Constraint-based methods are one of the main approaches for solving this problem, but the existing methods are either computationally impractical when dealing with large graphs or lacking completeness guarantees. We propose a novel computationally efficient recursive constraint-based method that is sound and complete. The key idea of our approach is that at each iteration a specific type of variable is identified and removed. This allows us to learn the structure efficiently and recursively, as this technique reduces both the number of required conditional independence (CI) tests and the size of the conditioning sets. The former substantially reduces the computational complexity, while the latter results in more reliable CI tests. We provide an upper bound on the number of required CI tests in the worst case. To the best of our knowledge, this is the tightest bound in the literature. We further provide a lower bound on the number of CI tests required by any constraint-based method. The upper bound of our proposed approach and the lower bound at most differ by a factor equal to the number of variables in the worst case. We provide experimental results to compare the proposed approach with the state of the art on both synthetic and real-world structures.
| null |
Generalization Error Rates in Kernel Regression: The Crossover from the Noiseless to Noisy Regime
|
https://papers.nips.cc/paper_files/paper/2021/hash/543bec10c8325987595fcdc492a525f4-Abstract.html
|
Hugo Cui, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová
|
https://papers.nips.cc/paper_files/paper/2021/hash/543bec10c8325987595fcdc492a525f4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12398-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/543bec10c8325987595fcdc492a525f4-Paper.pdf
|
https://openreview.net/forum?id=Da_EHrAcfwd
|
https://papers.nips.cc/paper_files/paper/2021/file/543bec10c8325987595fcdc492a525f4-Supplemental.pdf
|
In this manuscript we consider Kernel Ridge Regression (KRR) under the Gaussian design. Exponents for the decay of the excess generalization error of KRR have been reported in various works under the assumption of power-law decay of the eigenvalues of the feature covariance. These decays were, however, provided for sizeably different setups, namely the noiseless case with constant regularization and the noisy, optimally regularized case. Intermediary settings have been left substantially uncharted. In this work, we unify and extend this line of work, providing a characterization of all regimes and excess error decay rates that can be observed in terms of the interplay of noise and regularization. In particular, we show the existence of a transition in the noisy setting from the noiseless exponents to their noisy values as the sample complexity is increased. Finally, we illustrate how this crossover can also be observed on real data sets.
| null |
Learning Gaussian Mixtures with Generalized Linear Models: Precise Asymptotics in High-dimensions
|
https://papers.nips.cc/paper_files/paper/2021/hash/543e83748234f7cbab21aa0ade66565f-Abstract.html
|
Bruno Loureiro, Gabriele Sicuro, Cedric Gerbelot, Alessandro Pacco, Florent Krzakala, Lenka Zdeborová
|
https://papers.nips.cc/paper_files/paper/2021/hash/543e83748234f7cbab21aa0ade66565f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12399-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/543e83748234f7cbab21aa0ade66565f-Paper.pdf
|
https://openreview.net/forum?id=j3eGyNMPvh
|
https://papers.nips.cc/paper_files/paper/2021/file/543e83748234f7cbab21aa0ade66565f-Supplemental.pdf
|
Generalised linear models for multi-class classification problems are one of the fundamental building blocks of modern machine learning tasks. In this manuscript, we characterise the learning of a mixture of $K$ Gaussians with generic means and covariances via empirical risk minimisation (ERM) with any convex loss and regularisation. In particular, we prove exact asymptotics characterising the ERM estimator in high-dimensions, extending several previous results about Gaussian mixture classification in the literature. We exemplify our result in two tasks of interest in statistical learning: a) classification for a mixture with sparse means, where we study the efficiency of $\ell_1$ penalty with respect to $\ell_2$; b) max-margin multi-class classification, where we characterise the phase transition on the existence of the multi-class logistic maximum likelihood estimator for $K>2$. Finally, we discuss how our theory can be applied beyond the scope of synthetic data, showing that in different cases Gaussian mixtures capture closely the learning curve of classification tasks in real data sets.
| null |
Spectral embedding for dynamic networks with stability guarantees
|
https://papers.nips.cc/paper_files/paper/2021/hash/5446f217e9504bc593ad9dcf2ec88dda-Abstract.html
|
Ian Gallagher, Andrew Jones, Patrick Rubin-Delanchy
|
https://papers.nips.cc/paper_files/paper/2021/hash/5446f217e9504bc593ad9dcf2ec88dda-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12400-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5446f217e9504bc593ad9dcf2ec88dda-Paper.pdf
|
https://openreview.net/forum?id=9-ArDPYbUZG
|
https://papers.nips.cc/paper_files/paper/2021/file/5446f217e9504bc593ad9dcf2ec88dda-Supplemental.pdf
|
We consider the problem of embedding a dynamic network, to obtain time-evolving vector representations of each node, which can then be used to describe changes in behaviour of individual nodes, communities, or the entire graph. Given this open-ended remit, we argue that two types of stability in the spatio-temporal positioning of nodes are desirable: to assign the same position, up to noise, to nodes behaving similarly at a given time (cross-sectional stability) and a constant position, up to noise, to a single node behaving similarly across different times (longitudinal stability). Similarity in behaviour is defined formally using notions of exchangeability under a dynamic latent position network model. By showing how this model can be recast as a multilayer random dot product graph, we demonstrate that unfolded adjacency spectral embedding satisfies both stability conditions. We also show how two alternative methods, omnibus and independent spectral embedding, alternately lack one or the other form of stability.
| null |
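The entry above advocates unfolded adjacency spectral embedding (UASE). A short sketch of that construction, under the usual reading of the method: the T adjacency matrices are concatenated column-wise and a rank-d truncated SVD yields one shared embedding per node plus one embedding per node per time point. The embedding dimension d is a user choice, and this is an illustration rather than the authors' code.

```python
import numpy as np

def unfolded_spectral_embedding(adjacency_list, d):
    """adjacency_list: list of T symmetric (n x n) adjacency matrices."""
    n = adjacency_list[0].shape[0]
    unfolded = np.hstack(adjacency_list)              # n x (n*T) unfolded matrix
    U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
    scale = np.sqrt(s[:d])
    anchor = U[:, :d] * scale                         # n x d, shared across time
    dynamic = Vt[:d].T * scale                        # (n*T) x d, time-varying
    # Reshape so dynamic[t] gives the n x d embedding at time point t.
    return anchor, dynamic.reshape(len(adjacency_list), n, d)
```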
Infinite Time Horizon Safety of Bayesian Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/544defa9fddff50c53b71c43e0da72be-Abstract.html
|
Mathias Lechner, Đorđe Žikelić, Krishnendu Chatterjee, Thomas Henzinger
|
https://papers.nips.cc/paper_files/paper/2021/hash/544defa9fddff50c53b71c43e0da72be-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12401-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/544defa9fddff50c53b71c43e0da72be-Paper.pdf
|
https://openreview.net/forum?id=mvcIGGWXPOV
|
https://papers.nips.cc/paper_files/paper/2021/file/544defa9fddff50c53b71c43e0da72be-Supplemental.pdf
|
Bayesian neural networks (BNNs) place distributions over the weights of a neural network to model uncertainty in the data and the network's prediction. We consider the problem of verifying safety when running a Bayesian neural network policy in a feedback loop with infinite time horizon systems. Compared to the existing sampling-based approaches, which are inapplicable to the infinite time horizon setting, we train a separate deterministic neural network that serves as an infinite time horizon safety certificate. In particular, we show that the certificate network guarantees the safety of the system over a subset of the BNN weight posterior's support. Our method first computes a safe weight set and then alters the BNN's weight posterior to reject samples outside this set. Moreover, we show how to extend our approach to a safe-exploration reinforcement learning setting, in order to avoid unsafe trajectories during the training of the policy. We evaluate our approach on a series of reinforcement learning benchmarks, including non-Lyapunovian safety specifications.
| null |
Towards understanding retrosynthesis by energy-based models
|
https://papers.nips.cc/paper_files/paper/2021/hash/5470abe68052c72afb19be45bb418d02-Abstract.html
|
Ruoxi Sun, Hanjun Dai, Li Li, Steven Kearnes, Bo Dai
|
https://papers.nips.cc/paper_files/paper/2021/hash/5470abe68052c72afb19be45bb418d02-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12402-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5470abe68052c72afb19be45bb418d02-Paper.pdf
|
https://openreview.net/forum?id=yGKi6deX8bX
|
https://papers.nips.cc/paper_files/paper/2021/file/5470abe68052c72afb19be45bb418d02-Supplemental.pdf
|
Retrosynthesis is the process of identifying a set of reactants to synthesize a target molecule. It is of vital importance to material design and drug discovery. Existing machine learning approaches based on language models and graph neural networks have achieved encouraging results. However, the inner connections of these models are rarely discussed, and rigorous evaluations of them are still largely lacking. In this paper, we propose a framework that unifies sequence- and graph-based methods as energy-based models (EBMs) with different energy functions. This unified view establishes connections and reveals the differences between models, thereby enhancing our understanding of model design. We also provide the community with a comprehensive assessment of performance. Moreover, we present a novel dual variant within the framework that performs consistent training to induce agreement between forward- and backward-prediction. This model improves the state of the art of template-free methods, with or without reaction types.
| null |
List-Decodable Mean Estimation in Nearly-PCA Time
|
https://papers.nips.cc/paper_files/paper/2021/hash/547b85f3fafdf30856386753dc21c4e1-Abstract.html
|
Ilias Diakonikolas, Daniel Kane, Daniel Kongsgaard, Jerry Li, Kevin Tian
|
https://papers.nips.cc/paper_files/paper/2021/hash/547b85f3fafdf30856386753dc21c4e1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12403-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/547b85f3fafdf30856386753dc21c4e1-Paper.pdf
|
https://openreview.net/forum?id=qb0qTdxPWzY
|
https://papers.nips.cc/paper_files/paper/2021/file/547b85f3fafdf30856386753dc21c4e1-Supplemental.pdf
|
Robust statistics has traditionally focused on designing estimators tolerant to a minority of contaminated data. {\em List-decodable learning}~\cite{CharikarSV17} studies the more challenging regime where only a minority $\tfrac 1 k$ fraction of the dataset, $k \geq 2$, is drawn from the distribution of interest, and no assumptions are made on the remaining data. We study the fundamental task of list-decodable mean estimation in high dimensions. Our main result is a new algorithm for bounded covariance distributions with optimal sample complexity and near-optimal error guarantee, running in {\em nearly-PCA time}. Assuming the ground truth distribution on $\mathbb{R}^d$ has identity-bounded covariance, our algorithm outputs $O(k)$ candidate means, one of which is within distance $O(\sqrt{k\log k})$ from the truth. Our algorithm runs in time $\widetilde{O}(ndk)$, where $n$ is the dataset size. This runtime nearly matches the cost of performing $k$-PCA on the data, a natural bottleneck of known algorithms for (very) special cases of our problem, such as clustering well-separated mixtures. Prior to our work, the fastest runtimes were $\widetilde{O}(n^2 d k^2)$~\cite{DiakonikolasKK20}, and $\widetilde{O}(nd k^C)$ \cite{CherapanamjeriMY20} for an unspecified constant $C \geq 6$. Our approach builds on a novel soft downweighting method we term SIFT, arguably the simplest known polynomial-time mean estimator in the list-decodable setting. To develop our fast algorithms, we boost the computational cost of SIFT via a careful ``win-win-win'' analysis of an approximate Ky Fan matrix multiplicative weights procedure we develop, which may be of independent interest.
| null |
Distributed Zero-Order Optimization under Adversarial Noise
|
https://papers.nips.cc/paper_files/paper/2021/hash/5487e79fa0ccd0b79e5d4a4c8ced005d-Abstract.html
|
Arya Akhavan, Massimiliano Pontil, Alexandre Tsybakov
|
https://papers.nips.cc/paper_files/paper/2021/hash/5487e79fa0ccd0b79e5d4a4c8ced005d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12404-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5487e79fa0ccd0b79e5d4a4c8ced005d-Paper.pdf
|
https://openreview.net/forum?id=S9dwZk0EXB
|
https://papers.nips.cc/paper_files/paper/2021/file/5487e79fa0ccd0b79e5d4a4c8ced005d-Supplemental.pdf
|
We study the problem of distributed zero-order optimization for a class of strongly convex functions. They are formed by the average of local objectives, associated to different nodes in a prescribed network. We propose a distributed zero-order projected gradient descent algorithm to solve the problem. Exchange of information within the network is permitted only between neighbouring nodes. An important feature of our procedure is that it can query only function values, subject to a general noise model, that does not require zero mean or independent errors. We derive upper bounds for the average cumulative regret and optimization error of the algorithm which highlight the role played by a network connectivity parameter, the number of variables, the noise level, the strong convexity parameter, and smoothness properties of the local objectives. The bounds indicate some key improvements of our method over the state-of-the-art, both in the distributed and standard zero-order optimization settings.
| null |
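To make the ingredients in the entry above concrete, the sketch below shows a generic step of the kind such distributed zero-order methods build on: a node forms a gradient estimate from two function evaluations along a random direction, mixes its iterate with its neighbours', and projects. The specific estimator, step sizes, mixing weights, and projection set are placeholders for illustration, not the paper's exact choices or its noise model.

```python
import numpy as np

def zero_order_step(x, f_query, neighbours_x, weights, h=1e-2, lr=1e-1, radius=10.0):
    d = x.size
    u = np.random.randn(d)
    u /= np.linalg.norm(u)                      # random direction on the unit sphere
    # Two-point zero-order gradient estimate from (possibly noisy) function values.
    g = d / (2 * h) * (f_query(x + h * u) - f_query(x - h * u)) * u

    # Consensus: convex combination of neighbours' iterates (doubly stochastic
    # mixing weights assumed), followed by a local gradient step.
    x_mixed = sum(w * xn for w, xn in zip(weights, neighbours_x))
    x_new = x_mixed - lr * g

    # Projection onto a Euclidean ball as a stand-in for the feasible set.
    norm = np.linalg.norm(x_new)
    return x_new if norm <= radius else x_new * (radius / norm)
```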
Reliable Estimation of KL Divergence using a Discriminator in Reproducing Kernel Hilbert Space
|
https://papers.nips.cc/paper_files/paper/2021/hash/54a367d629152b720749e187b3eaa11b-Abstract.html
|
Sandesh Ghimire, Aria Masoomi, Jennifer Dy
|
https://papers.nips.cc/paper_files/paper/2021/hash/54a367d629152b720749e187b3eaa11b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12405-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/54a367d629152b720749e187b3eaa11b-Paper.pdf
|
https://openreview.net/forum?id=xB4lGVLvXDz
|
https://papers.nips.cc/paper_files/paper/2021/file/54a367d629152b720749e187b3eaa11b-Supplemental.pdf
|
Estimating Kullback–Leibler (KL) divergence from samples of two distributions is essential in many machine learning problems. Variational methods using a neural network discriminator have been proposed to achieve this task in a scalable manner. However, we notice that most of these methods using neural network discriminators suffer from high fluctuations (variance) in estimates and instability in training. In this paper, we look at this issue from a statistical learning theory and function space complexity perspective to understand why this happens and how to solve it. We argue that the cause of these pathologies is a lack of control over the complexity of the neural network discriminator function, and that it could be mitigated by controlling that complexity. To achieve this objective, we 1) present a novel construction of the discriminator in the Reproducing Kernel Hilbert Space (RKHS), 2) theoretically relate the error probability bound of the KL estimates to the complexity of the discriminator in the RKHS space, 3) present a scalable way to control the complexity (RKHS norm) of the discriminator for a reliable estimation of KL divergence, and 4) prove the consistency of the proposed estimator. In three different applications of KL divergence -- estimation of KL, estimation of mutual information and Variational Bayes -- we show that by controlling the complexity as developed in the theory, we are able to reduce the variance of KL estimates and stabilize the training.
| null |
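For reference on the entry above: one standard variational (Donsker-Varadhan) representation that discriminator-based KL estimators build on maximizes, over a function class $\mathcal{F}$, the objective

$$ \mathrm{KL}(P \,\|\, Q) \;\ge\; \sup_{f \in \mathcal{F}} \; \mathbb{E}_{P}\big[f(X)\big] \;-\; \log \mathbb{E}_{Q}\big[e^{f(X)}\big], $$

with equality when $\mathcal{F}$ is rich enough to contain the true log density ratio. Restricting $f$ to a ball in an RKHS, as the paper proposes, is a way of bounding the complexity of this function class; the display is standard background and not a statement of the paper's exact estimator.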
Latent Matters: Learning Deep State-Space Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/54b2b21af94108d83c2a909d5b0a6a50-Abstract.html
|
Alexej Klushyn, Richard Kurle, Maximilian Soelch, Botond Cseke, Patrick van der Smagt
|
https://papers.nips.cc/paper_files/paper/2021/hash/54b2b21af94108d83c2a909d5b0a6a50-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12406-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/54b2b21af94108d83c2a909d5b0a6a50-Paper.pdf
|
https://openreview.net/forum?id=-WEryOMRpZU
|
https://papers.nips.cc/paper_files/paper/2021/file/54b2b21af94108d83c2a909d5b0a6a50-Supplemental.pdf
|
Deep state-space models (DSSMs) enable temporal predictions by learning the underlying dynamics of observed sequence data. They are often trained by maximising the evidence lower bound. However, as we show, this does not ensure the model actually learns the underlying dynamics. We therefore propose a constrained optimisation framework as a general approach for training DSSMs. Building upon this, we introduce the extended Kalman VAE (EKVAE), which combines amortised variational inference with classic Bayesian filtering/smoothing to model dynamics more accurately than RNN-based DSSMs. Our results show that the constrained optimisation framework significantly improves system identification and prediction accuracy on the example of established state-of-the-art DSSMs. The EKVAE outperforms previous models w.r.t. prediction accuracy, achieves remarkable results in identifying dynamical systems, and can furthermore successfully learn state-space representations where static and dynamic features are disentangled.
| null |
On the Estimation Bias in Double Q-Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/54e8912427a8d007ece906c577fdca60-Abstract.html
|
Zhizhou Ren, Guangxiang Zhu, Hao Hu, Beining Han, Jianglun Chen, Chongjie Zhang
|
https://papers.nips.cc/paper_files/paper/2021/hash/54e8912427a8d007ece906c577fdca60-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12407-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/54e8912427a8d007ece906c577fdca60-Paper.pdf
|
https://openreview.net/forum?id=JV0lxbO1W7m
|
https://papers.nips.cc/paper_files/paper/2021/file/54e8912427a8d007ece906c577fdca60-Supplemental.pdf
|
Double Q-learning is a classical method for reducing overestimation bias, which is caused by taking maximum estimated values in the Bellman operation. Its variants in the deep Q-learning paradigm have shown great promise in producing reliable value prediction and improving learning performance. However, as shown by prior work, double Q-learning is not fully unbiased and suffers from underestimation bias. In this paper, we show that such underestimation bias may lead to multiple non-optimal fixed points under an approximate Bellman operator. To address the concerns of converging to non-optimal stationary solutions, we propose a simple but effective approach as a partial fix for the underestimation bias in double Q-learning. This approach leverages an approximate dynamic programming to bound the target value. We extensively evaluate our proposed method in the Atari benchmark tasks and demonstrate its significant improvement over baseline algorithms.
| null |
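For context on the entry above, here is a compact sketch of the standard double Q-learning target, the construction whose residual underestimation bias the paper analyses: action selection uses one network while action evaluation uses the other. Network names and the discount value are illustrative.

```python
import torch

def double_q_target(reward, next_state, done, q_online, q_target, gamma=0.99):
    with torch.no_grad():
        # Select the greedy action with the online network ...
        next_action = q_online(next_state).argmax(dim=1, keepdim=True)
        # ... but evaluate that action with the target network.
        next_value = q_target(next_state).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_value
```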
Mitigating Forgetting in Online Continual Learning with Neuron Calibration
|
https://papers.nips.cc/paper_files/paper/2021/hash/54ee290e80589a2a1225c338a71839f5-Abstract.html
|
Haiyan Yin, peng yang, Ping Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/54ee290e80589a2a1225c338a71839f5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12408-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/54ee290e80589a2a1225c338a71839f5-Paper.pdf
|
https://openreview.net/forum?id=FLHl3Wg4sv
|
https://papers.nips.cc/paper_files/paper/2021/file/54ee290e80589a2a1225c338a71839f5-Supplemental.pdf
|
Inspired by human intelligence, research on online continual learning aims to push the limits of machine learning models to constantly learn from sequentially encountered tasks, with the data from each task being observed in an online fashion. Though recent studies, empowered by deep neural network-based models, have achieved remarkable progress in improving online continual learning performance, many of today's approaches still suffer greatly from catastrophic forgetting, a persistent challenge for continual learning. In this paper, we present a novel method which attempts to mitigate catastrophic forgetting in online continual learning from a new perspective, i.e., neuron calibration. In particular, we model the neurons in deep neural network-based models as calibrated units under a general formulation. We then formalize a learning framework to effectively train the calibrated model, in which neuron calibration provides a broad benefit for balancing the stability and plasticity of online continual learning algorithms by influencing both their forward inference path and their backward optimization path. Our proposed formulation for neuron calibration is lightweight and applicable to general feed-forward neural network-based models. We perform extensive experiments to evaluate our method on four benchmark continual learning datasets. The results show that neuron calibration plays a vital role in improving online continual learning performance and that our method substantially improves the state-of-the-art performance on all the evaluated datasets.
| null |
Escaping Saddle Points with Compressed SGD
|
https://papers.nips.cc/paper_files/paper/2021/hash/54eea69746513c0b90bbe6227b6f46c3-Abstract.html
|
Dmitrii Avdiukhin, Grigory Yaroslavtsev
|
https://papers.nips.cc/paper_files/paper/2021/hash/54eea69746513c0b90bbe6227b6f46c3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12409-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/54eea69746513c0b90bbe6227b6f46c3-Paper.pdf
|
https://openreview.net/forum?id=Ke9lCi1vGF
|
https://papers.nips.cc/paper_files/paper/2021/file/54eea69746513c0b90bbe6227b6f46c3-Supplemental.pdf
|
Stochastic gradient descent (SGD) is a prevalent optimization technique for large-scale distributed machine learning. While SGD computation can be efficiently divided between multiple machines, communication typically becomes a bottleneck in the distributed setting. Gradient compression methods can be used to alleviate this problem, and a recent line of work shows that SGD augmented with gradient compression converges to an $\varepsilon$-first-order stationary point. In this paper we extend these results to convergence to an $\varepsilon$-second-order stationary point ($\varepsilon$-SOSP), which is to the best of our knowledge the first result of this type. In addition, we show that, when the stochastic gradient is not Lipschitz, compressed SGD with RandomK compressor converges to an $\varepsilon$-SOSP with the same number of iterations as uncompressed SGD [Jin et al.,2021] (JACM), while improving the total communication by a factor of $\tilde \Theta(\sqrt{d} \varepsilon^{-3/4})$, where $d$ is the dimension of the optimization problem. We present additional results for the cases when the compressor is arbitrary and when the stochastic gradient is Lipschitz.
| null |
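A small sketch of the RandomK gradient compressor referred to in the entry above: keep k uniformly random coordinates and rescale by d/k so that the compressed vector is an unbiased estimate of the original gradient. The function and parameter names are ours, and this scaled variant is one common definition rather than necessarily the paper's exact one.

```python
import torch

def random_k(grad, k):
    d = grad.numel()
    idx = torch.randperm(d, device=grad.device)[:k]   # k random coordinates
    out = torch.zeros_like(grad).flatten()
    out[idx] = grad.flatten()[idx] * (d / k)          # rescale for unbiasedness
    return out.view_as(grad)
```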
Non-Gaussian Gaussian Processes for Few-Shot Regression
|
https://papers.nips.cc/paper_files/paper/2021/hash/54f3bc04830d762a3b56a789b6ff62df-Abstract.html
|
Marcin Sendera, Jacek Tabor, Aleksandra Nowak, Andrzej Bedychaj, Massimiliano Patacchiola, Tomasz Trzcinski, Przemysław Spurek, Maciej Zieba
|
https://papers.nips.cc/paper_files/paper/2021/hash/54f3bc04830d762a3b56a789b6ff62df-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12410-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/54f3bc04830d762a3b56a789b6ff62df-Paper.pdf
|
https://openreview.net/forum?id=pX7gwTNljqa
|
https://papers.nips.cc/paper_files/paper/2021/file/54f3bc04830d762a3b56a789b6ff62df-Supplemental.pdf
|
Gaussian Processes (GPs) have been widely used in machine learning to model distributions over functions, with applications including multi-modal regression, time-series prediction, and few-shot learning. GPs are particularly useful in the last application since they rely on Normal distributions and enable closed-form computation of the posterior probability function. Unfortunately, because the resulting posterior is not flexible enough to capture complex distributions, GPs assume high similarity between subsequent tasks - a requirement rarely met in real-world conditions. In this work, we address this limitation by leveraging the flexibility of Normalizing Flows to modulate the posterior predictive distribution of the GP. This makes the GP posterior locally non-Gaussian, therefore we name our method Non-Gaussian Gaussian Processes (NGGPs). More precisely, we propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them. We empirically tested the flexibility of NGGPs on various few-shot learning regression datasets, showing that the mapping can incorporate context embedding information to model different noise levels for periodic functions. As a result, our method shares the structure of the problem between subsequent tasks, but the contextualization allows for adaptation to dissimilarities. NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
| null |
Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/550a141f12de6341fba65b0ad0433500-Abstract.html
|
Yiqin Yang, Xiaoteng Ma, Chenghao Li, Zewu Zheng, Qiyuan Zhang, Gao Huang, Jun Yang, Qianchuan Zhao
|
https://papers.nips.cc/paper_files/paper/2021/hash/550a141f12de6341fba65b0ad0433500-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12411-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/550a141f12de6341fba65b0ad0433500-Paper.pdf
|
https://openreview.net/forum?id=6tM849_6RF9
| null |
Learning from datasets without interaction with environments (Offline Learning) is an essential step towards applying Reinforcement Learning (RL) algorithms in real-world scenarios. However, compared with the single-agent counterpart, offline multi-agent RL introduces more agents with larger state and action spaces, which is more challenging but has attracted little attention. We demonstrate that current offline RL algorithms are ineffective in multi-agent systems due to the accumulated extrapolation error. In this paper, we propose a novel offline RL algorithm, named Implicit Constraint Q-learning (ICQ), which effectively alleviates the extrapolation error by trusting only the state-action pairs given in the dataset for value estimation. Moreover, we extend ICQ to multi-agent tasks by decomposing the joint policy under the implicit constraint. Experimental results demonstrate that the extrapolation error is successfully controlled within a reasonable range and is insensitive to the number of agents. We further show that ICQ achieves state-of-the-art performance on challenging multi-agent offline tasks (StarCraft II). Our code is publicly available at https://github.com/YiqinYang/ICQ.
| null |
Online Learning in Periodic Zero-Sum Games
|
https://papers.nips.cc/paper_files/paper/2021/hash/55563844bcd4bba067fe86ac1f008c7e-Abstract.html
|
Tanner Fiez, Ryann Sim, Stratis Skoulakis, Georgios Piliouras, Lillian Ratliff
|
https://papers.nips.cc/paper_files/paper/2021/hash/55563844bcd4bba067fe86ac1f008c7e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12412-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/55563844bcd4bba067fe86ac1f008c7e-Paper.pdf
|
https://openreview.net/forum?id=ePgNDwRKJV
|
https://papers.nips.cc/paper_files/paper/2021/file/55563844bcd4bba067fe86ac1f008c7e-Supplemental.pdf
|
A seminal result in game theory is von Neumann's minmax theorem, which states that zero-sum games admit an essentially unique equilibrium solution. Classical learning results build on this theorem to show that online no-regret dynamics converge to an equilibrium in a time-average sense in zero-sum games. In the past several years, a key research direction has focused on characterizing the transient behavior of such dynamics. General results in this direction show that broad classes of online learning dynamics are cyclic, and formally Poincar\'{e} recurrent, in zero-sum games. We analyze the robustness of these online learning behaviors in the case of periodic zero-sum games with a time-invariant equilibrium. This model generalizes the usual repeated game formulation while also being a realistic and natural model of a repeated competition between players that depends on exogenous environmental variations such as time-of-day effects, week-to-week trends, and seasonality. Interestingly, time-average convergence may fail even in the simplest such settings, in spite of the equilibrium being fixed. In contrast, using novel analysis methods, we show that Poincar\'{e} recurrence provably generalizes despite the complex, non-autonomous nature of these dynamical systems.
| null |
K-Net: Towards Unified Image Segmentation
|
https://papers.nips.cc/paper_files/paper/2021/hash/55a7cf9c71f1c9c495413f934dd1a158-Abstract.html
|
Wenwei Zhang, Jiangmiao Pang, Kai Chen, Chen Change Loy
|
https://papers.nips.cc/paper_files/paper/2021/hash/55a7cf9c71f1c9c495413f934dd1a158-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12413-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/55a7cf9c71f1c9c495413f934dd1a158-Paper.pdf
|
https://openreview.net/forum?id=uDeDDoFOEpj
|
https://papers.nips.cc/paper_files/paper/2021/file/55a7cf9c71f1c9c495413f934dd1a158-Supplemental.pdf
|
Semantic, instance, and panoptic segmentations have been addressed using different and specialized frameworks despite their underlying connections. This paper presents a unified, simple, and effective framework for these essentially similar tasks. The framework, named K-Net, segments both instances and semantic categories consistently by a group of learnable kernels, where each kernel is responsible for generating a mask for either a potential instance or a stuff class. To remedy the difficulties of distinguishing various instances, we propose a kernel update strategy that makes each kernel dynamic and conditional on its meaningful group in the input image. K-Net can be trained in an end-to-end manner with bipartite matching, and its training and inference are naturally NMS-free and box-free. Without bells and whistles, K-Net surpasses all previously published state-of-the-art single-model results for panoptic segmentation on the MS COCO test-dev split and semantic segmentation on the ADE20K val split, with 55.2% PQ and 54.3% mIoU, respectively. Its instance segmentation performance is also on par with Cascade Mask R-CNN on MS COCO, with 60%-90% faster inference speeds. Code and models will be released at https://github.com/ZwwWayne/K-Net/.
| null |
Pareto-Optimal Learning-Augmented Algorithms for Online Conversion Problems
|
https://papers.nips.cc/paper_files/paper/2021/hash/55a988dfb00a914717b3000a3374694c-Abstract.html
|
Bo Sun, Russell Lee, Mohammad Hajiesmaili, Adam Wierman, Danny Tsang
|
https://papers.nips.cc/paper_files/paper/2021/hash/55a988dfb00a914717b3000a3374694c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12414-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/55a988dfb00a914717b3000a3374694c-Paper.pdf
|
https://openreview.net/forum?id=ycmcCSoNBx8
|
https://papers.nips.cc/paper_files/paper/2021/file/55a988dfb00a914717b3000a3374694c-Supplemental.pdf
|
This paper leverages machine-learned predictions to design competitive algorithms for online conversion problems with the goal of improving the competitive ratio when predictions are accurate (i.e., consistency), while also guaranteeing a worst-case competitive ratio regardless of the prediction quality (i.e., robustness). We unify the algorithmic design of both integral and fractional conversion problems, which are also known as the 1-max-search and one-way trading problems, into a class of online threshold-based algorithms (OTA). By incorporating predictions into the design of OTA, we achieve the Pareto-optimal trade-off between consistency and robustness, i.e., no online algorithm can achieve a better consistency guarantee for a given robustness guarantee. We demonstrate the performance of OTA using numerical experiments on Bitcoin conversion.
| null |
Dynaboard: An Evaluation-As-A-Service Platform for Holistic Next-Generation Benchmarking
|
https://papers.nips.cc/paper_files/paper/2021/hash/55b1927fdafef39c48e5b73b5d61ea60-Abstract.html
|
Zhiyi Ma, Kawin Ethayarajh, Tristan Thrush, Somya Jain, Ledell Wu, Robin Jia, Christopher Potts, Adina Williams, Douwe Kiela
|
https://papers.nips.cc/paper_files/paper/2021/hash/55b1927fdafef39c48e5b73b5d61ea60-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12415-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/55b1927fdafef39c48e5b73b5d61ea60-Paper.pdf
|
https://openreview.net/forum?id=TCarYAus7JL
|
https://papers.nips.cc/paper_files/paper/2021/file/55b1927fdafef39c48e5b73b5d61ea60-Supplemental.pdf
|
We introduce Dynaboard, an evaluation-as-a-service framework for hosting benchmarks and conducting holistic model comparison, integrated with the Dynabench platform. Our platform evaluates NLP models directly instead of relying on self-reported metrics or predictions on a single dataset. Under this paradigm, models are submitted to be evaluated in the cloud, circumventing the issues of reproducibility, accessibility, and backwards compatibility that often hinder benchmarking in NLP. This allows users to interact with uploaded models in real time to assess their quality, and permits the collection of additional metrics such as memory use, throughput, and robustness, which -- despite their importance to practitioners -- have traditionally been absent from leaderboards. On each task, models are ranked according to the Dynascore, a novel utility-based aggregation of these statistics, which users can customize to better reflect their preferences, placing more/less weight on a particular axis of evaluation or dataset. As state-of-the-art NLP models push the limits of traditional benchmarks, Dynaboard offers a standardized solution for a more diverse and comprehensive evaluation of model quality.
| null |
NTopo: Mesh-free Topology Optimization using Implicit Neural Representations
|
https://papers.nips.cc/paper_files/paper/2021/hash/55d99a37b2e1badba7c8df4ccd506a88-Abstract.html
|
Jonas Zehnder, Yue Li, Stelian Coros, Bernhard Thomaszewski
|
https://papers.nips.cc/paper_files/paper/2021/hash/55d99a37b2e1badba7c8df4ccd506a88-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12416-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/55d99a37b2e1badba7c8df4ccd506a88-Paper.pdf
|
https://openreview.net/forum?id=bBHHU4dW88g
|
https://papers.nips.cc/paper_files/paper/2021/file/55d99a37b2e1badba7c8df4ccd506a88-Supplemental.zip
|
Recent advances in implicit neural representations show great promise when it comes to generating numerical solutions to partial differential equations. Compared to conventional alternatives, such representations employ parameterized neural networks to define, in a mesh-free manner, signals that are highly-detailed, continuous, and fully differentiable. In this work, we present a novel machine learning approach for topology optimization---an important class of inverse problems with high-dimensional parameter spaces and highly nonlinear objective landscapes. To effectively leverage neural representations in the context of mesh-free topology optimization, we use multilayer perceptrons to parameterize both density and displacement fields. Our experiments indicate that our method is highly competitive for minimizing structural compliance objectives, and it enables self-supervised learning of continuous solution spaces for topology optimization problems.
| null |
Generalization Bounds for (Wasserstein) Robust Optimization
|
https://papers.nips.cc/paper_files/paper/2021/hash/55fd1368113e5a675e868c5653a7bb9e-Abstract.html
|
Yang An, Rui Gao
|
https://papers.nips.cc/paper_files/paper/2021/hash/55fd1368113e5a675e868c5653a7bb9e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12417-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/55fd1368113e5a675e868c5653a7bb9e-Paper.pdf
|
https://openreview.net/forum?id=70fOkZPtGqT
|
https://papers.nips.cc/paper_files/paper/2021/file/55fd1368113e5a675e868c5653a7bb9e-Supplemental.pdf
|
(Distributionally) robust optimization has recently gained momentum in the machine learning community, due to its promising applications in developing generalizable learning paradigms. In this paper, we derive generalization bounds for robust optimization and Wasserstein robust optimization for Lipschitz and piecewise Hölder smooth loss functions under both stochastic and adversarial settings, assuming that the underlying data distribution satisfies transportation-information inequalities. The proofs are built on new generalization bounds for variation regularization (such as Lipschitz or gradient regularization) and its connection with robustness.
| null |
Faster Matchings via Learned Duals
|
https://papers.nips.cc/paper_files/paper/2021/hash/5616060fb8ae85d93f334e7267307664-Abstract.html
|
Michael Dinitz, Sungjin Im, Thomas Lavastida, Benjamin Moseley, Sergei Vassilvitskii
|
https://papers.nips.cc/paper_files/paper/2021/hash/5616060fb8ae85d93f334e7267307664-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12418-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5616060fb8ae85d93f334e7267307664-Paper.pdf
|
https://openreview.net/forum?id=kB8eks2Edt8
|
https://papers.nips.cc/paper_files/paper/2021/file/5616060fb8ae85d93f334e7267307664-Supplemental.pdf
|
A recent line of research investigates how algorithms can be augmented with machine-learned predictions to overcome worst case lower bounds. This area has revealed interesting algorithmic insights into problems, with particular success in the design of competitive online algorithms. However, the question of improving algorithm running times with predictions has largely been unexplored. We take a first step in this direction by combining the idea of machine-learned predictions with the idea of ``warm-starting" primal-dual algorithms. We consider one of the most important primitives in combinatorial optimization: weighted bipartite matching and its generalization to $b$-matching. We identify three key challenges when using learned dual variables in a primal-dual algorithm. First, predicted duals may be infeasible, so we give an algorithm that efficiently maps predicted infeasible duals to nearby feasible solutions. Second, once the duals are feasible, they may not be optimal, so we show that they can be used to quickly find an optimal solution. Finally, such predictions are useful only if they can be learned, so we show that the problem of learning duals for matching has low sample complexity. We validate our theoretical findings through experiments on both real and synthetic data. As a result we give a rigorous, practical, and empirically effective method to compute bipartite matchings.
| null |
Online learning in MDPs with linear function approximation and bandit feedback.
|
https://papers.nips.cc/paper_files/paper/2021/hash/5631e6ee59a4175cd06c305840562ff3-Abstract.html
|
Gergely Neu, Julia Olkhovskaya
|
https://papers.nips.cc/paper_files/paper/2021/hash/5631e6ee59a4175cd06c305840562ff3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12419-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5631e6ee59a4175cd06c305840562ff3-Paper.pdf
|
https://openreview.net/forum?id=gviX23L1bqw
|
https://papers.nips.cc/paper_files/paper/2021/file/5631e6ee59a4175cd06c305840562ff3-Supplemental.pdf
|
We consider the problem of online learning in an episodic Markov decision process, where the reward function is allowed to change between episodes in an adversarial manner and the learner only observes the rewards associated with its actions. We assume that rewards and the transition function can be represented as linear functions in terms of a known low-dimensional feature map, which allows us to consider the setting where the state space is arbitrarily large. We also assume that the learner has a perfect knowledge of the MDP dynamics. Our main contribution is developing an algorithm whose expected regret after $T$ episodes is bounded by $\widetilde{\mathcal{O}}(\sqrt{dHT})$, where $H$ is the number of steps in each episode and $d$ is the dimensionality of the feature map.
| null |
Learning Collaborative Policies to Solve NP-hard Routing Problems
|
https://papers.nips.cc/paper_files/paper/2021/hash/564127c03caab942e503ee6f810f54fd-Abstract.html
|
Minsu Kim, Jinkyoo Park, joungho kim
|
https://papers.nips.cc/paper_files/paper/2021/hash/564127c03caab942e503ee6f810f54fd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12420-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/564127c03caab942e503ee6f810f54fd-Paper.pdf
|
https://openreview.net/forum?id=pDgN3l3EAF
|
https://papers.nips.cc/paper_files/paper/2021/file/564127c03caab942e503ee6f810f54fd-Supplemental.pdf
|
Recently, deep reinforcement learning (DRL) frameworks have shown potential for solving NP-hard routing problems such as the traveling salesman problem (TSP) without problem-specific expert knowledge. Although DRL can be used to solve complex problems, DRL frameworks still struggle to compete with state-of-the-art heuristics, showing a substantial performance gap. This paper proposes a novel hierarchical problem-solving strategy, termed learning collaborative policies (LCP), which can effectively find near-optimum solutions using two iterative DRL policies: the seeder and the reviser. The seeder generates candidate solutions (seeds) that are as diversified as possible while being dedicated to exploring the full combinatorial action space (i.e., the sequence of assignment actions). To this end, we train the seeder's policy using a simple yet effective entropy regularization reward that encourages the seeder to find diverse solutions. On the other hand, the reviser modifies each candidate solution generated by the seeder; it partitions the full trajectory into sub-tours and simultaneously revises each sub-tour to minimize its traveling distance. Thus, the reviser is trained to improve the candidate solution's quality, focusing on the reduced solution space (which is beneficial for exploitation). Extensive experiments demonstrate that the proposed two-policy collaboration scheme improves over single-policy DRL frameworks on various NP-hard routing problems, including TSP, the prize collecting TSP (PCTSP), and the capacitated vehicle routing problem (CVRP).
| null |
Efficient Mirror Descent Ascent Methods for Nonsmooth Minimax Problems
|
https://papers.nips.cc/paper_files/paper/2021/hash/56503192b14190d3826780d47c0d3bf3-Abstract.html
|
Feihu Huang, Xidong Wu, Heng Huang
|
https://papers.nips.cc/paper_files/paper/2021/hash/56503192b14190d3826780d47c0d3bf3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12421-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/56503192b14190d3826780d47c0d3bf3-Paper.pdf
|
https://openreview.net/forum?id=3EuMT2Lqn4q
|
https://papers.nips.cc/paper_files/paper/2021/file/56503192b14190d3826780d47c0d3bf3-Supplemental.pdf
|
In this paper, we propose a class of efficient mirror descent ascent methods to solve nonsmooth nonconvex-strongly-concave minimax problems by using dynamic mirror functions, and we introduce a convergence analysis framework to conduct rigorous theoretical analysis of our mirror descent ascent methods. For our stochastic algorithms, we first prove that the mini-batch stochastic mirror descent ascent (SMDA) method obtains a gradient complexity of $O(\kappa^3\epsilon^{-4})$ for finding an $\epsilon$-stationary point, where $\kappa$ denotes the condition number. Further, we propose an accelerated stochastic mirror descent ascent (VR-SMDA) method based on the variance reduced technique. We prove that our VR-SMDA method achieves a lower gradient complexity of $O(\kappa^3\epsilon^{-3})$. For our deterministic algorithm, we prove that our deterministic mirror descent ascent (MDA) achieves a lower gradient complexity of $O(\sqrt{\kappa}\epsilon^{-2})$ under mild conditions, which matches the best known complexity for solving smooth nonconvex-strongly-concave minimax optimization. We conduct experiments on fair classifier and robust neural network training tasks to demonstrate the efficiency of our new algorithms.
| null |
CO-PILOT: COllaborative Planning and reInforcement Learning On sub-Task curriculum
|
https://papers.nips.cc/paper_files/paper/2021/hash/56577889b3c1cd083b6d7b32d32f99d5-Abstract.html
|
Shuang Ao, Tianyi Zhou, Guodong Long, Qinghua Lu, Liming Zhu, Jing Jiang
|
https://papers.nips.cc/paper_files/paper/2021/hash/56577889b3c1cd083b6d7b32d32f99d5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12422-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/56577889b3c1cd083b6d7b32d32f99d5-Paper.pdf
|
https://openreview.net/forum?id=uz_2t6VZby
|
https://papers.nips.cc/paper_files/paper/2021/file/56577889b3c1cd083b6d7b32d32f99d5-Supplemental.pdf
|
Goal-conditioned reinforcement learning (RL) usually suffers from sparse rewards and inefficient exploration in long-horizon tasks. Planning can find the shortest path to a distant goal, providing dense reward/guidance, but it is inaccurate without a precise environment model. We show that RL and planning can collaboratively learn from each other to overcome their respective drawbacks. In ''CO-PILOT'', a learnable path-planner and an RL agent produce dense feedback to train each other on a curriculum of tree-structured sub-tasks. First, the planner recursively decomposes a long-horizon task into a tree of sub-tasks in a top-down manner, whose layers construct coarse-to-fine sub-task sequences as plans to complete the original task. The planning policy is trained to minimize the RL agent's cost of completing the sequence in each layer from the top to the bottom layers, which gradually increases the number of sub-tasks and thus forms an easy-to-hard curriculum for the planner. Next, a bottom-up traversal of the tree trains the RL agent from easier sub-tasks with denser rewards on the bottom layers to harder ones on the top layers, and collects its cost on each sub-task to train the planner in the next episode. CO-PILOT repeats this mutual training for multiple episodes before switching to a new task, so the RL agent and planner are fully optimized to facilitate each other's training. We compare CO-PILOT with RL (SAC, HER, PPO), planning (RRT*, NEXT, SGT), and their combination (SoRB) on navigation and continuous control tasks. CO-PILOT significantly improves the success rate and sample efficiency.
| null |
Modality-Agnostic Topology Aware Localization
|
https://papers.nips.cc/paper_files/paper/2021/hash/569ff987c643b4bedf504efda8f786c2-Abstract.html
|
Farhad Ghazvinian Zanjani, Ilia Karmanov, Hanno Ackermann, Daniel Dijkman, Simone Merlin, Max Welling, Fatih Porikli
|
https://papers.nips.cc/paper_files/paper/2021/hash/569ff987c643b4bedf504efda8f786c2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12423-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/569ff987c643b4bedf504efda8f786c2-Paper.pdf
|
https://openreview.net/forum?id=3v6n7458GAq
|
https://papers.nips.cc/paper_files/paper/2021/file/569ff987c643b4bedf504efda8f786c2-Supplemental.pdf
|
This work presents a data-driven approach for the indoor localization of an observer on a 2D topological map of the environment. State-of-the-art techniques may yield accurate estimates only when they are tailor-made for a specific data modality, such as camera-based systems, which prevents their applicability to broader domains. Here, we establish a modality-agnostic framework (called OT-Isomap) and formulate the localization problem in the context of parametric manifold learning while leveraging optimal transportation. This framework allows jointly learning a low-dimensional embedding as well as correspondences with a topological map. We examine the generalizability of the proposed algorithm by applying it to data from diverse modalities such as image sequences and radio frequency signals. The experimental results demonstrate decimeter-level accuracy for localization using different sensory inputs.
| null |