title: stringlengths (15–153)
url: stringlengths (97–97)
authors: stringlengths (6–328)
detail_url: stringlengths (97–97)
tags: stringclasses (1 value)
Bibtex: stringlengths (54–54)
Paper: stringlengths (93–93)
Reviews And Public Comment »: stringlengths (63–65)
Supplemental: stringlengths (100–100)
abstract: stringlengths (310–2.42k)
Supplemental Errata: stringclasses (1 value)
Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation
https://papers.nips.cc/paper_files/paper/2021/hash/33a854e247155d590883b93bca53848a-Abstract.html
Jixuan Wang, Kuan-Chieh Wang, Frank Rudzicz, Michael Brudno
https://papers.nips.cc/paper_files/paper/2021/hash/33a854e247155d590883b93bca53848a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12124-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/33a854e247155d590883b93bca53848a-Paper.pdf
https://openreview.net/forum?id=jScy7BjbZeQ
https://papers.nips.cc/paper_files/paper/2021/file/33a854e247155d590883b93bca53848a-Supplemental.pdf
Large pretrained language models (LMs) like BERT have improved performance in many disparate natural language processing (NLP) tasks. However, fine-tuning such models requires a large number of training examples for each target task. Simultaneously, many realistic NLP problems are "few shot", without a sufficiently large training set. In this work, we propose a novel conditional neural process-based approach for few-shot text classification that learns to transfer from other diverse tasks with rich annotation. Our key idea is to represent each task using gradient information from a base model and to train an adaptation network that modulates a text classifier conditioned on the task representation. While previous task-aware few-shot learners represent tasks by input encoding, our novel task representation is more powerful, as the gradient captures input-output relationships of a task. Experimental results show that our approach outperforms traditional fine-tuning, sequential transfer learning, and state-of-the-art meta learning approaches on a collection of diverse few-shot tasks. We further conduct analyses and ablations to justify our design choices.
null
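As a rough, hypothetical illustration of the gradient-based task representation described in the Grad2Task abstract above, the sketch below stands a tiny MLP in for the BERT encoder and uses a FiLM-style scale/shift as the modulation produced by the adaptation network; all sizes, module names, and the conditioning scheme are assumptions rather than the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Tiny stand-ins for the paper's components: a small MLP instead of BERT,
# a linear head as the base classifier, and a linear adaptation network.
base = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
head = nn.Linear(16, 2)
n_base_params = sum(p.numel() for p in base.parameters())
adapt = nn.Linear(n_base_params, 2 * 16)   # maps task embedding -> FiLM scale/shift

def task_embedding(support_x, support_y):
    """Represent a task by the gradient of the support-set loss w.r.t. the base model."""
    loss = nn.functional.cross_entropy(head(base(support_x)), support_y)
    grads = torch.autograd.grad(loss, list(base.parameters()))
    return torch.cat([g.flatten() for g in grads])

def adapted_logits(query_x, task_emb):
    """Modulate the base features with task-conditioned scale and shift."""
    scale, shift = adapt(task_emb).chunk(2)
    feats = base(query_x) * (1 + scale) + shift
    return head(feats)

# A toy few-shot episode with random data.
support_x, support_y = torch.randn(8, 32), torch.randint(0, 2, (8,))
query_x = torch.randn(4, 32)
emb = task_embedding(support_x, support_y)
print(adapted_logits(query_x, emb).shape)   # torch.Size([4, 2])
```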
Learnability of Linear Thresholds from Label Proportions
https://papers.nips.cc/paper_files/paper/2021/hash/33b3214d792caf311e1f00fd22b392c5-Abstract.html
Rishi Saket
https://papers.nips.cc/paper_files/paper/2021/hash/33b3214d792caf311e1f00fd22b392c5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12125-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/33b3214d792caf311e1f00fd22b392c5-Paper.pdf
https://openreview.net/forum?id=5BnaKeEwuYk
https://papers.nips.cc/paper_files/paper/2021/file/33b3214d792caf311e1f00fd22b392c5-Supplemental.pdf
We study the problem of properly learning linear threshold functions (LTFs) in the learning from label proportions (LLP) framework. In this, the learning is on a collection of bags of feature-vectors with only the proportion of labels available for each bag. First, we provide an algorithm that, given a collection of such bags each of size at most two whose label proportions are consistent with (i.e., the bags are satisfied by) an unknown LTF, efficiently produces an LTF that satisfies at least $(2/5)$-fraction of the bags. If all the bags are non-monochromatic (i.e., bags of size two with differently labeled feature-vectors), the algorithm satisfies at least $(1/2)$-fraction of them. For the special case of OR over the $d$-dimensional boolean vectors, we give an algorithm which computes an LTF achieving an additional $\Omega(1/d)$ in accuracy for the two cases. Our main result provides evidence that these algorithmic bounds cannot be significantly improved, even for learning monotone ORs using LTFs. We prove that it is NP-hard, given a collection of non-monochromatic bags which are all satisfied by some monotone OR, to compute any function of constantly many LTFs that satisfies $(1/2 + \varepsilon)$-fraction of the bags for any constant $\varepsilon > 0$. This bound is tight for the non-monochromatic bags case. The above is in contrast to the usual supervised learning setup (i.e., unit-sized bags) in which LTFs are efficiently learnable to arbitrary accuracy using linear programming, and even a trivial algorithm (any LTF or its complement) achieves an accuracy of $1/2$. These techniques, however, fail in the LLP setting. Indeed, we show that the LLP learning of LTFs (even for the special case of monotone ORs) using LTFs dramatically increases in complexity as soon as bags of size two are allowed. Our work gives the first inapproximability for LLP learning LTFs, and a strong complexity separation between LLP and traditional supervised learning.
null
A variational approximate posterior for the deep Wishart process
https://papers.nips.cc/paper_files/paper/2021/hash/33ceb07bf4eeb3da587e268d663aba1a-Abstract.html
Sebastian Ober, Laurence Aitchison
https://papers.nips.cc/paper_files/paper/2021/hash/33ceb07bf4eeb3da587e268d663aba1a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12126-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/33ceb07bf4eeb3da587e268d663aba1a-Paper.pdf
https://openreview.net/forum?id=y7l4h5xtaqQ
https://papers.nips.cc/paper_files/paper/2021/file/33ceb07bf4eeb3da587e268d663aba1a-Supplemental.pdf
Recent work introduced deep kernel processes as an entirely kernel-based alternative to NNs (Aitchison et al. 2020). Deep kernel processes flexibly learn good top-layer representations by alternately sampling the kernel from a distribution over positive semi-definite matrices and performing nonlinear transformations. A particular deep kernel process, the deep Wishart process (DWP), is of particular interest because its prior can be made equivalent to deep Gaussian process (DGP) priors for kernels that can be expressed entirely in terms of Gram matrices. However, inference in DWPs has not yet been possible due to the lack of sufficiently flexible distributions over positive semi-definite matrices. Here, we give a novel approach to obtaining flexible distributions over positive semi-definite matrices by generalising the Bartlett decomposition of the Wishart probability density. We use this new distribution to develop an approximate posterior for the DWP that includes dependency across layers. We develop a doubly-stochastic inducing-point inference scheme for the DWP and show experimentally that inference in the DWP can improve performance over doing inference in a DGP with the equivalent prior.
null
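For background on the construction that the deep Wishart process abstract above generalises, here is a minimal NumPy sketch of sampling a Wishart matrix via the standard Bartlett decomposition; the paper's actual contribution (a generalised Bartlett-based variational posterior with cross-layer dependencies) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def bartlett_wishart_sample(df, Sigma):
    """Draw W ~ Wishart(df, Sigma) via the Bartlett decomposition W = L A A^T L^T,
    where L is the Cholesky factor of Sigma and A is lower triangular with
    A[i, i]^2 ~ chi^2(df - i) and standard normal entries below the diagonal."""
    p = Sigma.shape[0]
    L = np.linalg.cholesky(Sigma)
    A = np.tril(rng.normal(size=(p, p)), k=-1)
    A[np.arange(p), np.arange(p)] = np.sqrt(rng.chisquare(df - np.arange(p)))
    return L @ A @ A.T @ L.T

Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
samples = [bartlett_wishart_sample(df=10, Sigma=Sigma) for _ in range(20000)]
print(np.round(np.mean(samples, axis=0), 2))   # should be close to df * Sigma
```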
Neural Pseudo-Label Optimism for the Bank Loan Problem
https://papers.nips.cc/paper_files/paper/2021/hash/33d6548e48d4318ceb0e3916a79afc84-Abstract.html
Aldo Pacchiano, Shaun Singh, Edward Chou, Alex Berg, Jakob Foerster
https://papers.nips.cc/paper_files/paper/2021/hash/33d6548e48d4318ceb0e3916a79afc84-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12127-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/33d6548e48d4318ceb0e3916a79afc84-Paper.pdf
https://openreview.net/forum?id=jYzSTzvDP3p
https://papers.nips.cc/paper_files/paper/2021/file/33d6548e48d4318ceb0e3916a79afc84-Supplemental.pdf
We study a class of classification problems best exemplified by the "bank loan" problem, where a lender decides whether or not to issue a loan. The lender only observes whether a customer will repay a loan if the loan is issued to begin with, and thus modeled decisions affect what data is available to the lender for future decisions. As a result, it is possible for the lender's algorithm to "get stuck" with a self-fulfilling model. This model never corrects its false negatives, since it never sees the true label for rejected data, thus accumulating infinite regret. In the case of linear models, this issue can be addressed by adding optimism directly into the model predictions. However, there are few methods that extend to the function approximation case using Deep Neural Networks. We present Pseudo-Label Optimism (PLOT), a conceptually and computationally simple method for this setting applicable to DNNs. PLOT adds an optimistic label to the subset of decision points the current model is deciding on, trains the model on all data so far (including these points along with their optimistic labels), and finally uses the resulting optimistic model for decision making. PLOT achieves competitive performance on a set of three challenging benchmark problems, requiring minimal hyperparameter tuning. We also show that PLOT satisfies a logarithmic regret guarantee, under a Lipschitz and logistic mean label model, and under a separability condition on the data.
null
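The PLOT procedure summarised above lends itself to a compact sketch. The toy below uses scikit-learn logistic regression on synthetic two-dimensional applicants; the data model, acceptance rule details, and round count are illustrative assumptions, not the paper's benchmark setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def true_label(x):
    """Hidden ground truth: the applicant repays iff x[0] + x[1] > 0 (a toy assumption)."""
    return int(x[0] + x[1] > 0)

X_seen, y_seen = [], []   # labels are observed only for accepted applicants

for t in range(200):
    x = rng.normal(size=2)
    if len(set(y_seen)) < 2:
        accept = True   # bootstrap: accept until both label classes have been observed
    else:
        # PLOT-style optimism: temporarily add the current applicant with an
        # optimistic "will repay" pseudo-label, retrain, then use that model to decide.
        optimistic = LogisticRegression().fit(X_seen + [list(x)], y_seen + [1])
        accept = bool(optimistic.predict([x])[0])
    if accept:   # the true label is revealed only if the loan is issued
        X_seen.append(list(x))
        y_seen.append(true_label(x))

final = LogisticRegression().fit(X_seen, y_seen)
test = rng.normal(size=(1000, 2))
print(np.mean(final.predict(test) == [true_label(t) for t in test]))
```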
Visualizing the Emergence of Intermediate Visual Patterns in DNNs
https://papers.nips.cc/paper_files/paper/2021/hash/33ebd5b07dc7e407752fe773eed20635-Abstract.html
Mingjie Li, Shaobo Wang, Quanshi Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/33ebd5b07dc7e407752fe773eed20635-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12128-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/33ebd5b07dc7e407752fe773eed20635-Paper.pdf
https://openreview.net/forum?id=8gyF7P-kEud
https://papers.nips.cc/paper_files/paper/2021/file/33ebd5b07dc7e407752fe773eed20635-Supplemental.pdf
This paper proposes a method to visualize the discrimination power of intermediate-layer visual patterns encoded by a DNN. Specifically, we visualize (1) how the DNN gradually learns regional visual patterns in each intermediate layer during the training process, and (2) the effects of the DNN using non-discriminative patterns in low layers to construct discriminative patterns in middle/high layers through forward propagation. Based on our visualization method, we can quantify knowledge points (i.e., the number of discriminative visual patterns) learned by the DNN to evaluate the representation capacity of the DNN. Furthermore, this method also provides new insights into signal-processing behaviors of existing deep-learning techniques, such as adversarial attacks and knowledge distillation.
null
Learning 3D Dense Correspondence via Canonical Point Autoencoder
https://papers.nips.cc/paper_files/paper/2021/hash/3413ce14d52b87557e87e2c1518c2cbe-Abstract.html
An-Chieh Cheng, Xueting Li, Min Sun, Ming-Hsuan Yang, Sifei Liu
https://papers.nips.cc/paper_files/paper/2021/hash/3413ce14d52b87557e87e2c1518c2cbe-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12129-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3413ce14d52b87557e87e2c1518c2cbe-Paper.pdf
https://openreview.net/forum?id=vLVEZr_66Ik
https://papers.nips.cc/paper_files/paper/2021/file/3413ce14d52b87557e87e2c1518c2cbe-Supplemental.zip
We propose a canonical point autoencoder (CPAE) that predicts dense correspondences between 3D shapes of the same category. The autoencoder performs two key functions: (a) encoding an arbitrarily ordered point cloud to a canonical primitive, e.g., a sphere, and (b) decoding the primitive back to the original input instance shape. Placed at the bottleneck, this primitive plays a key role in mapping all the unordered point clouds onto the canonical surface so that they can be reconstructed in an ordered fashion. Once trained, points from different shape instances that are mapped to the same locations on the primitive surface are determined to be in correspondence. Our method does not require any form of annotation or self-supervised part segmentation network and can handle unaligned input point clouds within a certain rotation range. Experimental results on 3D semantic keypoint transfer and part segmentation transfer show that our model performs favorably against state-of-the-art correspondence learning methods.
null
Speech-T: Transducer for Text to Speech and Beyond
https://papers.nips.cc/paper_files/paper/2021/hash/344ef5151be171062f42f03e69663ecf-Abstract.html
Jiawei Chen, Xu Tan, Yichong Leng, Jin Xu, Guihua Wen, Tao Qin, Tie-Yan Liu
https://papers.nips.cc/paper_files/paper/2021/hash/344ef5151be171062f42f03e69663ecf-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12130-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/344ef5151be171062f42f03e69663ecf-Paper.pdf
https://openreview.net/forum?id=jyMpZyqrvYz
https://papers.nips.cc/paper_files/paper/2021/file/344ef5151be171062f42f03e69663ecf-Supplemental.pdf
Neural Transducer (e.g., RNN-T) has been widely used in automatic speech recognition (ASR) due to its capabilities of efficiently modeling monotonic alignments between input and output sequences and naturally supporting streaming inputs. Considering that monotonic alignments are also critical to text to speech (TTS) synthesis and streaming TTS is also an important application scenario, in this work, we explore the possibility of applying Transducer to TTS and more. However, it is challenging because it is difficult to trade off the emission (continuous mel-spectrogram prediction) probability and transition (ASR Transducer predicts blank token to indicate transition to next input) probability when calculating the output probability lattice in Transducer, and it is not easy to learn the alignments between text and speech through the output probability lattice. We propose SpeechTransducer (Speech-T for short), a Transformer based Transducer model that 1) uses a new forward algorithm to separate the transition prediction from the continuous mel-spectrogram prediction when calculating the output probability lattice, and uses a diagonal constraint in the probability lattice to help the alignment learning; 2) supports both full-sentence or streaming TTS by adjusting the look-ahead context; and 3) further supports both TTS and ASR together for the first time, which enjoys several advantages including fewer parameters as well as streaming synthesis and recognition in a single model. Experiments on LJSpeech datasets demonstrate that Speech-T 1) is more robust than the attention based autoregressive TTS model due to its inherent monotonic alignments between text and speech; 2) naturally supports streaming TTS with good voice quality; and 3) enjoys the benefit of joint modeling TTS and ASR in a single network.
null
Multi-modal Dependency Tree for Video Captioning
https://papers.nips.cc/paper_files/paper/2021/hash/3473decccb0509fb264818a7512a8b9b-Abstract.html
Wentian Zhao, Xinxiao Wu, Jiebo Luo
https://papers.nips.cc/paper_files/paper/2021/hash/3473decccb0509fb264818a7512a8b9b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12131-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3473decccb0509fb264818a7512a8b9b-Paper.pdf
https://openreview.net/forum?id=sW40wkwfsZp
https://papers.nips.cc/paper_files/paper/2021/file/3473decccb0509fb264818a7512a8b9b-Supplemental.pdf
Generating fluent and relevant language to describe visual content is critical for the video captioning task. Many existing methods generate captions using sequence models that predict words in a left-to-right order. In this paper, we investigate a graph-structured model for caption generation by explicitly modeling the hierarchical structure in the sentences to further improve the fluency and relevance of sentences. To this end, we propose a novel video captioning method that generates a sentence by first constructing a multi-modal dependency tree and then traversing the constructed tree, where the syntactic structure and semantic relationship in the sentence are represented by the tree topology. To take full advantage of the information from both vision and language, both the visual and textual representation features are encoded into each tree node. Different from existing dependency parsing methods that generate uni-modal dependency trees for language understanding, our method constructs multi-modal dependency trees for language generation of images and videos. We also propose a tree-structured reinforcement learning algorithm to effectively optimize the captioning model, where a novel reward is designed by evaluating the semantic consistency between the generated sub-tree and the ground-truth tree. Extensive experiments on several video captioning datasets demonstrate the effectiveness of the proposed method.
null
Greedy and Random Quasi-Newton Methods with Faster Explicit Superlinear Convergence
https://papers.nips.cc/paper_files/paper/2021/hash/347665597cbfaef834886adbb848011f-Abstract.html
Dachao Lin, Haishan Ye, Zhihua Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/347665597cbfaef834886adbb848011f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12132-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/347665597cbfaef834886adbb848011f-Paper.pdf
https://openreview.net/forum?id=cI4c6OpwIKq
https://papers.nips.cc/paper_files/paper/2021/file/347665597cbfaef834886adbb848011f-Supplemental.pdf
In this paper, we follow Rodomanov and Nesterov’s work to study quasi-Newton methods. We focus on the common SR1 and BFGS quasi-Newton methods to establish better explicit (local) superlinear convergence rates. First, based on the greedy quasi-Newton update, which greedily selects the direction to maximize a certain measure of progress, we improve the convergence rate to a condition-number-free superlinear convergence rate. Second, based on the random quasi-Newton update that selects the direction randomly from a spherically symmetric distribution, we establish the same superlinear convergence rate as above. Our analysis covers the approximation of a given Hessian matrix, unconstrained quadratic objectives, and general strongly convex, smooth, and strongly self-concordant functions.
null
Neural Tangent Kernel Maximum Mean Discrepancy
https://papers.nips.cc/paper_files/paper/2021/hash/348a38cd25abeab0e440f37510e9b1fa-Abstract.html
Xiuyuan Cheng, Yao Xie
https://papers.nips.cc/paper_files/paper/2021/hash/348a38cd25abeab0e440f37510e9b1fa-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12133-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/348a38cd25abeab0e440f37510e9b1fa-Paper.pdf
https://openreview.net/forum?id=i2pFtDzmPL6
null
We present a novel neural network Maximum Mean Discrepancy (MMD) statistic by identifying a new connection between the neural tangent kernel (NTK) and MMD. This connection enables us to develop a computationally efficient and memory-efficient approach to compute the MMD statistic and perform NTK-based two-sample tests, addressing the long-standing challenge of the memory and computational complexity of the MMD statistic, which is essential for online implementations that assimilate new samples. Theoretically, such a connection allows us to understand the properties of the NTK test statistic, such as the Type-I error and testing power for performing the two-sample test, by adapting existing theories for kernel MMD. Numerical experiments on synthetic and real-world datasets validate the theory and demonstrate the effectiveness of the proposed NTK-MMD statistic.
null
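For reference, the core quantity in the NTK-MMD abstract above is the squared maximum mean discrepancy. The sketch below computes an unbiased estimate with a placeholder Gaussian kernel; the paper instead plugs in the neural tangent kernel and adds the online/efficiency machinery, which is not shown here.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    """Placeholder kernel; the paper's statistic uses the neural tangent kernel instead."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_unbiased(x, y, kernel=gaussian_kernel):
    """Unbiased estimate of the squared MMD between two samples."""
    kxx, kyy, kxy = kernel(x, x), kernel(y, y), kernel(x, y)
    n, m = len(x), len(y)
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * kxy.mean()

rng = np.random.default_rng(0)
x  = rng.normal(0.0, 1.0, size=(200, 5))
x2 = rng.normal(0.0, 1.0, size=(200, 5))   # same distribution as x
y  = rng.normal(0.5, 1.0, size=(200, 5))   # shifted distribution
print(mmd2_unbiased(x, x2), mmd2_unbiased(x, y))   # near zero vs. clearly positive
```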
Subgraph Federated Learning with Missing Neighbor Generation
https://papers.nips.cc/paper_files/paper/2021/hash/34adeb8e3242824038aa65460a47c29e-Abstract.html
Ke ZHANG, Carl Yang, Xiaoxiao Li, Lichao Sun, Siu Ming Yiu
https://papers.nips.cc/paper_files/paper/2021/hash/34adeb8e3242824038aa65460a47c29e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12134-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/34adeb8e3242824038aa65460a47c29e-Paper.pdf
https://openreview.net/forum?id=SJHRf5nW93
https://papers.nips.cc/paper_files/paper/2021/file/34adeb8e3242824038aa65460a47c29e-Supplemental.pdf
Graphs have been widely used in data mining and machine learning due to their unique representation of real-world objects and their interactions. As graphs are getting bigger and bigger nowadays, it is common to see their subgraphs separately collected and stored in multiple local systems. Therefore, it is natural to consider the subgraph federated learning setting, where each local system holds a small subgraph that may be biased from the distribution of the whole graph. Hence, subgraph federated learning aims to collaboratively train a powerful and generalizable graph mining model without directly sharing their graph data. In this work, towards the novel yet realistic setting of subgraph federated learning, we propose two major techniques: (1) FedSage, which trains a GraphSage model based on FedAvg to integrate node features, link structures, and task labels on multiple local subgraphs; (2) FedSage+, which trains a missing neighbor generator alongside FedSage to deal with missing links across local subgraphs. Empirical results on four real-world graph datasets with synthesized subgraph federated learning settings demonstrate the effectiveness and efficiency of our proposed techniques. At the same time, consistent theoretical implications are derived for their generalization ability on the global graphs.
null
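The FedAvg backbone that FedSage builds on can be sketched in a few lines. The toy below averages linear-regression "clients" weighted by their data sizes; the GraphSage model, the graph data, and FedSage+'s missing-neighbor generator are all omitted, and the client setup is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
w_true = np.array([1.0, -2.0, 0.5, 0.0])

def make_client(n=50):
    """A hypothetical client holding a private linear-regression dataset."""
    X = rng.normal(size=(n, d))
    return X, X @ w_true + 0.1 * rng.normal(size=n)

def local_update(global_w, client, lr=0.1, steps=5):
    """A few local gradient steps starting from the current global model."""
    X, y = client
    w = global_w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w, len(y)

def fedavg_round(global_w, clients):
    """One FedAvg round: data-size-weighted average of the locally updated models."""
    updates = [local_update(global_w, c) for c in clients]
    sizes = np.array([n for _, n in updates], dtype=float)
    stacked = np.stack([w for w, _ in updates])
    return (stacked * (sizes / sizes.sum())[:, None]).sum(0)

clients = [make_client() for _ in range(3)]
w = np.zeros(d)
for _ in range(20):
    w = fedavg_round(w, clients)
print(np.round(w, 2))   # close to w_true
```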
Bellman-consistent Pessimism for Offline Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/34f98c7c5d7063181da890ea8d25265a-Abstract.html
Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, Alekh Agarwal
https://papers.nips.cc/paper_files/paper/2021/hash/34f98c7c5d7063181da890ea8d25265a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12135-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/34f98c7c5d7063181da890ea8d25265a-Paper.pdf
https://openreview.net/forum?id=e8WWUBeafM
https://papers.nips.cc/paper_files/paper/2021/file/34f98c7c5d7063181da890ea8d25265a-Supplemental.pdf
The use of pessimism when reasoning about datasets lacking exhaustive exploration has recently gained prominence in offline reinforcement learning. Despite the robustness it adds to the algorithm, overly pessimistic reasoning can be equally damaging in precluding the discovery of good policies, which is an issue for the popular bonus-based pessimism. In this paper, we introduce the notion of Bellman-consistent pessimism for general function approximation: instead of calculating a point-wise lower bound for the value function, we implement pessimism at the initial state over the set of functions consistent with the Bellman equations. Our theoretical guarantees only require Bellman closedness, as is standard in the exploratory setting, in which case bonus-based pessimism fails to provide guarantees. Even in the special case of linear function approximation where stronger expressivity assumptions hold, our result improves upon a recent bonus-based approach by $\mathcal O(d)$ in its sample complexity (when the action space is finite). Remarkably, our algorithms automatically adapt to the best bias-variance tradeoff in hindsight, whereas most prior approaches require tuning extra hyperparameters a priori.
null
Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks
https://papers.nips.cc/paper_files/paper/2021/hash/3501672ebc68a5524629080e3ef60aef-Abstract.html
Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Furong Huang, Uzi Vishkin, Micah Goldblum, Tom Goldstein
https://papers.nips.cc/paper_files/paper/2021/hash/3501672ebc68a5524629080e3ef60aef-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12136-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3501672ebc68a5524629080e3ef60aef-Paper.pdf
https://openreview.net/forum?id=Tsp2PL7-GQ
https://papers.nips.cc/paper_files/paper/2021/file/3501672ebc68a5524629080e3ef60aef-Supplemental.pdf
Deep neural networks are powerful machines for visual pattern recognition, but reasoning tasks that are easy for humans may still be difficult for neural models. Humans possess the ability to extrapolate reasoning strategies learned on simple problems to solve harder examples, often by thinking for longer. For example, a person who has learned to solve small mazes can easily extend the very same search techniques to solve much larger mazes by spending more time. In computers, this behavior is often achieved through the use of algorithms, which scale to arbitrarily hard problem instances at the cost of more computation. In contrast, the sequential computing budget of feed-forward neural networks is limited by their depth, and networks trained on simple problems have no way of extending their reasoning to accommodate harder problems. In this work, we show that recurrent networks trained to solve simple problems with few recurrent steps can indeed solve much more complex problems simply by performing additional recurrences during inference. We demonstrate this algorithmic behavior of recurrent networks on prefix sum computation, mazes, and chess. In all three domains, networks trained on simple problem instances are able to extend their reasoning abilities at test time simply by "thinking for longer."
null
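To make the "thinking for longer" idea above concrete, the sketch below uses a hand-coded, weight-tied update that propagates reachability by one maze cell per application; the actual paper learns this step with a recurrent network, so the update rule and maze setup here are only stand-ins.

```python
import numpy as np

def step(reachable, free):
    """One application of a fixed, weight-tied update: reachability spreads by one cell.
    (The paper learns this step with a recurrent network; this rule is a stand-in.)"""
    grown = reachable.copy()
    grown[1:, :]  |= reachable[:-1, :]
    grown[:-1, :] |= reachable[1:, :]
    grown[:, 1:]  |= reachable[:, :-1]
    grown[:, :-1] |= reachable[:, 1:]
    return grown & free

def solve(maze, start, goal, iterations):
    free = maze == 0
    reach = np.zeros_like(free)
    reach[start] = True
    for _ in range(iterations):   # "thinking for longer" = more recurrences at test time
        reach = step(reach, free)
    return bool(reach[goal])

small = np.zeros((5, 5), dtype=int)
large = np.zeros((50, 50), dtype=int)
print(solve(small, (0, 0), (4, 4), iterations=10))     # enough iterations: True
print(solve(large, (0, 0), (49, 49), iterations=10))   # too few for a larger maze: False
print(solve(large, (0, 0), (49, 49), iterations=120))  # extra test-time iterations: True
```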
Sub-Linear Memory: How to Make Performers SLiM
https://papers.nips.cc/paper_files/paper/2021/hash/35309226eb45ec366ca86a4329a2b7c3-Abstract.html
Valerii Likhosherstov, Krzysztof M. Choromanski, Jared Quincy Davis, Xingyou Song, Adrian Weller
https://papers.nips.cc/paper_files/paper/2021/hash/35309226eb45ec366ca86a4329a2b7c3-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12137-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/35309226eb45ec366ca86a4329a2b7c3-Paper.pdf
https://openreview.net/forum?id=OfdQxpZbQMB
https://papers.nips.cc/paper_files/paper/2021/file/35309226eb45ec366ca86a4329a2b7c3-Supplemental.pdf
Transformer architectures have become very popular yet the original implementation requires $O(L^2)$ in serial time and memory as functions of input length $L$. Recent works proposed various linear self-attention mechanisms, scaling only as $O(L)$ for serial computation. We conduct a thorough complexity analysis of Performers, a class which includes most recent linear Transformer mechanisms. We note a remarkable computational flexibility: the gradient computation can be performed with no approximations using sublinear memory as a function of $L$ (in addition to negligible storage for the input sequence), at a cost of greater time complexity in the parallel setting. In the extreme case, a Performer consumes only $O(1)$ memory, and still requires $O(L)$ time. Due to complete backward-compatibility, this discovered time-memory tradeoff can be used for fine-tuning on low-memory devices in a decentralized fashion without any server computations.
null
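The abstract above is about memory use during backpropagation; as background, the sketch below shows the small running state carried by causal linear (Performer-style) attention in the forward pass, with a simplified elementwise feature map standing in for the actual random-feature mechanism. The recomputation scheme that yields sub-linear memory for gradients is not shown.

```python
import numpy as np

def causal_linear_attention(Q, K, V, phi=np.exp):
    """Causal linear attention computed with running prefix sums. The carried state
    (S, z) has size d * d_v + d regardless of sequence length L; phi is a simplified
    elementwise feature map, not the Performer's random-feature construction."""
    L, d = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d, d_v))   # running sum of phi(k_j) v_j^T
    z = np.zeros(d)          # running sum of phi(k_j)
    out = np.zeros((L, d_v))
    for i in range(L):
        q, k, v = phi(Q[i]), phi(K[i]), V[i]
        S += np.outer(k, v)
        z += k
        out[i] = (q @ S) / (q @ z + 1e-9)
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(16, 8)) for _ in range(3))
print(causal_linear_attention(Q, K, V).shape)   # (16, 8)
```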
Efficient Learning of Discrete-Continuous Computation Graphs
https://papers.nips.cc/paper_files/paper/2021/hash/3556a3018cce3076e27dbbf9645b44d5-Abstract.html
David Friede, Mathias Niepert
https://papers.nips.cc/paper_files/paper/2021/hash/3556a3018cce3076e27dbbf9645b44d5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12138-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3556a3018cce3076e27dbbf9645b44d5-Paper.pdf
https://openreview.net/forum?id=TLIHuw3gcQB
https://papers.nips.cc/paper_files/paper/2021/file/3556a3018cce3076e27dbbf9645b44d5-Supplemental.pdf
Numerous models for supervised and reinforcement learning benefit from combinations of discrete and continuous model components. End-to-end learnable discrete-continuous models are compositional, tend to generalize better, and are more interpretable. A popular approach to building discrete-continuous computation graphs is that of integrating discrete probability distributions into neural networks using stochastic softmax tricks. Prior work has mainly focused on computation graphs with a single discrete component on each of the graph's execution paths. We analyze the behavior of more complex stochastic computation graphs with multiple sequential discrete components. We show that it is challenging to optimize the parameters of these models, mainly due to small gradients and local minima. We then propose two new strategies to overcome these challenges. First, we show that increasing the scale parameter of the Gumbel noise perturbations during training improves the learning behavior. Second, we propose dropout residual connections specifically tailored to stochastic, discrete-continuous computation graphs. With an extensive set of experiments, we show that we can train complex discrete-continuous models which one cannot train with standard stochastic softmax tricks. We also show that complex discrete-stochastic models generalize better than their continuous counterparts on several benchmark datasets.
null
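A minimal sketch of a Gumbel-softmax relaxation with an explicit noise-scale knob, which is the quantity the abstract above proposes to increase during training; the temperature, logits, and scales used here are arbitrary, and the proposed dropout residual connections are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0, noise_scale=1.0):
    """Relaxed categorical sample; `noise_scale` multiplies the Gumbel perturbation.
    The specific values and any schedule for increasing the scale are hypothetical."""
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))
    y = (logits + noise_scale * gumbel) / tau
    y = y - y.max()   # numerically stable softmax
    e = np.exp(y)
    return e / e.sum()

logits = np.array([2.0, 0.5, 0.1])
for scale in (0.5, 1.0, 2.0):
    print(scale, gumbel_softmax(logits, noise_scale=scale).round(3))
```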
VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization
https://papers.nips.cc/paper_files/paper/2021/hash/3569df159ec477451530c4455b2a9e86-Abstract.html
Mucong Ding, Kezhi Kong, Jingling Li, Chen Zhu, John Dickerson, Furong Huang, Tom Goldstein
https://papers.nips.cc/paper_files/paper/2021/hash/3569df159ec477451530c4455b2a9e86-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12139-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3569df159ec477451530c4455b2a9e86-Paper.pdf
https://openreview.net/forum?id=EO-CQzgcIxd
https://papers.nips.cc/paper_files/paper/2021/file/3569df159ec477451530c4455b2a9e86-Supplemental.pdf
Most state-of-the-art Graph Neural Networks (GNNs) can be defined as a form of graph convolution which can be realized by message passing between direct neighbors or beyond. To scale such GNNs to large graphs, various neighbor-, layer-, or subgraph-sampling techniques have been proposed to alleviate the "neighbor explosion" problem by considering only a small subset of messages passed to the nodes in a mini-batch. However, sampling-based methods are difficult to apply to GNNs that utilize many-hops-away or global context in each layer, show unstable performance for different tasks and datasets, and do not speed up model inference. We propose a principled and fundamentally different approach, VQ-GNN, a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance. In contrast to sampling-based techniques, our approach can effectively preserve all the messages passed to a mini-batch of nodes by learning and updating a small number of quantized reference vectors of global node representations, using VQ within each GNN layer. Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix. We show that such a compact low-rank version of the gigantic convolution matrix is sufficient both theoretically and experimentally. Together with VQ, we design a novel approximated message passing algorithm and a nontrivial back-propagation rule for our framework. Experiments on various types of GNN backbones demonstrate the scalability and competitive performance of our framework on large-graph node classification and link prediction benchmarks.
null
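As a rough picture of the vector-quantization ingredient above, the sketch below assigns node representations to their nearest codebook entries and refreshes the codebook with an exponential moving average; this EMA update is one common choice rather than the paper's exact scheme, and the low-rank message passing and back-propagation rule are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

class VectorQuantizer:
    """Maps node representations to their nearest codebook vector and refreshes the
    codebook with an exponential moving average (one common VQ variant)."""

    def __init__(self, num_codes, dim, decay=0.9):
        self.codebook = rng.normal(size=(num_codes, dim))
        self.decay = decay

    def __call__(self, h):
        d2 = ((h[:, None, :] - self.codebook[None, :, :]) ** 2).sum(-1)
        codes = d2.argmin(axis=1)            # nearest code per node
        for c in np.unique(codes):           # EMA codebook update
            mean_c = h[codes == c].mean(axis=0)
            self.codebook[c] = self.decay * self.codebook[c] + (1 - self.decay) * mean_c
        return self.codebook[codes], codes

vq = VectorQuantizer(num_codes=8, dim=16)
node_feats = rng.normal(size=(100, 16))      # a mini-batch of node representations
quantized, codes = vq(node_feats)
print(quantized.shape, np.bincount(codes, minlength=8))
```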
Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima
https://papers.nips.cc/paper_files/paper/2021/hash/357cfba15668cc2e1e73111e09d54383-Abstract.html
Guangyuan SHI, JIAXIN CHEN, Wenlong Zhang, Li-Ming Zhan, Xiao-Ming Wu
https://papers.nips.cc/paper_files/paper/2021/hash/357cfba15668cc2e1e73111e09d54383-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12140-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/357cfba15668cc2e1e73111e09d54383-Paper.pdf
https://openreview.net/forum?id=ALvt7nXa2q
https://papers.nips.cc/paper_files/paper/2021/file/357cfba15668cc2e1e73111e09d54383-Supplemental.pdf
This paper considers incremental few-shot learning, which requires a model to continually recognize new categories with only a few examples provided. Our study shows that existing methods severely suffer from catastrophic forgetting, a well-known problem in incremental learning, which is aggravated due to data scarcity and imbalance in the few-shot setting. Our analysis further suggests that to prevent catastrophic forgetting, actions need to be taken in the primitive stage -- the training of base classes instead of later few-shot learning sessions. Therefore, we propose to search for flat local minima of the base training objective function and then fine-tune the model parameters within the flat region on new tasks. In this way, the model can efficiently learn new classes while preserving the old ones. Comprehensive experimental results demonstrate that our approach outperforms all prior state-of-the-art methods and is very close to the approximate upper bound. The source code is available at https://github.com/moukamisama/F2M.
null
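A toy illustration of why a noise-averaged objective prefers flat minima, which is the intuition behind the base-training procedure above; the 1-D loss, noise radius, and sampling are contrived for illustration and differ from the paper's actual weight-space procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(x):
    """Toy 1-D loss with a sharp minimum at x = -1 and a flat minimum at x = 2."""
    return np.minimum(50 * (x + 1) ** 2, 0.05 * (x - 2) ** 2)

def noise_averaged_loss(x, radius=0.3, samples=1000):
    """Expected loss under random parameter perturbations, the kind of objective
    that prefers flat minima (the paper perturbs network weights during base training)."""
    noise = rng.uniform(-radius, radius, size=samples)
    return loss(x + noise).mean()

for x_star, name in [(-1.0, "sharp minimum"), (2.0, "flat minimum")]:
    print(f"{name}: loss={loss(x_star):.3f}, noise-averaged loss={noise_averaged_loss(x_star):.3f}")
# Both minima have zero loss, but only the flat one stays low under perturbation.
```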
Functional Neural Networks for Parametric Image Restoration Problems
https://papers.nips.cc/paper_files/paper/2021/hash/35936504a37d53e03abdfbc7318d9ec7-Abstract.html
Fangzhou Luo, Xiaolin Wu, Yanhui Guo
https://papers.nips.cc/paper_files/paper/2021/hash/35936504a37d53e03abdfbc7318d9ec7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12141-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/35936504a37d53e03abdfbc7318d9ec7-Paper.pdf
https://openreview.net/forum?id=MMZ4djXrwbu
null
Almost every single image restoration problem has a closely related parameter, such as the scale factor in super-resolution, the noise level in image denoising, and the quality factor in JPEG deblocking. Although recent studies on image restoration problems have achieved great success due to the development of deep neural networks, they handle the parameter involved in an unsophisticated way. Most previous researchers either treat problems with different parameter levels as independent tasks, and train a specific model for each parameter level; or simply ignore the parameter, and train a single model for all parameter levels. The two popular approaches have their own shortcomings. The former is computationally inefficient, and the latter is ineffective in performance. In this work, we propose a novel system called functional neural network (FuncNet) to solve a parametric image restoration problem with a single model. Unlike a plain neural network, the smallest conceptual element of our FuncNet is no longer a floating-point variable, but a function of the parameter of the problem. This feature makes it both efficient and effective for a parametric problem. We apply FuncNet to super-resolution, image denoising, and JPEG deblocking. The experimental results show the superiority of our FuncNet on all three parametric image restoration tasks over the state of the art.
null
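A minimal sketch of the "weights as functions of the problem parameter" idea above: a linear layer whose weight matrix is an affine function of the noise level, so a single model serves every parameter level. The affine basis is an assumption; FuncNet's actual parameterisation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

class FunctionalLinear:
    """A linear layer whose weights are functions of a problem parameter sigma
    (e.g. the noise level in denoising). Here each weight is affine in sigma;
    the basis is an assumption, not necessarily FuncNet's parameterisation."""

    def __init__(self, d_in, d_out):
        self.W0 = 0.1 * rng.normal(size=(d_out, d_in))   # parameter-independent part
        self.W1 = 0.1 * rng.normal(size=(d_out, d_in))   # part scaled by the parameter

    def __call__(self, x, sigma):
        W = self.W0 + sigma * self.W1   # weights generated for this parameter level
        return x @ W.T

layer = FunctionalLinear(d_in=8, d_out=4)
x = rng.normal(size=(2, 8))
for sigma in (0.1, 0.5, 1.0):   # one model serves all parameter levels
    print(sigma, layer(x, sigma).shape)
```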
Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/35a12c43227f217207d4e06ffefe39d3-Abstract.html
Tolga Birdal, Aaron Lou, Leonidas J. Guibas, Umut Simsekli
https://papers.nips.cc/paper_files/paper/2021/hash/35a12c43227f217207d4e06ffefe39d3-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12142-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/35a12c43227f217207d4e06ffefe39d3-Paper.pdf
https://openreview.net/forum?id=099uYP0EKsJ
https://papers.nips.cc/paper_files/paper/2021/file/35a12c43227f217207d4e06ffefe39d3-Supplemental.pdf
Disobeying the classical wisdom of statistical learning theory, modern deep neural networks generalize well even though they typically contain millions of parameters. Recently, it has been shown that the trajectories of iterative optimization algorithms can possess fractal structures, and their generalization error can be formally linked to the complexity of such fractals. This complexity is measured by the fractal's intrinsic dimension, a quantity usually much smaller than the number of parameters in the network. Even though this perspective provides an explanation for why overparametrized networks would not overfit, computing the intrinsic dimension (e.g., for monitoring generalization during training) is a notoriously difficult task, where existing methods typically fail even in moderate ambient dimensions. In this study, we consider this problem through the lens of topological data analysis (TDA) and develop a generic computational tool that is built on rigorous mathematical foundations. By making a novel connection between learning theory and TDA, we first illustrate that the generalization error can be equivalently bounded in terms of a notion called the 'persistent homology dimension' (PHD), where, compared with prior work, our approach does not require any additional geometrical or statistical assumptions on the training dynamics. Then, by utilizing recently established theoretical results and TDA tools, we develop an efficient algorithm to estimate PHD at the scale of modern deep neural networks and further provide visualization tools to help understand generalization in deep learning. Our experiments show that the proposed approach can efficiently compute a network's intrinsic dimension in a variety of settings, which is predictive of the generalization error.
null
GemNet: Universal Directional Graph Neural Networks for Molecules
https://papers.nips.cc/paper_files/paper/2021/hash/35cf8659cfcb13224cbd47863a34fc58-Abstract.html
Johannes Gasteiger, Florian Becker, Stephan Günnemann
https://papers.nips.cc/paper_files/paper/2021/hash/35cf8659cfcb13224cbd47863a34fc58-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12143-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/35cf8659cfcb13224cbd47863a34fc58-Paper.pdf
https://openreview.net/forum?id=HS_sOaxS9K-
https://papers.nips.cc/paper_files/paper/2021/file/35cf8659cfcb13224cbd47863a34fc58-Supplemental.pdf
Effectively predicting molecular interactions has the potential to accelerate molecular dynamics by multiple orders of magnitude and thus revolutionize chemical simulations. Graph neural networks (GNNs) have recently shown great successes for this task, overtaking classical methods based on fixed molecular kernels. However, they still appear very limited from a theoretical perspective, since regular GNNs cannot distinguish certain types of graphs. In this work we close this gap between theory and practice. We show that GNNs with directed edge embeddings and two-hop message passing are indeed universal approximators for predictions that are invariant to translation, and equivariant to permutation and rotation. We then leverage these insights and multiple structural improvements to propose the geometric message passing neural network (GemNet). We demonstrate the benefits of the proposed changes in multiple ablation studies. GemNet outperforms previous models on the COLL, MD17, and OC20 datasets by 34%, 41%, and 20%, respectively, and performs especially well on the most challenging molecules. Our implementation is available online.
null
Loss function based second-order Jensen inequality and its application to particle variational inference
https://papers.nips.cc/paper_files/paper/2021/hash/36165c62f7b7df72863d470d73302627-Abstract.html
Futoshi Futami, Tomoharu Iwata, naonori ueda, Issei Sato, Masashi Sugiyama
https://papers.nips.cc/paper_files/paper/2021/hash/36165c62f7b7df72863d470d73302627-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12144-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/36165c62f7b7df72863d470d73302627-Paper.pdf
https://openreview.net/forum?id=St4i_-UoQWQ
https://papers.nips.cc/paper_files/paper/2021/file/36165c62f7b7df72863d470d73302627-Supplemental.pdf
Bayesian model averaging, obtained as the expectation of a likelihood function by a posterior distribution, has been widely used for prediction, evaluation of uncertainty, and model selection. Various approaches have been developed to efficiently capture the information in the posterior distribution; one such approach is the optimization of a set of models simultaneously with interaction to ensure the diversity of the individual models in the same way as ensemble learning. A representative approach is particle variational inference (PVI), which uses an ensemble of models as an empirical approximation for the posterior distribution. PVI iteratively updates each model with a repulsion force to ensure the diversity of the optimized models. However, despite its promising performance, a theoretical understanding of this repulsion and its association with the generalization ability remains unclear. In this paper, we tackle this problem in light of PAC-Bayesian analysis. First, we provide a new second-order Jensen inequality, which has the repulsion term based on the loss function. Thanks to the repulsion term, it is tighter than the standard Jensen inequality. Then, we derive a novel generalization error bound and show that it can be reduced by enhancing the diversity of models. Finally, we derive a new PVI that optimizes the generalization error bound directly. Numerical experiments demonstrate that the performance of the proposed PVI compares favorably with existing methods in the experiment.
null
Detecting and Adapting to Irregular Distribution Shifts in Bayesian Online Learning
https://papers.nips.cc/paper_files/paper/2021/hash/362387494f6be6613daea643a7706a42-Abstract.html
Aodong Li, Alex Boyd, Padhraic Smyth, Stephan Mandt
https://papers.nips.cc/paper_files/paper/2021/hash/362387494f6be6613daea643a7706a42-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12145-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/362387494f6be6613daea643a7706a42-Paper.pdf
https://openreview.net/forum?id=-440wKL2oJV
https://papers.nips.cc/paper_files/paper/2021/file/362387494f6be6613daea643a7706a42-Supplemental.pdf
We consider the problem of online learning in the presence of distribution shifts that occur at an unknown rate and of unknown intensity. We derive a new Bayesian online inference approach to simultaneously infer these distribution shifts and adapt the model to the detected changes by integrating ideas from change point detection, switching dynamical systems, and Bayesian online learning. Using a binary ‘change variable,’ we construct an informative prior such that--if a change is detected--the model partially erases the information of past model updates by tempering to facilitate adaptation to the new data distribution. Furthermore, the approach uses beam search to track multiple change-point hypotheses and selects the most probable one in hindsight. Our proposed method is model-agnostic, applicable in both supervised and unsupervised learning settings, suitable for an environment of concept drifts or covariate drifts, and yields improvements over state-of-the-art Bayesian online learning approaches.
null
Asynchronous Decentralized SGD with Quantized and Local Updates
https://papers.nips.cc/paper_files/paper/2021/hash/362c99307cdc3f2d8b410652386a9dd1-Abstract.html
Giorgi Nadiradze, Amirmojtaba Sabour, Peter Davies, Shigang Li, Dan Alistarh
https://papers.nips.cc/paper_files/paper/2021/hash/362c99307cdc3f2d8b410652386a9dd1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12146-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/362c99307cdc3f2d8b410652386a9dd1-Paper.pdf
https://openreview.net/forum?id=9x10Q5J8e9W
https://papers.nips.cc/paper_files/paper/2021/file/362c99307cdc3f2d8b410652386a9dd1-Supplemental.pdf
Decentralized optimization is emerging as a viable alternative for scalable distributed machine learning, but also introduces new challenges in terms of synchronization costs. To this end, several communication-reduction techniques, such as non-blocking communication, quantization, and local steps, have been explored in the decentralized setting. Due to the complexity of analyzing optimization in such a relaxed setting, this line of work often assumes global communication rounds, which require additional synchronization. In this paper, we consider decentralized optimization in the simpler, but harder to analyze, asynchronous gossip model, in which communication occurs in discrete, randomly chosen pairings among nodes. Perhaps surprisingly, we show that a variant of SGD called SwarmSGD still converges in this setting, even if non-blocking communication, quantization, and local steps are all applied in conjunction, and even if the node data distributions and underlying graph topology are both heterogeneous. Our analysis is based on a new connection with multi-dimensional load-balancing processes. We implement this algorithm and deploy it in a super-computing environment, showing that it can outperform previous decentralized methods in terms of end-to-end training time, and that it can even rival carefully-tuned large-batch SGD for certain tasks.
null
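A toy serial simulation of the asynchronous gossip pattern above: randomly chosen pairs of nodes run a few local SGD steps and then average their (crudely quantized) models. The quantizer, local objective, and step counts are placeholders rather than the paper's protocol, and true asynchrony is only mimicked.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_nodes = 4, 8
w_true = np.array([1.0, -2.0, 0.5, 0.0])

def make_node_data(n=40):
    X = rng.normal(size=(n, d))
    return X, X @ w_true + 0.1 * rng.normal(size=n)

def quantize(v, levels=16):
    """Crude uniform quantizer standing in for the paper's quantization scheme."""
    scale = np.abs(v).max() / levels + 1e-12
    return np.round(v / scale) * scale

def local_sgd(w, data, lr=0.05, steps=3):
    X, y = data
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

data = [make_node_data() for _ in range(n_nodes)]
models = [np.zeros(d) for _ in range(n_nodes)]

for _ in range(400):                            # gossip: random pairwise interactions
    i, j = rng.choice(n_nodes, size=2, replace=False)
    wi, wj = local_sgd(models[i], data[i]), local_sgd(models[j], data[j])
    avg = quantize((wi + wj) / 2)               # the pair averages their quantized models
    models[i], models[j] = avg, avg
print(np.round(np.mean(models, axis=0), 2))     # close to w_true
```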
Stochastic Shortest Path: Minimax, Parameter-Free and Towards Horizon-Free Regret
https://papers.nips.cc/paper_files/paper/2021/hash/367147f1755502d9bc6189f8e2c3005d-Abstract.html
Jean Tarbouriech, Runlong Zhou, Simon S. Du, Matteo Pirotta, Michal Valko, Alessandro Lazaric
https://papers.nips.cc/paper_files/paper/2021/hash/367147f1755502d9bc6189f8e2c3005d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12147-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/367147f1755502d9bc6189f8e2c3005d-Paper.pdf
https://openreview.net/forum?id=cc_AXK6rWPJ
https://papers.nips.cc/paper_files/paper/2021/file/367147f1755502d9bc6189f8e2c3005d-Supplemental.pdf
We study the problem of learning in the stochastic shortest path (SSP) setting, where an agent seeks to minimize the expected cost accumulated before reaching a goal state. We design a novel model-based algorithm EB-SSP that carefully skews the empirical transitions and perturbs the empirical costs with an exploration bonus to induce an optimistic SSP problem whose associated value iteration scheme is guaranteed to converge. We prove that EB-SSP achieves the minimax regret rate $\widetilde{O}(B_{\star} \sqrt{S A K})$, where $K$ is the number of episodes, $S$ is the number of states, $A$ is the number of actions and $B_{\star}$ bounds the expected cumulative cost of the optimal policy from any state, thus closing the gap with the lower bound. Interestingly, EB-SSP obtains this result while being parameter-free, i.e., it does not require any prior knowledge of $B_{\star}$, nor of $T_{\star}$, which bounds the expected time-to-goal of the optimal policy from any state. Furthermore, we illustrate various cases (e.g., positive costs, or general costs when an order-accurate estimate of $T_{\star}$ is available) where the regret only contains a logarithmic dependence on $T_{\star}$, thus yielding the first (nearly) horizon-free regret bound beyond the finite-horizon MDP setting.
null
Nested Counterfactual Identification from Arbitrary Surrogate Experiments
https://papers.nips.cc/paper_files/paper/2021/hash/36bedb6eb7152f39b16328448942822b-Abstract.html
Juan Correa, Sanghack Lee, Elias Bareinboim
https://papers.nips.cc/paper_files/paper/2021/hash/36bedb6eb7152f39b16328448942822b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12148-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/36bedb6eb7152f39b16328448942822b-Paper.pdf
https://openreview.net/forum?id=z3tlL2MeTK2
null
The Ladder of Causation describes three qualitatively different types of activities an agent may be interested in engaging in, namely, seeing (observational), doing (interventional), and imagining (counterfactual) (Pearl and Mackenzie, 2018). The inferential challenge imposed by the causal hierarchy is that data is collected by an agent observing or intervening in a system (layers 1 and 2), while its goal may be to understand what would have happened had it taken a different course of action, contrary to what factually ended up happening (layer 3). While there exists a solid understanding of the conditions under which cross-layer inferences are allowed from observations to interventions, the results are somewhat scarcer when targeting counterfactual quantities. In this paper, we study the identification of nested counterfactuals from an arbitrary combination of observations and experiments. Specifically, building on a more explicit definition of nested counterfactuals, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones. For instance, applications in mediation and fairness analysis usually evoke notions of direct, indirect, and spurious effects, which naturally require nesting. Second, we introduce a sufficient and necessary graphical condition for counterfactual identification from an arbitrary combination of observational and experimental distributions. Lastly, we develop an efficient and complete algorithm for identifying nested counterfactuals; if the algorithm fails to return an expression for a query, the query is not identifiable.
null
Sim and Real: Better Together
https://papers.nips.cc/paper_files/paper/2021/hash/36f4d832825380f102846560a5104c90-Abstract.html
Shirli Di-Castro, Dotan Di Castro, Shie Mannor
https://papers.nips.cc/paper_files/paper/2021/hash/36f4d832825380f102846560a5104c90-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12149-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/36f4d832825380f102846560a5104c90-Paper.pdf
https://openreview.net/forum?id=t0B9XQwRDi
https://papers.nips.cc/paper_files/paper/2021/file/36f4d832825380f102846560a5104c90-Supplemental.pdf
Simulation is used extensively in autonomous systems, particularly in robotic manipulation. By far, the most common approach is to train a controller in simulation, and then use it as an initial starting point for the real system. We demonstrate how to learn simultaneously from both simulation and interaction with the real environment. We propose an algorithm for balancing the large number of samples from the high throughput but less accurate simulation and the low-throughput, high-fidelity and costly samples from the real environment. We achieve that by maintaining a replay buffer for each environment the agent interacts with. We analyze such multi-environment interaction theoretically, and provide convergence properties, through a novel theoretical replay buffer analysis. We demonstrate the efficacy of our method on a sim-to-real environment.
null
Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions
https://papers.nips.cc/paper_files/paper/2021/hash/371bce7dc83817b7893bcdeed13799b5-Abstract.html
Huan Ma, Zongbo Han, Changqing Zhang, Huazhu Fu, Joey Tianyi Zhou, Qinghua Hu
https://papers.nips.cc/paper_files/paper/2021/hash/371bce7dc83817b7893bcdeed13799b5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12150-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/371bce7dc83817b7893bcdeed13799b5-Paper.pdf
https://openreview.net/forum?id=EckG_zyssVj
https://papers.nips.cc/paper_files/paper/2021/file/371bce7dc83817b7893bcdeed13799b5-Supplemental.pdf
Multimodal regression is a fundamental task, which integrates the information from different sources to improve the performance of follow-up applications. However, existing methods mainly focus on improving the performance and often ignore the confidence of prediction for diverse situations. In this study, we are devoted to trustworthy multimodal regression, which is critical in cost-sensitive domains. To this end, we introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty in principle for adaptive integration of different modalities and produces a trustworthy regression result. Our model can be dynamically aware of uncertainty for each modality, and is also robust to corrupted modalities. Furthermore, the proposed MoNIG ensures explicit representation of (modality-specific/global) epistemic and aleatoric uncertainties, respectively. Experimental results on both synthetic and different real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks (e.g., temperature prediction for superconductivity, relative location prediction for CT slices, and multimodal sentiment analysis).
null
An Empirical Study of Adder Neural Networks for Object Detection
https://papers.nips.cc/paper_files/paper/2021/hash/37693cfc748049e45d87b8c7d8b9aacd-Abstract.html
Xinghao Chen, Chang Xu, Minjing Dong, Chunjing XU, Yunhe Wang
https://papers.nips.cc/paper_files/paper/2021/hash/37693cfc748049e45d87b8c7d8b9aacd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12151-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/37693cfc748049e45d87b8c7d8b9aacd-Paper.pdf
https://openreview.net/forum?id=HRE7guiwMgG
https://papers.nips.cc/paper_files/paper/2021/file/37693cfc748049e45d87b8c7d8b9aacd-Supplemental.pdf
Adder neural networks (AdderNets) have shown impressive performance on image classification with only addition operations, which are more energy efficient than traditional convolutional neural networks built with multiplications. Compared with classification, there is a strong demand for reducing the energy consumption of modern object detectors via AdderNets for real-world applications such as autonomous driving and face detection. In this paper, we present an empirical study of AdderNets for object detection. We first reveal that the batch normalization statistics in the pre-trained adder backbone should not be frozen, owing to the relatively large feature variance of AdderNets. Moreover, we insert more shortcut connections in the neck part and design a new feature fusion architecture for avoiding the sparse features of adder layers. We present extensive ablation studies to explore several design choices of adder detectors. Comparisons with state-of-the-art detectors are conducted on the COCO and PASCAL VOC benchmarks. Specifically, the proposed Adder FCOS achieves a 37.8% AP on the COCO val set, demonstrating comparable performance to that of the convolutional counterpart with about a $1.4\times$ energy reduction.
null
Does Knowledge Distillation Really Work?
https://papers.nips.cc/paper_files/paper/2021/hash/376c6b9ff3bedbbea56751a84fffc10c-Abstract.html
Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, Andrew G. Wilson
https://papers.nips.cc/paper_files/paper/2021/hash/376c6b9ff3bedbbea56751a84fffc10c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12152-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/376c6b9ff3bedbbea56751a84fffc10c-Paper.pdf
https://openreview.net/forum?id=7J-fKoXiReA
https://papers.nips.cc/paper_files/paper/2021/file/376c6b9ff3bedbbea56751a84fffc10c-Supplemental.pdf
Knowledge distillation is a popular technique for training a small student network to emulate a larger teacher model, such as an ensemble of networks. We show that while knowledge distillation can improve student generalization, it does not typically work as it is commonly understood: there often remains a surprisingly large discrepancy between the predictive distributions of the teacher and the student, even in cases when the student has the capacity to perfectly match the teacher. We identify difficulties in optimization as a key reason for why the student is unable to match the teacher. We also show how the details of the dataset used for distillation play a role in how closely the student matches the teacher --- and that more closely matching the teacher paradoxically does not always lead to better student generalization.
null
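For concreteness, the sketch below computes the usual temperature-softened KL distillation objective together with a top-1 teacher-student agreement score, the kind of fidelity metric the abstract above is concerned with; the temperature and synthetic logits are arbitrary.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_kl(teacher_logits, student_logits, T=4.0):
    """Average KL(teacher || student) on temperature-softened predictions,
    the usual distillation objective (the temperature here is arbitrary)."""
    p, q = softmax(teacher_logits, T), softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=1).mean())

def agreement(teacher_logits, student_logits):
    """Top-1 agreement, one way to measure how faithfully the student matches the teacher."""
    return float((teacher_logits.argmax(axis=1) == student_logits.argmax(axis=1)).mean())

rng = np.random.default_rng(0)
teacher = rng.normal(size=(1000, 10))
student = teacher + rng.normal(scale=1.5, size=(1000, 10))   # an imperfect student
print(distillation_kl(teacher, student), agreement(teacher, student))
```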
Teachable Reinforcement Learning via Advice Distillation
https://papers.nips.cc/paper_files/paper/2021/hash/37cfff3c04f95b22bcf166df586cd7a9-Abstract.html
Olivia Watkins, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, Jacob Andreas
https://papers.nips.cc/paper_files/paper/2021/hash/37cfff3c04f95b22bcf166df586cd7a9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12153-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/37cfff3c04f95b22bcf166df586cd7a9-Paper.pdf
https://openreview.net/forum?id=PPh6lqP5BO
https://papers.nips.cc/paper_files/paper/2021/file/37cfff3c04f95b22bcf166df586cd7a9-Supplemental.pdf
Training automated agents to complete complex tasks in interactive environments is challenging: reinforcement learning requires careful hand-engineering of reward functions, imitation learning requires specialized infrastructure and access to a human expert, and learning from intermediate forms of supervision (like binary preferences) is time-consuming and extracts little information from each human intervention. Can we overcome these challenges by building agents that learn from rich, interactive feedback instead? We propose a new supervision paradigm for interactive learning based on "teachable" decision-making systems that learn from structured advice provided by an external teacher. We begin by formalizing a class of human-in-the-loop decision making problems in which multiple forms of teacher-provided advice are available to a learner. We then describe a simple learning algorithm for these problems that first learns to interpret advice, then learns from advice to complete tasks even in the absence of human supervision. In puzzle-solving, navigation, and locomotion domains, we show that agents that learn from advice can acquire new skills with significantly less human supervision than standard reinforcement learning algorithms and often less than imitation learning.
null
Antipodes of Label Differential Privacy: PATE and ALIBI
https://papers.nips.cc/paper_files/paper/2021/hash/37ecd27608480aa3569a511a638ca74f-Abstract.html
Mani Malek Esmaeili, Ilya Mironov, Karthik Prasad, Igor Shilov, Florian Tramer
https://papers.nips.cc/paper_files/paper/2021/hash/37ecd27608480aa3569a511a638ca74f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12154-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/37ecd27608480aa3569a511a638ca74f-Paper.pdf
https://openreview.net/forum?id=sR1XB9-F-rv
https://papers.nips.cc/paper_files/paper/2021/file/37ecd27608480aa3569a511a638ca74f-Supplemental.pdf
We consider the privacy-preserving machine learning (ML) setting where the trained model must satisfy differential privacy (DP) with respect to the labels of the training examples. We propose two novel approaches based on, respectively, the Laplace mechanism and the PATE framework, and demonstrate their effectiveness on standard benchmarks. While recent work by Ghazi et al. proposed Label DP schemes based on a randomized response mechanism, we argue that additive Laplace noise coupled with Bayesian inference (ALIBI) is a better fit for typical ML tasks. Moreover, we show how to achieve very strong privacy levels in some regimes, with our adaptation of the PATE framework that builds on recent advances in semi-supervised learning. We complement theoretical analysis of our algorithms' privacy guarantees with empirical evaluation of their memorization properties. Our evaluation suggests that comparing different algorithms according to their provable DP guarantees can be misleading and favor a less private algorithm with a tighter analysis. Code for implementation of algorithms and memorization attacks is available from https://github.com/facebookresearch/labeldpantipodes.
null
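For the label differential privacy entry above, the sketch below shows only the Laplace-mechanism half of the ALIBI idea: changing one example's label moves its one-hot vector by L1 distance 2, so Laplace noise with scale 2/epsilon makes the released noisy vector epsilon-label-DP. The Bayesian post-processing that turns these noisy vectors into soft training targets is omitted, and the helper name is hypothetical.

```python
import numpy as np

def laplace_randomized_labels(labels, num_classes, epsilon, rng=None):
    """Release a noisy one-hot label per example under epsilon label-DP.
    L1 sensitivity of a one-hot vector to a label change is 2, hence scale 2/eps.
    (Sketch only; ALIBI additionally applies Bayesian inference downstream.)"""
    rng = rng or np.random.default_rng()
    one_hot = np.eye(num_classes)[np.asarray(labels)]
    return one_hot + rng.laplace(scale=2.0 / epsilon, size=one_hot.shape)

print(laplace_randomized_labels([0, 2, 1], num_classes=3, epsilon=1.0).round(2))
```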
Visual Search Asymmetry: Deep Nets and Humans Share Similar Inherent Biases
https://papers.nips.cc/paper_files/paper/2021/hash/37f0e884fbad9667e38940169d0a3c95-Abstract.html
Shashi Kant Gupta, Mengmi Zhang, CHIA-CHIEN WU, Jeremy Wolfe, Gabriel Kreiman
https://papers.nips.cc/paper_files/paper/2021/hash/37f0e884fbad9667e38940169d0a3c95-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12155-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/37f0e884fbad9667e38940169d0a3c95-Paper.pdf
https://openreview.net/forum?id=ar85GL0N11
https://papers.nips.cc/paper_files/paper/2021/file/37f0e884fbad9667e38940169d0a3c95-Supplemental.pdf
Visual search is a ubiquitous and often challenging daily task, exemplified by looking for the car keys at home or a friend in a crowd. An intriguing property of some classical search tasks is an asymmetry such that finding a target A among distractors B can be easier than finding B among A. To elucidate the mechanisms responsible for asymmetry in visual search, we propose a computational model that takes a target and a search image as inputs and produces a sequence of eye movements until the target is found. The model integrates eccentricity-dependent visual recognition with target-dependent top-down cues. We compared the model against human behavior in six paradigmatic search tasks that show asymmetry in humans. Without prior exposure to the stimuli or task-specific training, the model provides a plausible mechanism for search asymmetry. We hypothesized that the polarity of search asymmetry arises from experience with the natural environment. We tested this hypothesis by training the model on augmented versions of ImageNet where the biases of natural images were either removed or reversed. The polarity of search asymmetry disappeared or was altered depending on the training protocol. This study highlights how classical perceptual properties can emerge in neural network models, without the need for task-specific training, but rather as a consequence of the statistical properties of the developmental diet fed to the model. All source code and data are publicly available at https://github.com/kreimanlab/VisualSearchAsymmetry.
null
On the Universality of Graph Neural Networks on Large Random Graphs
https://papers.nips.cc/paper_files/paper/2021/hash/38181d991caac98be8fb2ecb8bd0f166-Abstract.html
Nicolas Keriven, Alberto Bietti, Samuel Vaiter
https://papers.nips.cc/paper_files/paper/2021/hash/38181d991caac98be8fb2ecb8bd0f166-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12156-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/38181d991caac98be8fb2ecb8bd0f166-Paper.pdf
https://openreview.net/forum?id=Xci6vUAGeJ
https://papers.nips.cc/paper_files/paper/2021/file/38181d991caac98be8fb2ecb8bd0f166-Supplemental.pdf
We study the approximation power of Graph Neural Networks (GNNs) on latent position random graphs. In the large graph limit, GNNs are known to converge to certain "continuous" models known as c-GNNs, which directly enables a study of their approximation power on random graph models. In the absence of input node features, however, just as GNNs are limited by the Weisfeiler-Lehman isomorphism test, c-GNNs will be severely limited on simple random graph models. For instance, they will fail to distinguish the communities of a well-separated Stochastic Block Model (SBM) with constant degree function. Thus, we consider recently proposed architectures that augment GNNs with unique node identifiers, referred to as Structural GNNs here (SGNNs). We study the convergence of SGNNs to their continuous counterpart (c-SGNNs) in the large random graph limit, under new conditions on the node identifiers. We then show that c-SGNNs are strictly more powerful than c-GNNs in the continuous limit, and prove their universality on several random graph models of interest, including most SBMs and a large class of random geometric graphs. Our results cover both permutation-invariant and permutation-equivariant architectures.
null
Inverse Reinforcement Learning in a Continuous State Space with Formal Guarantees
https://papers.nips.cc/paper_files/paper/2021/hash/384babc3e7faa44cf1ca671b74499c3b-Abstract.html
Gregory Dexter, Kevin Bello, Jean Honorio
https://papers.nips.cc/paper_files/paper/2021/hash/384babc3e7faa44cf1ca671b74499c3b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12157-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/384babc3e7faa44cf1ca671b74499c3b-Paper.pdf
https://openreview.net/forum?id=-DyvEp1VsmT
https://papers.nips.cc/paper_files/paper/2021/file/384babc3e7faa44cf1ca671b74499c3b-Supplemental.pdf
Inverse Reinforcement Learning (IRL) is the problem of finding a reward function which describes observed/known expert behavior. The IRL setting is remarkably useful for automated control, in situations where the reward function is difficult to specify manually or as a means to extract agent preferences. In this work, we provide a new IRL algorithm for the continuous state space setting with unknown transition dynamics by modeling the system using a basis of orthonormal functions. Moreover, we provide a proof of correctness and formal guarantees on the sample and time complexity of our algorithm. Finally, we present synthetic experiments to corroborate our theoretical guarantees.
null
Adversarial Attacks on Graph Classifiers via Bayesian Optimisation
https://papers.nips.cc/paper_files/paper/2021/hash/38811c5285e34e2e3319ab7d9f2cfa5b-Abstract.html
Xingchen Wan, Henry Kenlay, Robin Ru, Arno Blaas, Michael A Osborne, Xiaowen Dong
https://papers.nips.cc/paper_files/paper/2021/hash/38811c5285e34e2e3319ab7d9f2cfa5b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12158-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/38811c5285e34e2e3319ab7d9f2cfa5b-Paper.pdf
https://openreview.net/forum?id=eXxnkL3QfDY
null
Graph neural networks, a popular class of models effective in a wide range of graph-based learning tasks, have been shown to be vulnerable to adversarial attacks. While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis. The few existing methods often require unrealistic setups, such as access to internal information of the victim models, or an impractically-large number of queries. We present a novel Bayesian optimisation-based attack method for graph classification models. Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied. We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks involving varying graph properties, constraints and modes of attack. Finally, we analyse common interpretable patterns behind the adversarial samples produced, which may shed further light on the adversarial robustness of graph classification models.
null
Regulating algorithmic filtering on social media
https://papers.nips.cc/paper_files/paper/2021/hash/38b4f06e27fd4f6fdcceabc6f5c068ea-Abstract.html
Sarah Cen, Devavrat Shah
https://papers.nips.cc/paper_files/paper/2021/hash/38b4f06e27fd4f6fdcceabc6f5c068ea-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12159-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/38b4f06e27fd4f6fdcceabc6f5c068ea-Paper.pdf
https://openreview.net/forum?id=8RnRLP4SHe0
https://papers.nips.cc/paper_files/paper/2021/file/38b4f06e27fd4f6fdcceabc6f5c068ea-Supplemental.pdf
By filtering the content that users see, social media platforms have the ability to influence users' perceptions and decisions, from their dining choices to their voting preferences. This influence has drawn scrutiny, with many calling for regulations on filtering algorithms, but designing and enforcing regulations remains challenging. In this work, we examine three questions. First, given a regulation, how would one design an audit to enforce it? Second, does the audit impose a performance cost on the platform? Third, how does the audit affect the content that the platform is incentivized to filter? In response to these questions, we propose a method such that, given a regulation, an auditor can test whether that regulation is met with only black-box access to the filtering algorithm. We then turn to the platform's perspective. The platform's goal is to maximize an objective function while meeting regulation. We find that there are conditions under which the regulation does not place a high performance cost on the platform and, notably, that content diversity can play a key role in aligning the interests of the platform and regulators.
null
argmax centroid
https://papers.nips.cc/paper_files/paper/2021/hash/38eb982ee635354d3febf457beeee736-Abstract.html
Chengyue Gong, Mao Ye, Qiang Liu
https://papers.nips.cc/paper_files/paper/2021/hash/38eb982ee635354d3febf457beeee736-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12160-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/38eb982ee635354d3febf457beeee736-Paper.pdf
https://openreview.net/forum?id=UDaab5xzpj
https://papers.nips.cc/paper_files/paper/2021/file/38eb982ee635354d3febf457beeee736-Supplemental.pdf
We propose a general method to construct centroid approximation for the distribution of maximum points of a random function (a.k.a. argmax distribution), which finds broad applications in machine learning. Our method optimizes a set of centroid points to compactly approximate the argmax distribution with a simple objective function, without explicitly drawing exact samples from the argmax distribution. Theoretically, the argmax centroid method can be shown to minimize a surrogate of Wasserstein distance between the ground-truth argmax distribution and the centroid approximation under proper conditions. We demonstrate the applicability and effectiveness of our method on a variety of real-world multi-task learning applications, including few-shot image classification, personalized dialogue systems and multi-target domain adaptation.
null
Contrastive Learning of Global and Local Video Representations
https://papers.nips.cc/paper_files/paper/2021/hash/38ef4b66cb25e92abe4d594acb841471-Abstract.html
shuang ma, Zhaoyang Zeng, Daniel McDuff, Yale Song
https://papers.nips.cc/paper_files/paper/2021/hash/38ef4b66cb25e92abe4d594acb841471-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12161-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/38ef4b66cb25e92abe4d594acb841471-Paper.pdf
https://openreview.net/forum?id=txWfwhc6gi
https://papers.nips.cc/paper_files/paper/2021/file/38ef4b66cb25e92abe4d594acb841471-Supplemental.zip
Contrastive learning has delivered impressive results for various tasks in the self-supervised regime. However, existing approaches optimize for learning representations specific to downstream scenarios, i.e., global representations suitable for tasks such as classification or local representations for tasks such as detection and localization. While they produce satisfactory results in the intended downstream scenarios, they often fail to generalize to tasks that they were not originally designed for. In this work, we propose to learn video representations that generalize to both the tasks which require global semantic information (e.g., classification) and the tasks that require local fine-grained spatio-temporal information (e.g., localization). We achieve this by optimizing two contrastive objectives that together encourage our model to learn global-local visual information given audio signals. We show that the two objectives mutually improve the generalizability of the learned global-local representations, significantly outperforming their disjointly learned counterparts. We demonstrate our approach on various tasks including action/sound classification, lipreading, deepfake detection, event and sound localization.
null
BooVI: Provably Efficient Bootstrapped Value Iteration
https://papers.nips.cc/paper_files/paper/2021/hash/39144da5a6180c47885443c83547ec14-Abstract.html
Boyi Liu, Qi Cai, Zhuoran Yang, Zhaoran Wang
https://papers.nips.cc/paper_files/paper/2021/hash/39144da5a6180c47885443c83547ec14-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12162-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/39144da5a6180c47885443c83547ec14-Paper.pdf
https://openreview.net/forum?id=qKRr_rNCEPz
https://papers.nips.cc/paper_files/paper/2021/file/39144da5a6180c47885443c83547ec14-Supplemental.pdf
Despite the tremendous success of reinforcement learning (RL) with function approximation, efficient exploration remains a significant challenge, both practically and theoretically. In particular, existing theoretically grounded RL algorithms based on upper confidence bounds (UCBs), such as optimistic least-squares value iteration (LSVI), are often incompatible with practically powerful function approximators, such as neural networks. In this paper, we develop a variant of \underline{boo}tstrapped LS\underline{VI}, namely BooVI, which bridges such a gap between practice and theory. Practically, BooVI drives exploration through (re)sampling, making it compatible with general function approximators. Theoretically, BooVI inherits the worst-case $\tilde{O}(\sqrt{d^3 H^3 T})$-regret of optimistic LSVI in the episodic linear setting. Here $d$ is the feature dimension, $H$ is the episode horizon, and $T$ is the total number of steps.
null
Do Wider Neural Networks Really Help Adversarial Robustness?
https://papers.nips.cc/paper_files/paper/2021/hash/3937230de3c8041e4da6ac3246a888e8-Abstract.html
Boxi Wu, Jinghui Chen, Deng Cai, Xiaofei He, Quanquan Gu
https://papers.nips.cc/paper_files/paper/2021/hash/3937230de3c8041e4da6ac3246a888e8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12163-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3937230de3c8041e4da6ac3246a888e8-Paper.pdf
https://openreview.net/forum?id=wxjtOI_8jO
https://papers.nips.cc/paper_files/paper/2021/file/3937230de3c8041e4da6ac3246a888e8-Supplemental.pdf
Adversarial training is a powerful type of defense against adversarial examples. Previous empirical results suggest that adversarial training requires wider networks for better performance. However, it remains elusive how neural network width affects model robustness. In this paper, we carefully examine the relationship between network width and model robustness. Specifically, we show that model robustness is closely related to the tradeoff between natural accuracy and perturbation stability, which is controlled by the robust regularization parameter λ. With the same λ, wider networks can achieve better natural accuracy but worse perturbation stability, leading to potentially worse overall model robustness. To understand the origin of this phenomenon, we further relate perturbation stability to the network's local Lipschitzness. By leveraging recent results on neural tangent kernels, we theoretically show that wider networks tend to have worse perturbation stability. Our analyses suggest that: 1) the common strategy of first fine-tuning λ on small networks and then directly using it for wide model training could lead to deteriorated model robustness; 2) one needs to properly enlarge λ to fully unleash the robustness potential of wider models. Finally, we propose a new Width Adjusted Regularization (WAR) method that adaptively enlarges λ on wide models and significantly saves the tuning time.
null
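The width/robustness abstract above reasons in terms of a natural-accuracy term plus a λ-weighted perturbation-stability term. The sketch below writes down one common TRADES-style instance of that decomposition and a toy width-adjusted λ; the linear scaling rule is purely an assumption for illustration and is not claimed to be the paper's WAR schedule.

```python
import numpy as np

def robust_objective(p_clean, p_adv, labels, lam):
    """Natural cross-entropy plus lam * perturbation-stability penalty, where the
    penalty is the KL between clean and adversarial predictive distributions
    (a TRADES-style stand-in for the tradeoff discussed in the abstract)."""
    n = len(labels)
    ce = -np.mean(np.log(p_clean[np.arange(n), labels] + 1e-12))
    kl = np.mean(np.sum(p_clean * (np.log(p_clean + 1e-12) - np.log(p_adv + 1e-12)), axis=-1))
    return ce + lam * kl

def width_adjusted_lambda(base_lam, width_factor):
    """Toy illustration of enlarging lambda on wider models; the exact rule used
    by WAR is not reproduced here."""
    return base_lam * width_factor

p_clean = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
p_adv = np.array([[0.5, 0.3, 0.2], [0.2, 0.6, 0.2]])
print(robust_objective(p_clean, p_adv, labels=np.array([0, 1]),
                       lam=width_adjusted_lambda(6.0, width_factor=2.0)))
```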
Exploring the Limits of Out-of-Distribution Detection
https://papers.nips.cc/paper_files/paper/2021/hash/3941c4358616274ac2436eacf67fae05-Abstract.html
Stanislav Fort, Jie Ren, Balaji Lakshminarayanan
https://papers.nips.cc/paper_files/paper/2021/hash/3941c4358616274ac2436eacf67fae05-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12164-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3941c4358616274ac2436eacf67fae05-Paper.pdf
https://openreview.net/forum?id=j5NrN8ffXC
https://papers.nips.cc/paper_files/paper/2021/file/3941c4358616274ac2436eacf67fae05-Supplemental.pdf
Near out-of-distribution detection (OOD) is a major challenge for deep neural networks. We demonstrate that large-scale pre-trained transformers can significantly improve the state-of-the-art (SOTA) on a range of near OOD tasks across different data modalities. For instance, on CIFAR-100 vs CIFAR-10 OOD detection, we improve the AUROC from 85% (current SOTA) to more than 96% using Vision Transformers pre-trained on ImageNet21k. On a challenging genomics OOD detection benchmark, we improve the AUROC from 66% to 77% using transformer and unsupervised pre-training. To further improve performance, we explore the few-shot outlier exposure setting where a few examples from outlier classes may be available; we show that pre-trained transformers are particularly well-suited for outlier exposure, and that the AUROC of OOD detection on CIFAR-100 vs CIFAR-10 can be improved to 98.7% with just 1 image per OOD class, and 99.46% with 10 images per OOD class. For multi-modal image-text pre-trained transformers such as CLIP, we explore a new way of using just the names of outlier classes as a sole source of information without any accompanying images, and show that this outperforms previous SOTA on standard OOD benchmark tasks.
null
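The out-of-distribution detection entry above scores inputs using features from large pre-trained transformers. As a concrete stand-in for such a scoring rule, the sketch below computes a class-conditional Mahalanobis distance on fixed feature vectors, which is one common recipe in this line of work; it is not claimed to be the paper's exact procedure.

```python
import numpy as np

def mahalanobis_ood_scores(train_feats, train_labels, test_feats):
    """Higher score = more likely out-of-distribution. Class means share a single
    covariance estimated from class-centered training features (assumed setup)."""
    classes = np.unique(train_labels)
    means = np.stack([train_feats[train_labels == c].mean(axis=0) for c in classes])
    centered = np.concatenate([train_feats[train_labels == c] - means[i]
                               for i, c in enumerate(classes)])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    precision = np.linalg.inv(cov)
    d = test_feats[:, None, :] - means[None, :, :]            # (N, C, D)
    dists = np.einsum('ncd,de,nce->nc', d, precision, d)      # squared distances
    return dists.min(axis=1)

rng = np.random.default_rng(0)
feats, labels = rng.normal(size=(200, 8)), rng.integers(0, 4, size=200)
print(mahalanobis_ood_scores(feats, labels, rng.normal(size=(5, 8))).round(2))
```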
ABC: Auxiliary Balanced Classifier for Class-imbalanced Semi-supervised Learning
https://papers.nips.cc/paper_files/paper/2021/hash/3953630da28e5181cffca1278517e3cf-Abstract.html
Hyuck Lee, Seungjae Shin, Heeyoung Kim
https://papers.nips.cc/paper_files/paper/2021/hash/3953630da28e5181cffca1278517e3cf-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12165-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3953630da28e5181cffca1278517e3cf-Paper.pdf
https://openreview.net/forum?id=1G6jPa9SKYG
https://papers.nips.cc/paper_files/paper/2021/file/3953630da28e5181cffca1278517e3cf-Supplemental.pdf
Existing semi-supervised learning (SSL) algorithms typically assume class-balanced datasets, although the class distributions of many real world datasets are imbalanced. In general, classifiers trained on a class-imbalanced dataset are biased toward the majority classes. This issue becomes more problematic for SSL algorithms because they utilize the biased prediction of unlabeled data for training. However, traditional class-imbalanced learning techniques, which are designed for labeled data, cannot be readily combined with SSL algorithms. We propose a scalable class-imbalanced SSL algorithm that can effectively use unlabeled data, while mitigating class imbalance by introducing an auxiliary balanced classifier (ABC) of a single layer, which is attached to a representation layer of an existing SSL algorithm. The ABC is trained with a class-balanced loss of a minibatch, while using high-quality representations learned from all data points in the minibatch using the backbone SSL algorithm to avoid overfitting and information loss. Moreover, we use consistency regularization, a recent SSL technique for utilizing unlabeled data in a modified way, to train the ABC to be balanced among the classes by selecting unlabeled data with the same probability for each class. The proposed algorithm achieves state-of-the-art performance in various class-imbalanced SSL experiments using four benchmark datasets.
null
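For the ABC entry above, the key balancing idea is that each class should contribute roughly equally to the auxiliary classifier's loss. The sketch below implements one simple Bernoulli-masking realization of that idea for labeled data; the exact masking and consistency-regularization details in the paper may differ, and the function name is hypothetical.

```python
import numpy as np

def class_balanced_mask(labels, class_counts, rng=None):
    """Keep each labeled example with probability min_count / count(its class),
    so every class has the same expected number of unmasked examples."""
    rng = rng or np.random.default_rng()
    labels = np.asarray(labels)
    class_counts = np.asarray(class_counts, dtype=float)
    keep_prob = class_counts.min() / class_counts[labels]
    return (rng.random(len(labels)) < keep_prob).astype(float)

labels = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2])    # imbalanced toy batch
print(class_balanced_mask(labels, class_counts=np.array([6, 2, 1])))
```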
BCD Nets: Scalable Variational Approaches for Bayesian Causal Discovery
https://papers.nips.cc/paper_files/paper/2021/hash/39799c18791e8d7eb29704fc5bc04ac8-Abstract.html
Chris Cundy, Aditya Grover, Stefano Ermon
https://papers.nips.cc/paper_files/paper/2021/hash/39799c18791e8d7eb29704fc5bc04ac8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12166-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/39799c18791e8d7eb29704fc5bc04ac8-Paper.pdf
https://openreview.net/forum?id=gbtDcLzwKUb
https://papers.nips.cc/paper_files/paper/2021/file/39799c18791e8d7eb29704fc5bc04ac8-Supplemental.pdf
A structural equation model (SEM) is an effective framework to reason over causal relationships represented via a directed acyclic graph (DAG). Recent advances have enabled effective maximum-likelihood point estimation of DAGs from observational data. However, a point estimate may not accurately capture the uncertainty in inferring the underlying graph in practical scenarios, wherein the true DAG is non-identifiable and/or the observed dataset is limited. We propose Bayesian Causal Discovery Nets (BCD Nets), a variational inference framework for estimating a distribution over DAGs characterizing a linear-Gaussian SEM. Developing a full Bayesian posterior over DAGs is challenging due to the discrete and combinatorial nature of graphs. We analyse key design choices for scalable VI over DAGs, such as 1) the parametrization of DAGs via an expressive variational family, 2) a continuous relaxation that enables low-variance stochastic optimization, and 3) suitable priors over the latent variables. We provide a series of experiments on real and synthetic data showing that BCD Nets outperform maximum-likelihood methods on standard causal discovery metrics such as structural Hamming distance in low data regimes.
null
Discovering Dynamic Salient Regions for Spatio-Temporal Graph Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/398410ece9d7343091093a2a7f8ee381-Abstract.html
Iulia Duta, Andrei Nicolicioiu, Marius Leordeanu
https://papers.nips.cc/paper_files/paper/2021/hash/398410ece9d7343091093a2a7f8ee381-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12167-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/398410ece9d7343091093a2a7f8ee381-Paper.pdf
https://openreview.net/forum?id=2E4AT-qj3Dg
https://papers.nips.cc/paper_files/paper/2021/file/398410ece9d7343091093a2a7f8ee381-Supplemental.pdf
Graph Neural Networks are perfectly suited to capture latent interactions between various entities in the spatio-temporal domain (e.g. videos). However, when an explicit structure is not available, it is not obvious what atomic elements should be represented as nodes. Current works generally use pre-trained object detectors or fixed, predefined regions to extract graph nodes. Improving upon this, our proposed model learns nodes that dynamically attach to well-delimited salient regions, which are relevant for a higher-level task, without using any object-level supervision. Constructing these localized, adaptive nodes gives our model inductive bias towards object-centric representations and we show that it discovers regions that are well correlated with objects in the video. In extensive ablation studies and experiments on two challenging datasets, we show superior performance to previous graph neural networks models for video classification.
null
Information-constrained optimization: can adaptive processing of gradients help?
https://papers.nips.cc/paper_files/paper/2021/hash/398475c83b47075e8897a083e97eb9f0-Abstract.html
Jayadev Acharya, Clement Canonne, Prathamesh Mayekar, Himanshu Tyagi
https://papers.nips.cc/paper_files/paper/2021/hash/398475c83b47075e8897a083e97eb9f0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12168-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/398475c83b47075e8897a083e97eb9f0-Paper.pdf
https://openreview.net/forum?id=h7aSBWbX7S4
https://papers.nips.cc/paper_files/paper/2021/file/398475c83b47075e8897a083e97eb9f0-Supplemental.pdf
We revisit first-order optimization under local information constraints such as local privacy, gradient quantization, and computational constraints limiting access to a few coordinates of the gradient. In this setting, the optimization algorithm is not allowed to directly access the complete output of the gradient oracle, but only gets limited information about it subject to the local information constraints. We study the role of adaptivity in processing the gradient output to obtain this limited information from it, and obtain tight or nearly tight bounds for both convex and strongly convex optimization when adaptive gradient processing is allowed.
null
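The information-constrained optimization entry above studies oracles whose gradient output is, for example, quantized before the optimizer sees it. The sketch below shows one standard unbiased stochastic quantizer as a concrete instance of such a constraint; the paper is about upper and lower bounds under adaptive versus non-adaptive processing, not about this particular scheme.

```python
import numpy as np

def stochastic_quantize(grad, levels=4, rng=None):
    """Unbiased randomized rounding of each coordinate to `levels` magnitude levels.
    The max-magnitude scale would also have to be communicated in practice."""
    rng = rng or np.random.default_rng()
    scale = np.abs(grad).max() + 1e-12
    x = np.abs(grad) / scale * (levels - 1)
    low = np.floor(x)
    q = low + (rng.random(grad.shape) < (x - low))     # E[q] = x, hence unbiased
    return np.sign(grad) * q / (levels - 1) * scale

rng = np.random.default_rng(0)
g = rng.normal(size=6)
print(g.round(3), stochastic_quantize(g).round(3), sep="\n")
```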
Towards Calibrated Model for Long-Tailed Visual Recognition from Prior Perspective
https://papers.nips.cc/paper_files/paper/2021/hash/39ae2ed11b14a4ccb41d35e9d1ba5d11-Abstract.html
Zhengzhuo Xu, Zenghao Chai, Chun Yuan
https://papers.nips.cc/paper_files/paper/2021/hash/39ae2ed11b14a4ccb41d35e9d1ba5d11-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12169-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/39ae2ed11b14a4ccb41d35e9d1ba5d11-Paper.pdf
https://openreview.net/forum?id=vqzAfN-BoA_
https://papers.nips.cc/paper_files/paper/2021/file/39ae2ed11b14a4ccb41d35e9d1ba5d11-Supplemental.pdf
Real-world data universally confronts a severe class-imbalance problem and exhibits a long-tailed distribution, i.e., most labels are associated with only a few instances. Naïve models supervised by such datasets prefer dominant labels, encounter serious generalization challenges, and become poorly calibrated. We propose two novel methods from the prior perspective to alleviate this dilemma. First, we derive a balance-oriented data augmentation named Uniform Mixup (UniMix) to promote mixup in long-tailed scenarios, which adopts an advanced mixing factor and sampler in favor of the minority classes. Second, motivated by Bayesian theory, we identify the Bayes Bias (Bayias), an inherent bias caused by the inconsistency of the prior, and compensate for it with a modification of the standard cross-entropy loss. We further show, both theoretically and empirically, that the proposed methods ensure classification calibration. Extensive experiments verify that our strategies contribute to a better-calibrated model, and their combination achieves state-of-the-art performance on CIFAR-LT, ImageNet-LT, and iNaturalist 2018.
null
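The long-tailed recognition entry above compensates for the inconsistency between the training prior and the balanced test prior through a modification of the cross-entropy loss. The sketch below shows a generic prior-compensation (logit-adjustment) form of that idea; it is an illustration in the same spirit, not the paper's exact Bayias equation.

```python
import numpy as np

def prior_compensated_cross_entropy(logits, labels, class_priors):
    """Shift logits by log class prior before the softmax cross-entropy so that
    head classes no longer dominate the loss (assumed illustrative form)."""
    z = logits + np.log(np.asarray(class_priors))[None, :]
    z = z - z.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -float(np.mean(log_probs[np.arange(len(labels)), labels]))

rng = np.random.default_rng(0)
logits, labels = rng.normal(size=(4, 3)), np.array([0, 2, 1, 0])
print(prior_compensated_cross_entropy(logits, labels, class_priors=[0.7, 0.2, 0.1]))
```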
Learning to Draw: Emergent Communication through Sketching
https://papers.nips.cc/paper_files/paper/2021/hash/39d0a8908fbe6c18039ea8227f827023-Abstract.html
Daniela Mihai, Jonathon Hare
https://papers.nips.cc/paper_files/paper/2021/hash/39d0a8908fbe6c18039ea8227f827023-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12170-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/39d0a8908fbe6c18039ea8227f827023-Paper.pdf
https://openreview.net/forum?id=YIyYkoJX2eA
https://papers.nips.cc/paper_files/paper/2021/file/39d0a8908fbe6c18039ea8227f827023-Supplemental.pdf
Evidence that visual communication preceded written language and provided a basis for it goes back to prehistory, in forms such as cave and rock paintings depicting traces of our distant ancestors. Emergent communication research has sought to explore how agents can learn to communicate in order to collaboratively solve tasks. Existing research has focused on language, with a learned communication channel transmitting sequences of discrete tokens between the agents. In this work, we explore a visual communication channel between agents that are allowed to draw with simple strokes. Our agents are parameterised by deep neural networks, and the drawing procedure is differentiable, allowing for end-to-end training. In the framework of a referential communication game, we demonstrate that agents can not only successfully learn to communicate by drawing, but with appropriate inductive biases, can do so in a fashion that humans can interpret. We hope to encourage future research to consider visual communication as a more flexible and directly interpretable alternative for training collaborative agents.
null
Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/39d4b545fb02556829aab1db805021c3-Abstract.html
Jesse Hagenaars, Federico Paredes-Valles, Guido de Croon
https://papers.nips.cc/paper_files/paper/2021/hash/39d4b545fb02556829aab1db805021c3-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12171-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/39d4b545fb02556829aab1db805021c3-Paper.pdf
https://openreview.net/forum?id=MySjw6CHPa4
https://papers.nips.cc/paper_files/paper/2021/file/39d4b545fb02556829aab1db805021c3-Supplemental.pdf
The field of neuromorphic computing promises extremely low-power and low-latency sensing and processing. Challenges in transferring learning algorithms from traditional artificial neural networks (ANNs) to spiking neural networks (SNNs) have so far prevented their application to large-scale, complex regression tasks. Furthermore, realizing a truly asynchronous and fully neuromorphic pipeline that maximally attains the abovementioned benefits involves rethinking the way in which this pipeline takes in and accumulates information. In the case of perception, spikes would be passed as-is and one-by-one between an event camera and an SNN, meaning all temporal integration of information must happen inside the network. In this article, we tackle these two problems. We focus on the complex task of learning to estimate optical flow from event-based camera inputs in a self-supervised manner, and modify the state-of-the-art ANN training pipeline to encode minimal temporal information in its inputs. Moreover, we reformulate the self-supervised loss function for event-based optical flow to improve its convexity. We perform experiments with various types of recurrent ANNs and SNNs using the proposed pipeline. Concerning SNNs, we investigate the effects of elements such as parameter initialization and optimization, surrogate gradient shape, and adaptive neuronal mechanisms. We find that initialization and surrogate gradient width play a crucial part in enabling learning with sparse inputs, while the inclusion of adaptivity and learnable neuronal parameters can improve performance. We show that the performance of the proposed ANNs and SNNs are on par with that of the current state-of-the-art ANNs trained in a self-supervised manner.
null
On the Value of Infinite Gradients in Variational Autoencoder Models
https://papers.nips.cc/paper_files/paper/2021/hash/3a15c7d0bbe60300a39f76f8a5ba6896-Abstract.html
Bin Dai, Li Wenliang, David Wipf
https://papers.nips.cc/paper_files/paper/2021/hash/3a15c7d0bbe60300a39f76f8a5ba6896-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12172-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3a15c7d0bbe60300a39f76f8a5ba6896-Paper.pdf
https://openreview.net/forum?id=YAv9enSDW-a
null
A number of recent studies of continuous variational autoencoder (VAE) models have noted, either directly or indirectly, the tendency of various parameter gradients to drift towards infinity during training. Because such gradients could potentially contribute to numerical instabilities, and are often framed as a problematic phenomenon to be avoided, it may be tempting to shift to alternative energy functions that guarantee bounded gradients. But it remains an open question: What might the unintended consequences of such a restriction be? To address this issue, we examine how unbounded gradients relate to the regularization of a broad class of autoencoder-based architectures, including VAE models, as applied to data lying on or near a low-dimensional manifold (e.g., natural images). Our main finding is that, if the ultimate goal is to simultaneously avoid over-regularization (high reconstruction errors, sometimes referred to as posterior collapse) and under-regularization (excessive latent dimensions are not pruned from the model), then an autoencoder-based energy function with infinite gradients around optimal representations is provably required, in a certain technical sense which we carefully detail. Given that both over- and under-regularization can directly lead to poor generated sample quality or suboptimal feature selection, this result suggests that heuristic modifications to or constraints on the VAE energy function may at times be ill-advised, and large gradients should be accommodated to the extent possible.
null
Online Robust Reinforcement Learning with Model Uncertainty
https://papers.nips.cc/paper_files/paper/2021/hash/3a4496776767aaa99f9804d0905fe584-Abstract.html
Yue Wang, Shaofeng Zou
https://papers.nips.cc/paper_files/paper/2021/hash/3a4496776767aaa99f9804d0905fe584-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12173-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3a4496776767aaa99f9804d0905fe584-Paper.pdf
https://openreview.net/forum?id=IhiU6AJYpDs
https://papers.nips.cc/paper_files/paper/2021/file/3a4496776767aaa99f9804d0905fe584-Supplemental.pdf
Robust reinforcement learning (RL) aims to find a policy that optimizes the worst-case performance over an uncertainty set of MDPs. In this paper, we focus on model-free robust RL, where the uncertainty set is defined to be centered at a misspecified MDP that generates samples, and is assumed to be unknown. We develop a sample-based approach to estimate the unknown uncertainty set, and design a robust Q-learning algorithm (tabular case) and a robust TDC algorithm (function approximation setting), which can be implemented in an online and incremental fashion. For the robust Q-learning algorithm, we prove that it converges to the optimal robust Q function, and for the robust TDC algorithm, we prove that it converges asymptotically to some stationary points. Unlike the results in [Roy et al., 2017], our algorithms do not need any additional conditions on the discount factor to guarantee convergence. We further characterize the finite-time error bounds of the two algorithms, and show that both the robust Q-learning and robust TDC algorithms converge as fast as their vanilla counterparts (within a constant factor). Our numerical experiments further demonstrate the robustness of our algorithms. Our approach can be readily extended to robustify many other algorithms, e.g., TD, SARSA, and other GTD algorithms.
null
Neural View Synthesis and Matching for Semi-Supervised Few-Shot Learning of 3D Pose
https://papers.nips.cc/paper_files/paper/2021/hash/3a61ed715ee66c48bacf237fa7bb5289-Abstract.html
Angtian Wang, Shenxiao Mei, Alan L. Yuille, Adam Kortylewski
https://papers.nips.cc/paper_files/paper/2021/hash/3a61ed715ee66c48bacf237fa7bb5289-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12174-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3a61ed715ee66c48bacf237fa7bb5289-Paper.pdf
https://openreview.net/forum?id=JhCcUMFEq7
https://papers.nips.cc/paper_files/paper/2021/file/3a61ed715ee66c48bacf237fa7bb5289-Supplemental.zip
We study the problem of learning to estimate the 3D object pose from a few labelled examples and a collection of unlabelled data. Our main contribution is a learning framework, neural view synthesis and matching, that can transfer the 3D pose annotation from the labelled to unlabelled images reliably, despite unseen 3D views and nuisance variations such as the object shape, texture, illumination or scene context. In our approach, objects are represented as 3D cuboid meshes composed of feature vectors at each mesh vertex. The model is initialized from a few labelled images and is subsequently used to synthesize feature representations of unseen 3D views. The synthesized views are matched with the feature representations of unlabelled images to generate pseudo-labels of the 3D pose. The pseudo-labelled data is, in turn, used to train the feature extractor such that the features at each mesh vertex are more invariant across varying 3D views of the object. Our model is trained in an EM-type manner alternating between increasing the 3D pose invariance of the feature extractor and annotating unlabelled data through neural view synthesis and matching. We demonstrate the effectiveness of the proposed semi-supervised learning framework for 3D pose estimation on the PASCAL3D+ and KITTI datasets. We find that our approach outperforms all baselines by a wide margin, particularly in an extreme few-shot setting where only 7 annotated images are given. Remarkably, we observe that our model also achieves an exceptional robustness in out-of-distribution scenarios that involve partial occlusion.
null
Sharp Impossibility Results for Hyper-graph Testing
https://papers.nips.cc/paper_files/paper/2021/hash/3b24156ad560a696116454056bc88ab4-Abstract.html
Jiashun Jin, Zheng Tracy Ke, Jiajun Liang
https://papers.nips.cc/paper_files/paper/2021/hash/3b24156ad560a696116454056bc88ab4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12175-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3b24156ad560a696116454056bc88ab4-Paper.pdf
https://openreview.net/forum?id=bYIddUC7AYO
null
In a broad Degree-Corrected Mixed-Membership (DCMM) setting, we test whether a non-uniform hypergraph has only one community or has multiple communities. Since both the null and alternative hypotheses have many unknown parameters, the challenge is, given an alternative, how to identify the null that is hardest to separate from the alternative. We approach this by proposing a degree matching strategy where the main idea is leveraging the theory for tensor scaling to create a least favorable pair of hypotheses. We present a result on standard minimax lower bound theory and a result on Region of Impossibility (which is more informative than the minimax lower bound). We show that our lower bounds are tight by introducing a new test that attains the lower bound up to a logarithmic factor. We also discuss the case where the hypergraphs may have mixed-memberships.
null
Evaluating Gradient Inversion Attacks and Defenses in Federated Learning
https://papers.nips.cc/paper_files/paper/2021/hash/3b3fff6463464959dcd1b68d0320f781-Abstract.html
Yangsibo Huang, Samyak Gupta, Zhao Song, Kai Li, Sanjeev Arora
https://papers.nips.cc/paper_files/paper/2021/hash/3b3fff6463464959dcd1b68d0320f781-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12176-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3b3fff6463464959dcd1b68d0320f781-Paper.pdf
https://openreview.net/forum?id=0CDKgyYaxC8
https://papers.nips.cc/paper_files/paper/2021/file/3b3fff6463464959dcd1b68d0320f781-Supplemental.pdf
Gradient inversion attacks (or input recovery from gradients) are an emerging threat to the security and privacy of federated learning, whereby malicious eavesdroppers or participants in the protocol can partially recover the clients' private data. This paper evaluates existing attacks and defenses. We find that some attacks make strong assumptions about the setup. Relaxing such assumptions can substantially weaken these attacks. We then evaluate the benefits of three proposed defense mechanisms against gradient inversion attacks. We show the trade-offs between privacy leakage and data utility of these defense methods, and find that combining them in an appropriate manner makes the attack less effective, even under the original strong assumptions. We also estimate the computation cost of end-to-end recovery of a single image under each evaluated defense. Our findings suggest that the state-of-the-art attacks can currently be defended against with minor data utility loss, as summarized in a list of potential strategies.
null
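To make the threat in the gradient-inversion entry above concrete, the toy below shows the simplest possible case: for single-example logistic regression the shared gradient equals the private input times a scalar, so its direction leaks exactly. The attacks evaluated in the paper instead solve an optimization problem against deep-network gradients, but the leakage principle is the same; all names here are illustrative.

```python
import numpy as np

def input_direction_from_gradient(grad_w):
    """For logistic regression, grad_w = (sigmoid(w @ x) - y) * x, so the gradient
    reveals the private input x up to an unknown scalar."""
    return grad_w / (np.linalg.norm(grad_w) + 1e-12)

rng = np.random.default_rng(0)
x, y, w = rng.normal(size=8), 1.0, rng.normal(size=8)       # client's private data
grad = (1.0 / (1.0 + np.exp(-w @ x)) - y) * x               # gradient sent to server
x_hat = input_direction_from_gradient(grad)
print(abs(x_hat @ x / np.linalg.norm(x)))                   # ~1.0: direction recovered
```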
Faster Non-asymptotic Convergence for Double Q-learning
https://papers.nips.cc/paper_files/paper/2021/hash/3b712de48137572f3849aabd5666a4e3-Abstract.html
Lin Zhao, Huaqing Xiong, Yingbin Liang
https://papers.nips.cc/paper_files/paper/2021/hash/3b712de48137572f3849aabd5666a4e3-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12177-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3b712de48137572f3849aabd5666a4e3-Paper.pdf
https://openreview.net/forum?id=sZu0b4WrElD
null
Double Q-learning (Hasselt, 2010) has gained significant success in practice due to its effectiveness in overcoming the overestimation issue of Q-learning. However, the theoretical understanding of double Q-learning is rather limited. The only existing finite-time analysis was recently established in (Xiong et al. 2020), where the polynomial learning rate adopted in the analysis typically yields a slower convergence rate. This paper tackles the more challenging case of a constant learning rate, and develops new analytical tools that improve the existing convergence rate by orders of magnitude. Specifically, we show that synchronous double Q-learning attains an $\epsilon$-accurate global optimum with a time complexity of $\tilde{\Omega}\left(\frac{\ln D}{(1-\gamma)^7\epsilon^2} \right)$, and the asynchronous algorithm achieves a time complexity of $\tilde{\Omega}\left(\frac{L}{(1-\gamma)^7\epsilon^2} \right)$, where $D$ is the cardinality of the state-action space, $\gamma$ is the discount factor, and $L$ is a parameter related to the sampling strategy for asynchronous double Q-learning. These results improve the existing convergence rate by orders of magnitude in terms of its dependence on all major parameters $(\epsilon,1-\gamma, D, L)$. This paper presents a substantial step toward a full understanding of the fast convergence of double Q-learning.
null
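For reference alongside the double Q-learning entry above, here is the standard tabular update it analyzes, written with a constant step size as in the paper's setting; the tie-breaking and exploration policy are left out.

```python
import numpy as np

def double_q_step(QA, QB, s, a, r, s_next, alpha, gamma, rng=None):
    """One double Q-learning update (Hasselt, 2010): with probability 1/2, QA is
    updated using QB to evaluate QA's greedy next action; otherwise roles swap."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:
        a_star = int(np.argmax(QA[s_next]))
        QA[s, a] += alpha * (r + gamma * QB[s_next, a_star] - QA[s, a])
    else:
        b_star = int(np.argmax(QB[s_next]))
        QB[s, a] += alpha * (r + gamma * QA[s_next, b_star] - QB[s, a])

QA, QB = np.zeros((3, 2)), np.zeros((3, 2))
double_q_step(QA, QB, s=0, a=1, r=1.0, s_next=2, alpha=0.1, gamma=0.9)
print(QA, QB, sep="\n")
```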
Towards Tight Communication Lower Bounds for Distributed Optimisation
https://papers.nips.cc/paper_files/paper/2021/hash/3b92d18aa7a6176dd37d372bc2f1eb71-Abstract.html
Janne H. Korhonen, Dan Alistarh
https://papers.nips.cc/paper_files/paper/2021/hash/3b92d18aa7a6176dd37d372bc2f1eb71-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12178-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3b92d18aa7a6176dd37d372bc2f1eb71-Paper.pdf
https://openreview.net/forum?id=86iCmraCBL
https://papers.nips.cc/paper_files/paper/2021/file/3b92d18aa7a6176dd37d372bc2f1eb71-Supplemental.pdf
We consider a standard distributed optimisation setting where $N$ machines, each holding a $d$-dimensional function $f_i$, aim to jointly minimise the sum of the functions $\sum_{i = 1}^N f_i (x)$. This problem arises naturally in large-scale distributed optimisation, where a standard solution is to apply variants of (stochastic) gradient descent. We focus on the communication complexity of this problem: our main result provides the first fully unconditional bounds on total number of bits which need to be sent and received by the $N$ machines to solve this problem under point-to-point communication, within a given error-tolerance. Specifically, we show that $\Omega( Nd \log d / N\varepsilon)$ total bits need to be communicated between the machines to find an additive $\epsilon$-approximation to the minimum of $\sum_{i = 1}^N f_i (x)$. The result holds for both deterministic and randomised algorithms, and, importantly, requires no assumptions on the algorithm structure. The lower bound is tight under certain restrictions on parameter values, and is matched within constant factors for quadratic objectives by a new variant of quantised gradient descent, which we describe and analyse. Our results bring over tools from communication complexity to distributed optimisation, which has potential for further applications.
null
Fast Multi-Resolution Transformer Fine-tuning for Extreme Multi-label Text Classification
https://papers.nips.cc/paper_files/paper/2021/hash/3bbca1d243b01b47c2bf42b29a8b265c-Abstract.html
Jiong Zhang, Wei-Cheng Chang, Hsiang-Fu Yu, Inderjit Dhillon
https://papers.nips.cc/paper_files/paper/2021/hash/3bbca1d243b01b47c2bf42b29a8b265c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12179-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3bbca1d243b01b47c2bf42b29a8b265c-Paper.pdf
https://openreview.net/forum?id=gjBz22V93a
https://papers.nips.cc/paper_files/paper/2021/file/3bbca1d243b01b47c2bf42b29a8b265c-Supplemental.pdf
Extreme multi-label text classification (XMC) seeks to find relevant labels from an extremely large label collection for a given text input. Many real-world applications can be formulated as XMC problems, such as recommendation systems, document tagging and semantic search. Recently, transformer based XMC methods, such as X-Transformer and LightXML, have shown significant improvement over other XMC methods. Despite leveraging pre-trained transformer models for text representation, the fine-tuning procedure of transformer models on large label spaces still requires lengthy computational time even with powerful GPUs. In this paper, we propose a novel recursive approach, XR-Transformer, to accelerate the procedure through recursively fine-tuning transformer models on a series of multi-resolution objectives related to the original XMC objective function. Empirical results show that XR-Transformer takes significantly less training time compared to other transformer-based XMC models while yielding better state-of-the-art results. In particular, on the public Amazon-3M dataset with 3 million labels, XR-Transformer is not only 20x faster than X-Transformer but also improves the Precision@1 from 51% to 54%.
null
HRFormer: High-Resolution Vision Transformer for Dense Predict
https://papers.nips.cc/paper_files/paper/2021/hash/3bbfdde8842a5c44a0323518eec97cbe-Abstract.html
YUHUI YUAN, Rao Fu, Lang Huang, Weihong Lin, Chao Zhang, Xilin Chen, Jingdong Wang
https://papers.nips.cc/paper_files/paper/2021/hash/3bbfdde8842a5c44a0323518eec97cbe-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12180-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3bbfdde8842a5c44a0323518eec97cbe-Paper.pdf
https://openreview.net/forum?id=DF8LCjR03tX
https://papers.nips.cc/paper_files/paper/2021/file/3bbfdde8842a5c44a0323518eec97cbe-Supplemental.pdf
We present a High-Resolution Transformer (HRFormer) that learns high-resolution representations for dense prediction tasks, in contrast to the original Vision Transformer that produces low-resolution representations and has high memory and computational cost. We take advantage of the multi-resolution parallel design introduced in high-resolution convolutional networks (HRNet [45]), along with local-window self-attention that performs self-attention over small non-overlapping image windows [21], for improving the memory and computation efficiency. In addition, we introduce a convolution into the FFN to exchange information across the disconnected image windows. We demonstrate the effectiveness of the High-Resolution Transformer on both human pose estimation and semantic segmentation tasks, e.g., HRFormer outperforms the Swin transformer [27] by 1.3 AP on COCO pose estimation with 50% fewer parameters and 30% fewer FLOPs. Code is available at: https://github.com/HRNet/HRFormer
null
Manifold Topology Divergence: a Framework for Comparing Data Manifolds.
https://papers.nips.cc/paper_files/paper/2021/hash/3bc31a430954d8326605fc690ed22f4d-Abstract.html
Serguei Barannikov, Ilya Trofimov, Grigorii Sotnikov, Ekaterina Trimbach, Alexander Korotin, Alexander Filippov, Evgeny Burnaev
https://papers.nips.cc/paper_files/paper/2021/hash/3bc31a430954d8326605fc690ed22f4d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12181-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3bc31a430954d8326605fc690ed22f4d-Paper.pdf
https://openreview.net/forum?id=Fj6kQJbHwM9
https://papers.nips.cc/paper_files/paper/2021/file/3bc31a430954d8326605fc690ed22f4d-Supplemental.pdf
We propose a framework for comparing data manifolds, aimed, in particular, towards the evaluation of deep generative models. We describe a novel tool, Cross-Barcode(P,Q), that, given a pair of distributions in a high-dimensional space, tracks multiscale topological and spatial discrepancies between manifolds on which the distributions are concentrated. Based on the Cross-Barcode, we introduce the Manifold Topology Divergence score (MTop-Divergence) and apply it to assess the performance of deep generative models in various domains: images, 3D-shapes, time-series, and on different datasets: MNIST, Fashion MNIST, SVHN, CIFAR10, FFHQ, market stock data, ShapeNet. We demonstrate that the MTop-Divergence accurately detects various degrees of mode-dropping, intra-mode collapse, mode invention, and image disturbance. Our algorithm scales well (essentially linearly) with the increase of the dimension of the ambient high-dimensional space. It is one of the first TDA-based methodologies that can be applied universally to datasets of different sizes and dimensions, including the ones on which the most recent GANs in the visual domain are trained. The proposed method is domain agnostic and does not rely on pre-trained networks.
null
Weak-shot Fine-grained Classification via Similarity Transfer
https://papers.nips.cc/paper_files/paper/2021/hash/3bd4017318837e92a66298c7855f4427-Abstract.html
Junjie Chen, Li Niu, Liu Liu, Liqing Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/3bd4017318837e92a66298c7855f4427-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12182-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3bd4017318837e92a66298c7855f4427-Paper.pdf
https://openreview.net/forum?id=vrXuRmaU_jM
https://papers.nips.cc/paper_files/paper/2021/file/3bd4017318837e92a66298c7855f4427-Supplemental.pdf
Recognizing fine-grained categories remains a challenging task, due to the subtle distinctions among different subordinate categories, which results in the need for abundant annotated samples. To alleviate the data-hungry problem, we consider the problem of learning novel categories from web data with the support of a clean set of base categories, which is referred to as weak-shot learning. In this setting, we propose a method called SimTrans to transfer pairwise semantic similarity from base categories to novel categories. Specifically, we first train a similarity net on clean data, and then leverage the transferred similarity to denoise web training data using two simple yet effective strategies. In addition, we apply an adversarial loss on the similarity net to enhance the transferability of similarity. Comprehensive experiments demonstrate the effectiveness of our weak-shot setting and our SimTrans method.
null
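The weak-shot entry above denoises web training data using pairwise similarities transferred from clean base categories. The sketch below shows one simple way such similarities could be turned into per-sample training weights, with a cosine similarity standing in for the learned similarity net; this is an illustrative strategy, not necessarily the paper's exact one.

```python
import numpy as np

def similarity_weights(web_feats, sim_fn):
    """Score each web sample by its average similarity to the other samples of the
    same (noisy) class, then normalize into [0, 1] training weights."""
    n = len(web_feats)
    sims = np.array([[sim_fn(web_feats[i], web_feats[j]) for j in range(n)]
                     for i in range(n)])
    score = (sims.sum(axis=1) - sims.diagonal()) / (n - 1)
    score = np.clip(score, 0.0, None)
    return score / (score.max() + 1e-12)

cosine = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
web = np.random.default_rng(0).normal(size=(6, 16))
print(similarity_weights(web, cosine).round(2))
```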
Shape your Space: A Gaussian Mixture Regularization Approach to Deterministic Autoencoders
https://papers.nips.cc/paper_files/paper/2021/hash/3c057cb2b41f22c0e740974d7a428918-Abstract.html
Amrutha Saseendran, Kathrin Skubch, Stefan Falkner, Margret Keuper
https://papers.nips.cc/paper_files/paper/2021/hash/3c057cb2b41f22c0e740974d7a428918-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12183-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3c057cb2b41f22c0e740974d7a428918-Paper.pdf
https://openreview.net/forum?id=WybjTtCKfGi
https://papers.nips.cc/paper_files/paper/2021/file/3c057cb2b41f22c0e740974d7a428918-Supplemental.pdf
Variational Autoencoders (VAEs) are powerful probabilistic models to learn representations of complex data distributions. One important limitation of VAEs is the strong prior assumption that latent representations learned by the model follow a simple uni-modal Gaussian distribution. Further, the variational training procedure poses considerable practical challenges. Recently proposed regularized autoencoders offer a deterministic autoencoding framework, that simplifies the original VAE objective and is significantly easier to train. Since these models only provide weak control over the learned latent distribution, they require an ex-post density estimation step to generate samples comparable to those of VAEs. In this paper, we propose a simple and end-to-end trainable deterministic autoencoding framework, that efficiently shapes the latent space of the model during training and utilizes the capacity of expressive multi-modal latent distributions. The proposed training procedure provides direct evidence if the latent distribution adequately captures complex aspects of the encoded data. We show in experiments the expressiveness and sample quality of our model in various challenging continuous and discrete domains. An implementation is available at https://github.com/boschresearch/GMM_DAE.
null
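The deterministic-autoencoder entry above shapes the latent space toward a multi-modal Gaussian mixture during training. As a stand-in for that regularizer, the sketch below evaluates the average negative log-likelihood of a batch of latent codes under a fixed equal-weight spherical mixture; the paper's actual matching objective and its gradient-based use are not reproduced here.

```python
import numpy as np

def gmm_negative_log_likelihood(latents, means, sigma=1.0):
    """Average NLL of latent codes under an equal-weight spherical Gaussian mixture
    with the given component means (illustrative regularizer value)."""
    d = latents.shape[1]
    diff = latents[:, None, :] - means[None, :, :]                        # (N, K, D)
    log_comp = (-0.5 * np.sum(diff ** 2, axis=-1) / sigma ** 2
                - 0.5 * d * np.log(2 * np.pi * sigma ** 2))               # (N, K)
    m = log_comp.max(axis=1, keepdims=True)
    log_mix = np.log(np.mean(np.exp(log_comp - m), axis=1)) + m[:, 0]     # log-mean-exp
    return -float(np.mean(log_mix))

rng = np.random.default_rng(0)
print(gmm_negative_log_likelihood(rng.normal(size=(32, 4)), rng.normal(size=(3, 4))))
```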
An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning
https://papers.nips.cc/paper_files/paper/2021/hash/3c63ec7be1b6c49e6c308397023fd8cd-Abstract.html
Blake E. Woodworth, Nathan Srebro
https://papers.nips.cc/paper_files/paper/2021/hash/3c63ec7be1b6c49e6c308397023fd8cd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12184-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3c63ec7be1b6c49e6c308397023fd8cd-Paper.pdf
https://openreview.net/forum?id=iBHiqlbFvLb
https://papers.nips.cc/paper_files/paper/2021/file/3c63ec7be1b6c49e6c308397023fd8cd-Supplemental.pdf
We present and analyze an algorithm for optimizing smooth and convex or strongly convex objectives using minibatch stochastic gradient estimates. The algorithm is optimal with respect to its dependence on both the minibatch size and minimum expected loss simultaneously. This improves over the optimal method of Lan, which is insensitive to the minimum expected loss; over the optimistic acceleration of Cotter et al., which has suboptimal dependence on the minibatch size; and over the algorithm of Liu and Belkin, which is limited to least squares problems and is also similarly suboptimal. Applied to interpolation learning, the improvement over Cotter et al.~and Liu and Belkin translates to a linear, rather than square-root, parallelization speedup.
null
Indexed Minimum Empirical Divergence for Unimodal Bandits
https://papers.nips.cc/paper_files/paper/2021/hash/3c88c1db16b9523b4dcdcd572aa1e16a-Abstract.html
Hassan SABER, Pierre Ménard, Odalric-Ambrym Maillard
https://papers.nips.cc/paper_files/paper/2021/hash/3c88c1db16b9523b4dcdcd572aa1e16a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12185-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3c88c1db16b9523b4dcdcd572aa1e16a-Paper.pdf
https://openreview.net/forum?id=5J9sbGwZ9bC
https://papers.nips.cc/paper_files/paper/2021/file/3c88c1db16b9523b4dcdcd572aa1e16a-Supplemental.pdf
We consider a stochastic multi-armed bandit problem specified by a set of one-dimensional exponential family distributions endowed with a unimodal structure. The unimodal structure is of practical relevance for several applications. We introduce IMED-UB, an algorithm that provably optimally exploits the unimodal structure by adapting to this setting the Indexed Minimum Empirical Divergence (IMED) algorithm introduced by Honda and Takemura (2015). Owing to our proof technique, we are able to provide a concise finite-time analysis of the IMED-UB algorithm that is simple and yet yields asymptotic optimality. We finally provide numerical experiments showing that IMED-UB competes favorably with the recently introduced state-of-the-art algorithms.
null
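As a rough illustration of the index behind IMED-UB: for Bernoulli arms, the IMED index of Honda and Takemura is $N_a\,\mathrm{kl}(\hat\mu_a, \hat\mu^*) + \log N_a$, and the unimodal structure lets the algorithm restrict its choice to the empirical best arm and its neighbours. The sketch below is a hedged reading under those assumptions for a line graph, not the authors' implementation.

```python
# Sketch of an IMED-style index for Bernoulli arms on a line graph, where the
# unimodal structure lets us restrict attention to the empirical best arm and
# its immediate neighbours. Illustrative only, not the authors' code.
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def imed_ub_choice(counts, sums):
    """counts[a], sums[a]: pulls and cumulative reward of arm a (all counts > 0)."""
    means = sums / counts
    best = int(np.argmax(means))
    # Unimodal structure on a line: candidates are the leader and its neighbours.
    candidates = [a for a in (best - 1, best, best + 1) if 0 <= a < len(counts)]
    index = [counts[a] * kl_bernoulli(means[a], means[best]) + np.log(counts[a])
             for a in candidates]
    return candidates[int(np.argmin(index))]

# Toy usage: 5 Bernoulli arms with unimodal means peaking at arm 2.
rng = np.random.default_rng(0)
mu = np.array([0.2, 0.5, 0.8, 0.5, 0.2])
counts, sums = np.ones(5), rng.binomial(1, mu).astype(float)  # one pull per arm
for t in range(5000):
    a = imed_ub_choice(counts, sums)
    counts[a] += 1
    sums[a] += rng.binomial(1, mu[a])
print(counts)   # most pulls should concentrate on arm 2
```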
SOAT: A Scene- and Object-Aware Transformer for Vision-and-Language Navigation
https://papers.nips.cc/paper_files/paper/2021/hash/3c8a49145944fed2bbcaade178a426c4-Abstract.html
Abhinav Moudgil, Arjun Majumdar, Harsh Agrawal, Stefan Lee, Dhruv Batra
https://papers.nips.cc/paper_files/paper/2021/hash/3c8a49145944fed2bbcaade178a426c4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12186-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3c8a49145944fed2bbcaade178a426c4-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=E5EoQqCVYX
https://papers.nips.cc/paper_files/paper/2021/file/3c8a49145944fed2bbcaade178a426c4-Supplemental.pdf
Natural language instructions for visual navigation often use scene descriptions (e.g., bedroom) and object references (e.g., green chairs) to provide a breadcrumb trail to a goal location. This work presents a transformer-based vision-and-language navigation (VLN) agent that uses two different visual encoders -- a scene classification network and an object detector -- which produce features that match these two distinct types of visual cues. In our method, scene features contribute high-level contextual information that supports object-level processing. With this design, our model is able to use vision-and-language pretraining (i.e., learning the alignment between images and text from large-scale web data) to substantially improve performance on the Room-to-Room (R2R) and Room-Across-Room (RxR) benchmarks. Specifically, our approach leads to improvements of 1.8% absolute in SPL on R2R and 3.7% absolute in SR on RxR. Our analysis reveals even larger gains for navigation instructions that contain six or more object references, which further suggests that our approach is better able to use object features and align them to references in the instructions.
null
A Normative and Biologically Plausible Algorithm for Independent Component Analysis
https://papers.nips.cc/paper_files/paper/2021/hash/3ce3bd7d63a2c9c81983cc8e9bd02ae5-Abstract.html
Yanis Bahroun, Dmitri Chklovskii, Anirvan Sengupta
https://papers.nips.cc/paper_files/paper/2021/hash/3ce3bd7d63a2c9c81983cc8e9bd02ae5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12187-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3ce3bd7d63a2c9c81983cc8e9bd02ae5-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=fpvUKdqcPV
https://papers.nips.cc/paper_files/paper/2021/file/3ce3bd7d63a2c9c81983cc8e9bd02ae5-Supplemental.pdf
The brain effortlessly solves blind source separation (BSS) problems, but the algorithm it uses remains elusive. In signal processing, linear BSS problems are often solved by Independent Component Analysis (ICA). To serve as a model of a biological circuit, the ICA neural network (NN) must satisfy at least the following requirements: 1. The algorithm must operate in the online setting where data samples are streamed one at a time, and the NN computes the sources on the fly without storing any significant fraction of the data in memory. 2. The synaptic weight update is local, i.e., it depends only on the biophysical variables present in the vicinity of a synapse. Here, we propose a novel objective function for ICA from which we derive a biologically plausible NN, including both the neural architecture and the synaptic learning rules. Interestingly, our algorithm relies on modulating synaptic plasticity by the total activity of the output neurons. In the brain, this could be accomplished by neuromodulators, extracellular calcium, local field potential, or nitric oxide.
null
Regret Bounds for Gaussian-Process Optimization in Large Domains
https://papers.nips.cc/paper_files/paper/2021/hash/3cec07e9ba5f5bb252d13f5f431e4bbb-Abstract.html
Manuel Wuethrich, Bernhard Schölkopf, Andreas Krause
https://papers.nips.cc/paper_files/paper/2021/hash/3cec07e9ba5f5bb252d13f5f431e4bbb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12188-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3cec07e9ba5f5bb252d13f5f431e4bbb-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=eb0angdXVfR
https://papers.nips.cc/paper_files/paper/2021/file/3cec07e9ba5f5bb252d13f5f431e4bbb-Supplemental.pdf
The goal of this paper is to characterize Gaussian-Process optimization in the setting where the function domain is large relative to the number of admissible function evaluations, i.e., where it is impossible to find the global optimum. We provide upper bounds on the suboptimality (Bayesian simple regret) of the solution found by optimization strategies that are closely related to the widely used expected improvement (EI) and upper confidence bound (UCB) algorithms. These regret bounds illuminate the relationship between the number of evaluations, the domain size (i.e., cardinality of finite domains / Lipschitz constant of the covariance function in continuous domains), and the optimality of the retrieved function value. In particular, we show that even when the number of evaluations is far too small to find the global optimum, we can find nontrivial function values (e.g., values that achieve a certain ratio with the optimal value).
null
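For context on the kind of strategy the bounds cover, here is a textbook GP-UCB loop on a discretized 1-D domain with a small evaluation budget, using scikit-learn. The strategies analyzed in the paper are close relatives of EI and UCB, so this sketch should not be read as the paper's exact procedure; the kernel, beta, and domain are assumptions.

```python
# Textbook GP-UCB loop on a discretized 1-D domain (illustrative, not the
# paper's exact strategy). Requires scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                                   # unknown black-box function
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

rng = np.random.default_rng(0)
domain = np.linspace(-2.0, 2.0, 400).reshape(-1, 1)   # large finite domain
X = rng.uniform(-2, 2, size=(2, 1))                   # two initial evaluations
y = f(X).ravel()

beta = 2.0                                            # exploration parameter
for t in range(10):                                   # small evaluation budget
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
    gp.fit(X, y)
    mu, sigma = gp.predict(domain, return_std=True)
    x_next = domain[np.argmax(mu + beta * sigma)]     # UCB acquisition
    X = np.vstack([X, [x_next]])
    y = np.append(y, f(x_next))

print(float(y.max()))    # best value found under the limited budget
```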
Deeply Shared Filter Bases for Parameter-Efficient Convolutional Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/3cf2559725a9fdfa602ec8c887440f32-Abstract.html
Woochul Kang, Daeyeon Kim
https://papers.nips.cc/paper_files/paper/2021/hash/3cf2559725a9fdfa602ec8c887440f32-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12189-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3cf2559725a9fdfa602ec8c887440f32-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=Z2LtauFNu2r
https://papers.nips.cc/paper_files/paper/2021/file/3cf2559725a9fdfa602ec8c887440f32-Supplemental.pdf
Modern convolutional neural networks (CNNs) contain many identical convolution blocks, and, hence, recursive sharing of parameters across these blocks has been proposed to reduce the number of parameters. However, naive sharing of parameters poses many challenges, such as limited representational power and the vanishing/exploding gradients problem of recursively shared parameters. In this paper, we present a recursive convolution block design and training method in which a recursively shareable part, or a filter basis, is separated and learned while effectively avoiding the vanishing/exploding gradients problem during training. We show that the vanishing/exploding gradients problem can be controlled by enforcing the elements of the filter basis to be orthonormal, and empirically demonstrate that the proposed orthogonality regularization improves the flow of gradients during training. Experimental results on image classification and object detection show that our approach, unlike previous parameter-sharing approaches, does not trade performance to save parameters and consistently outperforms overparameterized counterpart networks. This superior performance demonstrates that the proposed recursive convolution block design and the orthogonality regularization not only prevent performance degradation, but also consistently improve the representation capability while a significant number of parameters are recursively shared.
null
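The abstract attributes the trainability of the shared filter basis to an orthogonality regularization. One common way to express such a penalty, which may or may not match the paper's exact formulation, is the squared Frobenius norm of $BB^\top - I$ for the flattened basis $B$; a PyTorch sketch follows.

```python
# Orthonormality penalty on a shared filter basis, as one common formulation
# (the paper's exact regularizer may differ). Requires PyTorch.
import torch

def orthonormality_penalty(basis: torch.Tensor) -> torch.Tensor:
    """basis: (num_filters, k*k*channels) flattened filter basis."""
    gram = basis @ basis.t()                          # pairwise inner products
    eye = torch.eye(basis.shape[0], device=basis.device)
    return ((gram - eye) ** 2).sum()                  # ||B B^T - I||_F^2

# Toy usage inside a training step: shared 3x3 basis of 16 filters, 8 channels.
basis = torch.nn.Parameter(torch.randn(16, 3 * 3 * 8) * 0.1)
task_loss = torch.tensor(0.0)                         # placeholder for the real loss
loss = task_loss + 1e-2 * orthonormality_penalty(basis)
loss.backward()
print(basis.grad.shape)                               # gradients flow to the basis
```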
On Optimal Robustness to Adversarial Corruption in Online Decision Problems
https://papers.nips.cc/paper_files/paper/2021/hash/3d191ef6e236bd1b9bdb9ff4743c47fe-Abstract.html
Shinji Ito
https://papers.nips.cc/paper_files/paper/2021/hash/3d191ef6e236bd1b9bdb9ff4743c47fe-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12190-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3d191ef6e236bd1b9bdb9ff4743c47fe-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=iCoK73Q9TW2
https://papers.nips.cc/paper_files/paper/2021/file/3d191ef6e236bd1b9bdb9ff4743c47fe-Supplemental.pdf
This paper considers two fundamental sequential decision-making problems: the problem of prediction with expert advice and the multi-armed bandit problem. We focus on stochastic regimes in which an adversary may corrupt losses, and we investigate what level of robustness can be achieved against adversarial corruption. The main contribution of this paper is to show that optimal robustness can be expressed by a square-root dependency on the amount of corruption. More precisely, we show that two classes of algorithms, anytime Hedge with decreasing learning rate and algorithms with second-order regret bounds, achieve $O( \frac{\log N}{\Delta} + \sqrt{ \frac{C \log N }{\Delta} } )$-regret, where $N, \Delta$, and $C$ represent the number of experts, the gap parameter, and the corruption level, respectively. We further provide a matching lower bound, which means that this regret bound is tight up to a constant factor. For the multi-armed bandit problem, we also provide a nearly-tight lower bound up to a logarithmic factor.
null
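One of the two algorithm classes shown to attain the stated bound is anytime Hedge with a decreasing learning rate. A minimal sketch of that baseline follows, using the common schedule eta_t = sqrt(log N / t), which is an assumption here rather than the paper's exact constants.

```python
# Anytime Hedge with a decreasing learning rate eta_t = sqrt(log(N) / t).
# The schedule is a standard choice and may differ from the paper's constants.
import numpy as np

rng = np.random.default_rng(0)
N, T = 10, 5000
cum_loss = np.zeros(N)                        # cumulative loss of each expert
total_alg_loss = 0.0

for t in range(1, T + 1):
    eta = np.sqrt(np.log(N) / t)              # decreasing learning rate
    w = np.exp(-eta * (cum_loss - cum_loss.min()))   # stabilized exponential weights
    p = w / w.sum()                           # distribution over experts
    losses = rng.uniform(0, 1, size=N)        # environment reveals per-expert losses
    losses[0] *= 0.5                          # expert 0 is better on average
    total_alg_loss += p @ losses
    cum_loss += losses

regret = total_alg_loss - cum_loss.min()
print(regret)                                 # grows sublinearly in T
```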
Directed Spectrum Measures Improve Latent Network Models Of Neural Populations
https://papers.nips.cc/paper_files/paper/2021/hash/3d36c07721a0a5a96436d6c536a132ec-Abstract.html
Neil Gallagher, Kafui Dzirasa, David Carlson
https://papers.nips.cc/paper_files/paper/2021/hash/3d36c07721a0a5a96436d6c536a132ec-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12191-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3d36c07721a0a5a96436d6c536a132ec-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=AhlzUugOFIo
https://papers.nips.cc/paper_files/paper/2021/file/3d36c07721a0a5a96436d6c536a132ec-Supplemental.pdf
Systems neuroscience aims to understand how networks of neurons distributed throughout the brain mediate computational tasks. One popular approach to identify those networks is to first calculate measures of neural activity (e.g. power spectra) from multiple brain regions, and then apply a linear factor model to those measures. Critically, despite the established role of directed communication between brain regions in neural computation, measures of directed communication have been rarely utilized in network estimation because they are incompatible with the implicit assumptions of the linear factor model approach. Here, we develop a novel spectral measure of directed communication called the Directed Spectrum (DS). We prove that it is compatible with the implicit assumptions of linear factor models, and we provide a method to estimate the DS. We demonstrate that latent linear factor models of DS measures better capture underlying brain networks in both simulated and real neural recording data compared to available alternatives. Thus, linear factor models of the Directed Spectrum offer neuroscientists a simple and effective way to explicitly model directed communication in networks of neural populations.
null
Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble
https://papers.nips.cc/paper_files/paper/2021/hash/3d3d286a8d153a4a58156d0e02d8570c-Abstract.html
Gaon An, Seungyong Moon, Jang-Hyun Kim, Hyun Oh Song
https://papers.nips.cc/paper_files/paper/2021/hash/3d3d286a8d153a4a58156d0e02d8570c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12192-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3d3d286a8d153a4a58156d0e02d8570c-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=ZUvaSolQZh3
https://papers.nips.cc/paper_files/paper/2021/file/3d3d286a8d153a4a58156d0e02d8570c-Supplemental.pdf
Offline reinforcement learning (offline RL), which aims to find an optimal policy from a previously collected static dataset, bears algorithmic difficulties due to function approximation errors from out-of-distribution (OOD) data points. To address this, offline RL algorithms adopt either a constraint or a penalty term that explicitly guides the policy to stay close to the given dataset. However, prior methods typically require accurate estimation of the behavior policy or sampling from OOD data points, which can themselves be non-trivial problems. Moreover, these methods under-utilize the generalization ability of deep neural networks and often fall into suboptimal solutions too close to the given dataset. In this work, we propose an uncertainty-based offline RL method that takes into account the confidence of the Q-value prediction and does not require any estimation or sampling of the data distribution. We show that clipped Q-learning, a technique widely used in online RL, can be leveraged to successfully penalize OOD data points with high prediction uncertainties. Surprisingly, we find that it is possible to substantially outperform existing offline RL methods on various tasks by simply increasing the number of Q-networks along with the clipped Q-learning. Based on this observation, we propose an ensemble-diversified actor-critic algorithm that reduces the number of required ensemble networks down to a tenth compared to the naive ensemble while achieving state-of-the-art performance on most of the D4RL benchmarks considered.
null
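The mechanism the abstract highlights is clipped Q-learning extended from a pair of critics to an ensemble of N Q-networks: the Bellman target takes the minimum over the ensemble, penalizing actions on which the critics disagree. The PyTorch sketch below shows only that target computation; actor updates, target networks, and the diversification term are omitted, and the network sizes are assumptions.

```python
# Clipped Q-learning target with an ensemble of N critics: the target uses the
# minimum over the ensemble, penalizing actions with high uncertainty.
# Sketch only; entropy terms and target networks are left out.
import torch
import torch.nn as nn

N, state_dim, action_dim, gamma = 10, 17, 6, 0.99

def make_critic():
    return nn.Sequential(nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
                         nn.Linear(256, 1))

critics = nn.ModuleList([make_critic() for _ in range(N)])

def clipped_target(reward, next_state, next_action, done):
    sa = torch.cat([next_state, next_action], dim=-1)
    q_all = torch.stack([c(sa) for c in critics], dim=0)   # (N, batch, 1)
    q_min = q_all.min(dim=0).values                        # elementwise min over ensemble
    return reward + gamma * (1.0 - done) * q_min

# Toy batch.
batch = 32
target = clipped_target(torch.randn(batch, 1), torch.randn(batch, state_dim),
                        torch.randn(batch, action_dim), torch.zeros(batch, 1))
print(target.shape)    # (32, 1)
```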
Distribution-free inference for regression: discrete, continuous, and in between
https://papers.nips.cc/paper_files/paper/2021/hash/3d4893419e57449fb290647149f738d4-Abstract.html
Yonghoon Lee, Rina Barber
https://papers.nips.cc/paper_files/paper/2021/hash/3d4893419e57449fb290647149f738d4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12193-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3d4893419e57449fb290647149f738d4-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=q88AMOYEKLa
https://papers.nips.cc/paper_files/paper/2021/file/3d4893419e57449fb290647149f738d4-Supplemental.pdf
In data analysis problems where we are not able to rely on distributional assumptions, what types of inference guarantees can still be obtained? Many popular methods, such as holdout methods, cross-validation methods, and conformal prediction, are able to provide distribution-free guarantees for predictive inference, but the problem of providing inference for the underlying regression function (for example, inference on the conditional mean $\mathbb{E}[Y|X]$) is more challenging. In the setting where the features $X$ are continuously distributed, recent work has established that any confidence interval for $\mathbb{E}[Y|X]$ must have non-vanishing width, even as sample size tends to infinity. At the other extreme, if $X$ takes only a small number of possible values, then inference on $\mathbb{E}[Y|X]$ is trivial to achieve. In this work, we study the problem in settings in between these two extremes. We find that there are several distinct regimes in between the finite setting and the continuous setting, where vanishing-width confidence intervals are achievable if and only if the effective support size of the distribution of $X$ is smaller than the square of the sample size.
null
Statistical Inference with M-Estimators on Adaptively Collected Data
https://papers.nips.cc/paper_files/paper/2021/hash/3d7d9461075eb7c37fbbfcad1d7042c1-Abstract.html
Kelly Zhang, Lucas Janson, Susan Murphy
https://papers.nips.cc/paper_files/paper/2021/hash/3d7d9461075eb7c37fbbfcad1d7042c1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12194-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3d7d9461075eb7c37fbbfcad1d7042c1-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=TJOQw_vMlAj
https://papers.nips.cc/paper_files/paper/2021/file/3d7d9461075eb7c37fbbfcad1d7042c1-Supplemental.pdf
Bandit algorithms are increasingly used in real-world sequential decision-making problems. Associated with this is an increased desire to be able to use the resulting datasets to answer scientific questions like: Did one type of ad lead to more purchases? In which contexts is a mobile health intervention effective? However, classical statistical approaches fail to provide valid confidence intervals when used with data collected with bandit algorithms. Alternative methods have recently been developed for simple models (e.g., comparison of means). Yet there is a lack of general methods for conducting statistical inference using more complex models on data collected with (contextual) bandit algorithms; for example, current methods cannot be used for valid inference on parameters in a logistic regression model for a binary reward. In this work, we develop theory justifying the use of M-estimators---which includes estimators based on empirical risk minimization as well as maximum likelihood---on data collected with adaptive algorithms, including (contextual) bandit algorithms. Specifically, we show that M-estimators, modified with particular adaptive weights, can be used to construct asymptotically valid confidence regions for a variety of inferential targets.
null
NeuroLKH: Combining Deep Learning Model with Lin-Kernighan-Helsgaun Heuristic for Solving the Traveling Salesman Problem
https://papers.nips.cc/paper_files/paper/2021/hash/3d863b367aa379f71c7afc0c9cdca41d-Abstract.html
Liang Xin, Wen Song, Zhiguang Cao, Jie Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/3d863b367aa379f71c7afc0c9cdca41d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12195-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3d863b367aa379f71c7afc0c9cdca41d-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=VKVShLsAuZ
https://papers.nips.cc/paper_files/paper/2021/file/3d863b367aa379f71c7afc0c9cdca41d-Supplemental.pdf
We present NeuroLKH, a novel algorithm that combines deep learning with the strong traditional heuristic Lin-Kernighan-Helsgaun (LKH) for solving the Traveling Salesman Problem (TSP). Specifically, we train a Sparse Graph Network (SGN) with supervised learning for edge scores and unsupervised learning for node penalties, both of which are critical for improving the performance of LKH. Based on the output of SGN, NeuroLKH creates the edge candidate set and transforms edge distances to guide the search process of LKH. Extensive experiments firmly demonstrate that, by training one model on a wide range of problem sizes, NeuroLKH significantly outperforms LKH and generalizes well to much larger sizes. Also, we show that NeuroLKH can be applied to other routing problems such as the Capacitated Vehicle Routing Problem (CVRP), the Pickup and Delivery Problem (PDP), and CVRP with Time Windows (CVRPTW).
null
LSH-SMILE: Locality Sensitive Hashing Accelerated Simulation and Learning
https://papers.nips.cc/paper_files/paper/2021/hash/3d98b79ac6c8d1cef43d7bf1dadf8647-Abstract.html
Chonghao Sima, Yexiang Xue
https://papers.nips.cc/paper_files/paper/2021/hash/3d98b79ac6c8d1cef43d7bf1dadf8647-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12196-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3d98b79ac6c8d1cef43d7bf1dadf8647-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=Pf9RjFoUdLZ
https://papers.nips.cc/paper_files/paper/2021/file/3d98b79ac6c8d1cef43d7bf1dadf8647-Supplemental.zip
The advancement of deep neural networks over the last decade has enabled progress in scientific knowledge discovery in the form of learning Partial Differential Equations (PDEs) directly from experimental data. Nevertheless, forward simulation and backward learning of large-scale dynamic systems require handling billions of mutually interacting elements, a scale that overwhelms current computing architectures. We propose Locality Sensitive Hashing Accelerated Simulation and Learning (LSH-SMILE), a unified framework to scale up both forward simulation and backward learning of physics systems. LSH-SMILE takes advantage of (i) the locality of PDE updates and (ii) the similar temporal dynamics shared by multiple elements. LSH-SMILE hashes elements with similar dynamics into a single hash bucket and handles their updates at once. This allows LSH-SMILE to scale with respect to the number of non-empty hash buckets, a drastic improvement over conventional approaches. Theoretically, we prove a novel bound on the errors introduced by LSH-SMILE. Experimentally, we demonstrate that LSH-SMILE simulates physics systems at comparable quality to exact approaches, but with substantially lower time and space complexity. These savings also translate to better learning performance due to LSH-SMILE's ability to propagate gradients over a long duration.
null
Meta-learning with an Adaptive Task Scheduler
https://papers.nips.cc/paper_files/paper/2021/hash/3dc4876f3f08201c7c76cb71fa1da439-Abstract.html
Huaxiu Yao, Yu Wang, Ying Wei, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, Chelsea Finn
https://papers.nips.cc/paper_files/paper/2021/hash/3dc4876f3f08201c7c76cb71fa1da439-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12197-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3dc4876f3f08201c7c76cb71fa1da439-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=MTs2adH_Qq
https://papers.nips.cc/paper_files/paper/2021/file/3dc4876f3f08201c7c76cb71fa1da439-Supplemental.pdf
To benefit the learning of a new task, meta-learning has been proposed to transfer a well-generalized meta-model learned from various meta-training tasks. Existing meta-learning algorithms randomly sample meta-training tasks with uniform probability, under the assumption that all tasks are equally important. However, tasks may be detrimental, for example when they are noisy or when the limited pool of meta-training tasks is imbalanced. To prevent the meta-model from being corrupted by such detrimental tasks or dominated by tasks in the majority, in this paper, we propose an adaptive task scheduler (ATS) for the meta-training process. In ATS, for the first time, we design a neural scheduler that decides which meta-training tasks to use next by predicting the probability of each candidate task being sampled, and we train the scheduler to optimize the generalization capacity of the meta-model to unseen tasks. We identify two meta-model-related factors as the input of the neural scheduler, which characterize the difficulty of a candidate task for the meta-model. Theoretically, we show that a scheduler taking the two factors into account improves the meta-training loss and also the optimization landscape. Under the setting of meta-learning with noise and limited budgets, ATS improves performance on both miniImageNet and a real-world drug discovery benchmark by up to 13% and 18%, respectively, compared to state-of-the-art task schedulers.
null
Neural Active Learning with Performance Guarantees
https://papers.nips.cc/paper_files/paper/2021/hash/3dcaf04c357c577a857f3ffadc555f9b-Abstract.html
Zhilei Wang, Pranjal Awasthi, Christoph Dann, Ayush Sekhari, Claudio Gentile
https://papers.nips.cc/paper_files/paper/2021/hash/3dcaf04c357c577a857f3ffadc555f9b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12198-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3dcaf04c357c577a857f3ffadc555f9b-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=iPHnzuU6S94
https://papers.nips.cc/paper_files/paper/2021/file/3dcaf04c357c577a857f3ffadc555f9b-Supplemental.pdf
We investigate the problem of active learning in the streaming setting in non-parametric regimes, where the labels are stochastically generated from a class of functions on which we make no assumptions whatsoever. We rely on recently proposed Neural Tangent Kernel (NTK) approximation tools to construct a suitable neural embedding that determines both the feature space the algorithm operates on and the learned model computed on top of it. Since the shape of the label-requesting threshold is tightly related to the complexity of the function to be learned, which is a priori unknown, we also derive a version of the algorithm that is agnostic to any prior knowledge. This algorithm relies on a regret-balancing scheme to solve the resulting online model selection problem, and is computationally efficient. We prove joint guarantees on the cumulative regret and the number of requested labels, which depend on the complexity of the labeling function at hand. In the linear case, these guarantees recover known minimax results on the generalization error as a function of the label complexity in a standard statistical learning setting.
null
A Gradient Method for Multilevel Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/3de568f8597b94bda53149c7d7f5958c-Abstract.html
Ryo Sato, Mirai Tanaka, Akiko Takeda
https://papers.nips.cc/paper_files/paper/2021/hash/3de568f8597b94bda53149c7d7f5958c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12199-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3de568f8597b94bda53149c7d7f5958c-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=-sQ1LLWIAAJ
https://papers.nips.cc/paper_files/paper/2021/file/3de568f8597b94bda53149c7d7f5958c-Supplemental.pdf
Although applications of multilevel optimization have been discussed since the 1990s, the development of solution methods has been largely limited to bilevel cases due to the difficulty of the problem. In recent years, in machine learning, Franceschi et al. proposed a method for solving bilevel optimization problems by replacing the lower-level problem with $T$ steepest-descent update equations for some prechosen iteration number $T$. In this paper, we develop a gradient-based algorithm for multilevel optimization with $n$ levels based on their idea and prove that our reformulation asymptotically converges to the original multilevel problem. To the best of our knowledge, this is one of the first algorithms with a theoretical guarantee for multilevel optimization. Numerical experiments show that a trilevel hyperparameter learning model that accounts for data poisoning produces more stable prediction results than an existing bilevel hyperparameter learning model in noisy data settings.
null
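The construction builds on Franceschi et al.'s idea of replacing a lower-level problem with $T$ steepest-descent updates and differentiating through them; the paper extends this recursively to $n$ levels. A minimal bilevel (two-level) PyTorch sketch of that unrolling step is shown below; the multilevel recursion itself is not reproduced here, and the inner learning rate and $T$ are assumptions.

```python
# Differentiating through T unrolled inner gradient steps (bilevel case only).
# Upper level: choose hyperparameter lam; lower level: ridge-regularized fit.
import torch

torch.manual_seed(0)
X = torch.randn(50, 5)
y = X @ torch.randn(5) + 0.1 * torch.randn(50)
X_val, y_val = torch.randn(20, 5), torch.randn(20)

lam = torch.tensor(0.1, requires_grad=True)       # upper-level variable
T, inner_lr = 20, 0.05

def inner_solution(lam):
    w = torch.zeros(5, requires_grad=True)
    for _ in range(T):                            # T steepest-descent updates
        inner_loss = ((X @ w - y) ** 2).mean() + lam * (w ** 2).sum()
        (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
        w = w - inner_lr * g                      # keep the graph for the outer gradient
    return w

w_T = inner_solution(lam)
outer_loss = ((X_val @ w_T - y_val) ** 2).mean()  # upper-level objective
(grad_lam,) = torch.autograd.grad(outer_loss, lam)
print(float(grad_lam))                            # hypergradient w.r.t. lam
```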
Edge Representation Learning with Hypergraphs
https://papers.nips.cc/paper_files/paper/2021/hash/3def184ad8f4755ff269862ea77393dd-Abstract.html
Jaehyeong Jo, Jinheon Baek, Seul Lee, Dongki Kim, Minki Kang, Sung Ju Hwang
https://papers.nips.cc/paper_files/paper/2021/hash/3def184ad8f4755ff269862ea77393dd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12200-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3def184ad8f4755ff269862ea77393dd-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=vwgsqRorzz
null
Graph neural networks have recently achieved remarkable success in representing graph-structured data, with rapid progress in both node embedding and graph pooling methods. Yet, they mostly focus on capturing information from the nodes, considering their connectivity, and comparatively little work has been done on representing the edges, which are essential components of a graph. However, for tasks such as graph reconstruction and generation, as well as graph classification tasks for which the edges are important for discrimination, accurately representing the edges of a given graph is crucial to the success of graph representation learning. To this end, we propose a novel edge representation learning framework based on Dual Hypergraph Transformation (DHT), which transforms the edges of a graph into the nodes of a hypergraph. This dual hypergraph construction allows us to apply message-passing techniques for node representations to edges. After obtaining edge representations from the hypergraphs, we then cluster or drop edges to obtain holistic graph-level edge representations. We validate our edge representation learning method with hypergraphs on diverse graph datasets for graph representation and generation performance, on which our method largely outperforms existing graph representation learning methods. Moreover, our edge representation learning and pooling method also largely outperforms state-of-the-art graph pooling methods on graph classification, not only because of its accurate edge representation learning, but also due to its lossless compression of the nodes and removal of irrelevant edges for effective message-passing.
null
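On one reading of the Dual Hypergraph Transformation described above, each edge of the input graph becomes a node of the dual hypergraph and each original node becomes a hyperedge, so the dual incidence matrix is simply the transpose of the node-edge incidence matrix. A small NumPy sketch under that reading (not the authors' implementation):

```python
# Dual hypergraph construction sketch: edges of the input graph become nodes of
# the hypergraph, original nodes become hyperedges, so the incidence transposes.
# Illustrative reading of the transformation, not the authors' implementation.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]          # toy graph with 4 nodes, 4 edges
num_nodes, num_edges = 4, len(edges)

# Node-edge incidence of the original graph: M[v, e] = 1 iff node v touches edge e.
M = np.zeros((num_nodes, num_edges), dtype=int)
for e, (u, v) in enumerate(edges):
    M[u, e] = M[v, e] = 1

# Dual hypergraph: its "nodes" are the original edges, its hyperedges are the
# original nodes, and its incidence matrix is simply the transpose.
M_dual = M.T                                       # shape (num_edges, num_nodes)

# Node features of the dual hypergraph are the original edge features, so any
# node-level message-passing scheme applied to M_dual now learns edge embeddings.
edge_features = np.random.default_rng(0).normal(size=(num_edges, 8))
print(M_dual.shape, edge_features.shape)           # (4, 4) (4, 8)
```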
One Question Answering Model for Many Languages with Cross-lingual Dense Passage Retrieval
https://papers.nips.cc/paper_files/paper/2021/hash/3df07fdae1ab273a967aaa1d355b8bb6-Abstract.html
Akari Asai, Xinyan Yu, Jungo Kasai, Hanna Hajishirzi
https://papers.nips.cc/paper_files/paper/2021/hash/3df07fdae1ab273a967aaa1d355b8bb6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12201-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3df07fdae1ab273a967aaa1d355b8bb6-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=e8blYRui3j
https://papers.nips.cc/paper_files/paper/2021/file/3df07fdae1ab273a967aaa1d355b8bb6-Supplemental.pdf
We present Cross-lingual Open-Retrieval Answer Generation (CORA), the first unified many-to-many question answering (QA) model that can answer questions across many languages, even for ones without language-specific annotated data or knowledge sources. We introduce a new dense passage retrieval algorithm that is trained to retrieve documents across languages for a question. Combined with a multilingual autoregressive generation model, CORA answers directly in the target language without any translation or in-language retrieval modules as used in prior work. We propose an iterative training method that automatically extends annotated data available only in high-resource languages to low-resource ones. Our results show that CORA substantially outperforms the previous state of the art on multilingual open QA benchmarks across 26 languages, 9 of which are unseen during training. Our analyses show the significance of cross-lingual retrieval and generation in many languages, particularly under low-resource settings.
null
LEADS: Learning Dynamical Systems that Generalize Across Environments
https://papers.nips.cc/paper_files/paper/2021/hash/3df1d4b96d8976ff5986393e8767f5b2-Abstract.html
Yuan Yin, Ibrahim Ayed, Emmanuel de Bézenac, Nicolas Baskiotis, Patrick Gallinari
https://papers.nips.cc/paper_files/paper/2021/hash/3df1d4b96d8976ff5986393e8767f5b2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12202-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3df1d4b96d8976ff5986393e8767f5b2-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=HD6CxZtbmIx
https://papers.nips.cc/paper_files/paper/2021/file/3df1d4b96d8976ff5986393e8767f5b2-Supplemental.zip
When modeling dynamical systems from real-world data samples, the distribution of data often changes according to the environment in which they are captured, and the dynamics of the system itself vary from one environment to another. Generalizing across environments thus challenges conventional frameworks. The classical settings suggest either considering the data as i.i.d. and learning a single model to cover all situations, or learning environment-specific models. Both are sub-optimal: the former disregards the discrepancies between environments, leading to biased solutions, while the latter does not exploit their potential commonalities and is prone to data scarcity problems. We propose LEADS, a novel framework that leverages the commonalities and discrepancies among known environments to improve model generalization. This is achieved with a tailored training formulation aiming at capturing common dynamics within a shared model while additional terms capture environment-specific dynamics. We ground our approach in theory, exhibiting a decrease in sample complexity with respect to classical alternatives. We show how theory and practice coincide in the simplified case of linear dynamics. Moreover, we instantiate this framework for neural networks and evaluate it experimentally on representative families of nonlinear dynamics. We show that this new setting can exploit knowledge extracted from environment-dependent data and improves generalization for both known and novel environments.
null
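The training formulation sketched in the abstract decomposes each environment's dynamics into a shared component plus an environment-specific residual whose magnitude is penalized. The PyTorch sketch below illustrates that decomposition; the particular penalty and its weight are assumptions for illustration, not the paper's exact terms.

```python
# LEADS-style decomposition sketch: dynamics in environment e are modelled as
# f(x) + g_e(x), with a penalty keeping the residuals g_e small. The penalty
# form and weight here are assumptions for illustration.
import torch
import torch.nn as nn

num_envs, dim = 3, 4
shared = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
residuals = nn.ModuleList([
    nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
    for _ in range(num_envs)])

def predicted_derivative(x, env):
    return shared(x) + residuals[env](x)            # common + environment-specific part

def loss(batches, lam=1e-2):
    """batches[e] = (states, observed time-derivatives) for environment e."""
    total = 0.0
    for e, (x, dx) in enumerate(batches):
        total = total + ((predicted_derivative(x, e) - dx) ** 2).mean()
        total = total + lam * (residuals[e](x) ** 2).mean()   # keep g_e minimal
    return total

# Toy usage with random data standing in for trajectories.
torch.manual_seed(0)
batches = [(torch.randn(32, dim), torch.randn(32, dim)) for _ in range(num_envs)]
loss(batches).backward()
print(shared[0].weight.grad.norm().item() > 0)      # gradients reach the shared part
```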
Storchastic: A Framework for General Stochastic Automatic Differentiation
https://papers.nips.cc/paper_files/paper/2021/hash/3dfe2f633108d604df160cd1b01710db-Abstract.html
Emile Krieken, Jakub Tomczak, Annette Ten Teije
https://papers.nips.cc/paper_files/paper/2021/hash/3dfe2f633108d604df160cd1b01710db-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12203-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3dfe2f633108d604df160cd1b01710db-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=KAFyFabsK88
https://papers.nips.cc/paper_files/paper/2021/file/3dfe2f633108d604df160cd1b01710db-Supplemental.pdf
Modelers use automatic differentiation (AD) of computation graphs to implement complex Deep Learning models without defining gradient computations. Stochastic AD extends AD to stochastic computation graphs with sampling steps, which arise when modelers handle the intractable expectations common in Reinforcement Learning and Variational Inference. However, current methods for stochastic AD are limited: They are either only applicable to continuous random variables and differentiable functions, or can only use simple but high variance score-function estimators. To overcome these limitations, we introduce Storchastic, a new framework for AD of stochastic computation graphs. Storchastic allows the modeler to choose from a wide variety of gradient estimation methods at each sampling step, to optimally reduce the variance of the gradient estimates. Furthermore, Storchastic is provably unbiased for estimation of any-order gradients, and generalizes variance reduction techniques to higher-order gradient estimates. Finally, we implement Storchastic as a PyTorch library at github.com/HEmile/storchastic.
null
Concentration inequalities under sub-Gaussian and sub-exponential conditions
https://papers.nips.cc/paper_files/paper/2021/hash/3e33b970f21d2fc65096871ea0d2c6e4-Abstract.html
Andreas Maurer, Massimiliano Pontil
https://papers.nips.cc/paper_files/paper/2021/hash/3e33b970f21d2fc65096871ea0d2c6e4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12204-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3e33b970f21d2fc65096871ea0d2c6e4-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=WJPAqX5M-2
https://papers.nips.cc/paper_files/paper/2021/file/3e33b970f21d2fc65096871ea0d2c6e4-Supplemental.pdf
We prove analogues of the popular bounded difference inequality (also called McDiarmid's inequality) for functions of independent random variables under sub-Gaussian and sub-exponential conditions. Applied to vector-valued concentration and the method of Rademacher complexities, these inequalities allow an easy extension of uniform convergence results for PCA and linear regression to the case of potentially unbounded input and output variables.
null
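For reference, the classical bounded difference (McDiarmid) inequality that the paper generalizes reads as follows: if $X_1, \dots, X_n$ are independent and $f$ changes by at most $c_i$ when only its $i$-th argument is changed, then for every $t > 0$,

\[
\Pr\bigl( f(X_1,\dots,X_n) - \mathbb{E}\, f(X_1,\dots,X_n) \ge t \bigr) \;\le\; \exp\!\left( \frac{-2t^2}{\sum_{i=1}^{n} c_i^2} \right).
\]

The paper's contribution, per the abstract, is to replace the boundedness requirement on these coordinate-wise differences with sub-Gaussian and sub-exponential conditions.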
Variance-Aware Off-Policy Evaluation with Linear Function Approximation
https://papers.nips.cc/paper_files/paper/2021/hash/3e6260b81898beacda3d16db379ed329-Abstract.html
Yifei Min, Tianhao Wang, Dongruo Zhou, Quanquan Gu
https://papers.nips.cc/paper_files/paper/2021/hash/3e6260b81898beacda3d16db379ed329-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12205-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3e6260b81898beacda3d16db379ed329-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=FKmcLhJ4mn
https://papers.nips.cc/paper_files/paper/2021/file/3e6260b81898beacda3d16db379ed329-Supplemental.pdf
We study the off-policy evaluation (OPE) problem in reinforcement learning with linear function approximation, which aims to estimate the value function of a target policy based on the offline data collected by a behavior policy. We propose to incorporate the variance information of the value function to improve the sample efficiency of OPE. More specifically, for time-inhomogeneous episodic linear Markov decision processes (MDPs), we propose an algorithm, \texttt{VA-OPE}, which uses the estimated variance of the value function to reweight the Bellman residual in Fitted Q-Iteration. We show that our algorithm achieves a tighter error bound than the best-known result. We also provide a fine-grained characterization of the distribution shift between the behavior policy and the target policy. Extensive numerical experiments corroborate our theory.
null
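The step the abstract emphasizes is reweighting the Bellman residual in Fitted Q-Iteration by the estimated variance of the value function, i.e., a weighted least-squares regression with weights inversely proportional to the variance estimates. The sketch below shows that generic reweighting only; the variance estimator itself, which is the paper's contribution, is stubbed out with random values.

```python
# Variance-aware reweighting sketch: one Fitted Q-Iteration regression step with
# weights 1 / sigma^2, where sigma^2 stands in for the paper's variance estimate.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 8
phi = rng.normal(size=(n, d))                 # features phi(s, a) of offline transitions
bellman_targets = phi @ rng.normal(size=d) + rng.normal(size=n)  # r + V_hat(s')
sigma2 = rng.uniform(0.5, 4.0, size=n)        # stub for the estimated value variance

# Weighted ridge regression: argmin_w sum_i (phi_i^T w - target_i)^2 / sigma2_i + reg.
W = np.diag(1.0 / sigma2)
reg = 1e-3 * np.eye(d)
w = np.linalg.solve(phi.T @ W @ phi + reg, phi.T @ W @ bellman_targets)
print(w.shape)                                # weights of the next Q estimate
```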
A Provably Efficient Sample Collection Strategy for Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/3e98410c45ea98addec555019bbae8eb-Abstract.html
Jean Tarbouriech, Matteo Pirotta, Michal Valko, Alessandro Lazaric
https://papers.nips.cc/paper_files/paper/2021/hash/3e98410c45ea98addec555019bbae8eb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12206-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3e98410c45ea98addec555019bbae8eb-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=AvVDR8R-kQX
https://papers.nips.cc/paper_files/paper/2021/file/3e98410c45ea98addec555019bbae8eb-Supplemental.pdf
One of the challenges in online reinforcement learning (RL) is that the agent needs to trade off the exploration of the environment and the exploitation of the samples to optimize its behavior. Whether we optimize for regret, sample complexity, state-space coverage or model estimation, we need to strike a different exploration-exploitation trade-off. In this paper, we propose to tackle the exploration-exploitation problem following a decoupled approach composed of: 1) An "objective-specific" algorithm that (adaptively) prescribes how many samples to collect at which states, as if it has access to a generative model (i.e., a simulator of the environment); 2) An "objective-agnostic" sample collection exploration strategy responsible for generating the prescribed samples as fast as possible. Building on recent methods for exploration in the stochastic shortest path problem, we first provide an algorithm that, given as input the number of samples $b(s,a)$ needed in each state-action pair, requires $\widetilde{O}(B D + D^{3/2} S^2 A)$ time steps to collect the $B=\sum_{s,a} b(s,a)$ desired samples, in any unknown communicating MDP with $S$ states, $A$ actions and diameter $D$. Then we show how this general-purpose exploration algorithm can be paired with "objective-specific" strategies that prescribe the sample requirements to tackle a variety of settings — e.g., model estimation, sparse reward discovery, goal-free cost-free exploration in communicating MDPs — for which we obtain improved or novel sample complexity guarantees.
null
Improved Regret Bounds for Tracking Experts with Memory
https://papers.nips.cc/paper_files/paper/2021/hash/3e9f7c16bd1cdea78f8e2eea72dfdfbe-Abstract.html
James Robinson, Mark Herbster
https://papers.nips.cc/paper_files/paper/2021/hash/3e9f7c16bd1cdea78f8e2eea72dfdfbe-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12207-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3e9f7c16bd1cdea78f8e2eea72dfdfbe-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=V08W9xadLPV
null
We address the problem of sequential prediction with expert advice in a non-stationary environment with long-term memory guarantees in the sense of Bousquet and Warmuth [4]. We give a linear-time algorithm that improves on the best known regret bound [27]. This algorithm incorporates a relative entropy projection step. This projection is advantageous over previous weight-sharing approaches in settings where weight updates may come with implicit costs, as in, for example, portfolio optimization. We give an algorithm to compute this projection step in linear time, which may be of independent interest.
null
Robustness of Graph Neural Networks at Scale
https://papers.nips.cc/paper_files/paper/2021/hash/3ea2db50e62ceefceaf70a9d9a56a6f4-Abstract.html
Simon Geisler, Tobias Schmidt, Hakan Şirin, Daniel Zügner, Aleksandar Bojchevski, Stephan Günnemann
https://papers.nips.cc/paper_files/paper/2021/hash/3ea2db50e62ceefceaf70a9d9a56a6f4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12208-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3ea2db50e62ceefceaf70a9d9a56a6f4-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=v_4XcXsAZUn
https://papers.nips.cc/paper_files/paper/2021/file/3ea2db50e62ceefceaf70a9d9a56a6f4-Supplemental.pdf
Graph Neural Networks (GNNs) are increasingly important given their popularity and the diversity of applications. Yet, existing studies of their vulnerability to adversarial attacks rely on relatively small graphs. We address this gap and study how to attack and defend GNNs at scale. We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation despite optimizing over a number of parameters which is quadratic in the number of nodes. We show that common surrogate losses are not well-suited for global attacks on GNNs. Our alternatives can double the attack strength. Moreover, to improve GNNs' reliability we design a robust aggregation function, Soft Median, resulting in an effective defense at all scales. We evaluate our attacks and defense with standard GNNs on graphs more than 100 times larger compared to previous work. We even scale one order of magnitude further by extending our techniques to a scalable GNN.
null
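As a rough illustration of the robust aggregation idea (not the paper's exact Soft Median definition, whose weighting and temperature scaling should be taken from the paper), the sketch below averages neighbour messages with softmax weights that decay with distance from the dimension-wise median, so far-away, potentially adversarial messages receive little weight.

```python
# Robust "soft median"-style aggregation sketch: neighbour messages are averaged
# with softmax weights that shrink with distance to the dimension-wise median.
# The exact weighting and temperature of the paper's Soft Median may differ.
import torch

def soft_median_aggregate(msgs: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """msgs: (num_neighbors, dim) messages arriving at one node."""
    dim_median = msgs.median(dim=0).values                 # dimension-wise median
    dist = torch.norm(msgs - dim_median, dim=1)            # distance of each message
    w = torch.softmax(-dist / (temperature * msgs.shape[1] ** 0.5), dim=0)
    return (w.unsqueeze(1) * msgs).sum(dim=0)              # weighted mean

# Toy usage: 9 benign messages plus one large adversarial outlier.
torch.manual_seed(0)
msgs = torch.randn(9, 16)
msgs = torch.cat([msgs, 100.0 * torch.ones(1, 16)], dim=0)
print(soft_median_aggregate(msgs).abs().max().item())      # outlier barely contributes
```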
Random Noise Defense Against Query-Based Black-Box Attacks
https://papers.nips.cc/paper_files/paper/2021/hash/3eb414bf1c2a66a09c185d60553417b8-Abstract.html
Zeyu Qin, Yanbo Fan, Hongyuan Zha, Baoyuan Wu
https://papers.nips.cc/paper_files/paper/2021/hash/3eb414bf1c2a66a09c185d60553417b8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12209-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3eb414bf1c2a66a09c185d60553417b8-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=ZPSD4xZc6j8
https://papers.nips.cc/paper_files/paper/2021/file/3eb414bf1c2a66a09c185d60553417b8-Supplemental.pdf
Query-based black-box attacks have raised serious threats to machine learning models in many real applications. In this work, we study a lightweight defense method, dubbed Random Noise Defense (RND), which adds proper Gaussian noise to each query. We conduct a theoretical analysis of the effectiveness of RND against query-based black-box attacks and the corresponding adaptive attacks. Our theoretical results reveal that the defense performance of RND is determined by the magnitude ratio between the noise induced by RND and the noise added by the attackers for gradient estimation or local search. A large magnitude ratio leads to stronger defense performance of RND, and it is also critical for mitigating adaptive attacks. Based on our analysis, we further propose to combine RND with a plausible Gaussian augmentation fine-tuning (RND-GF). This enables RND to add larger noise to each query while maintaining clean accuracy, obtaining a better trade-off between clean accuracy and defense performance. Additionally, RND can be flexibly combined with existing defense methods, such as adversarial training (AT), to further boost adversarial robustness. Extensive experiments on CIFAR-10 and ImageNet verify our theoretical findings and the effectiveness of RND and RND-GF.
null
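Operationally the defense is a thin wrapper: add fresh Gaussian noise to every incoming query before the forward pass, so that the attacker's finite-difference estimates are dominated by the defender's noise when the magnitude ratio is large. A minimal sketch follows; the noise scale and the stand-in classifier are placeholders.

```python
# Random Noise Defense sketch: perturb every query with fresh Gaussian noise.
# The noise scale sigma is a placeholder; it controls the magnitude ratio
# discussed in the abstract.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier

def defended_forward(x: torch.Tensor, sigma: float = 0.02) -> torch.Tensor:
    noise = sigma * torch.randn_like(x)       # fresh noise for every query
    return model(x + noise)

query = torch.rand(1, 3, 32, 32)              # an attacker's black-box query
logits1 = defended_forward(query)
logits2 = defended_forward(query)             # same query, different randomness
print((logits1 - logits2).abs().max().item() > 0)
```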
SADGA: Structure-Aware Dual Graph Aggregation Network for Text-to-SQL
https://papers.nips.cc/paper_files/paper/2021/hash/3f1656d9668dffcf8119e3ecff873558-Abstract.html
Ruichu Cai, Jinjie Yuan, Boyan Xu, Zhifeng Hao
https://papers.nips.cc/paper_files/paper/2021/hash/3f1656d9668dffcf8119e3ecff873558-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12210-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3f1656d9668dffcf8119e3ecff873558-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=HEzEy_V7LF3
null
The Text-to-SQL task, which aims to translate natural language questions into SQL queries, has drawn much attention recently. One of the most challenging problems of Text-to-SQL is how to generalize the trained model to unseen database schemas, also known as the cross-domain Text-to-SQL task. The key lies in the generalizability of (i) the encoding method to model the question and the database schema and (ii) the question-schema linking method to learn the mapping between words in the question and tables/columns in the database schema. Focusing on the above two key issues, we propose a \emph{Structure-Aware Dual Graph Aggregation Network} (SADGA) for cross-domain Text-to-SQL. In SADGA, we adopt the graph structure to provide a unified encoding model for both the natural language question and database schema. Based on the proposed unified modeling, we further devise a structure-aware aggregation method to learn the mapping between the question-graph and schema-graph. The structure-aware aggregation method is featured with \emph{Global Graph Linking}, \emph{Local Graph Linking} and \emph{Dual-Graph Aggregation Mechanism}. We not only study the performance of our proposal empirically but also achieve 3rd place on the challenging Text-to-SQL benchmark Spider at the time of writing.
null
Near-Optimal Offline Reinforcement Learning via Double Variance Reduction
https://papers.nips.cc/paper_files/paper/2021/hash/3f24bb08a5741e4197af64e1f93a5029-Abstract.html
Ming Yin, Yu Bai, Yu-Xiang Wang
https://papers.nips.cc/paper_files/paper/2021/hash/3f24bb08a5741e4197af64e1f93a5029-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12211-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3f24bb08a5741e4197af64e1f93a5029-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=MlFcgL2AP4d
https://papers.nips.cc/paper_files/paper/2021/file/3f24bb08a5741e4197af64e1f93a5029-Supplemental.pdf
We consider the problem of offline reinforcement learning (RL) --- a well-motivated setting of RL that aims at policy optimization using only historical data. Despite its wide applicability, theoretical understandings of offline RL, such as its optimal sample complexity, remain largely open even in basic settings such as \emph{tabular} Markov Decision Processes (MDPs). In this paper, we propose \emph{Off-Policy Double Variance Reduction} (OPDVR), a new variance reduction-based algorithm for offline RL. Our main result shows that OPDVR provably identifies an $\epsilon$-optimal policy with $\widetilde{O}(H^2/d_m\epsilon^2)$ episodes of offline data in the finite-horizon \emph{stationary transition} setting, where $H$ is the horizon length and $d_m$ is the minimal marginal state-action distribution induced by the behavior policy. This improves over the best-known upper bound by a factor of $H$. Moreover, we establish an information-theoretic lower bound of $\Omega(H^2/d_m\epsilon^2)$ which certifies that OPDVR is optimal up to logarithmic factors. Lastly, we show that OPDVR also achieves rate-optimal sample complexity under alternative settings such as the finite-horizon MDPs with non-stationary transitions and the infinite horizon MDPs with discounted rewards.
null
Joint Modeling of Visual Objects and Relations for Scene Graph Generation
https://papers.nips.cc/paper_files/paper/2021/hash/3f67fd97162d20e6fe27748b5b372509-Abstract.html
Minghao Xu, Meng Qu, Bingbing Ni, Jian Tang
https://papers.nips.cc/paper_files/paper/2021/hash/3f67fd97162d20e6fe27748b5b372509-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12212-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3f67fd97162d20e6fe27748b5b372509-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=CVmU4xzMIFo
https://papers.nips.cc/paper_files/paper/2021/file/3f67fd97162d20e6fe27748b5b372509-Supplemental.pdf
An in-depth scene understanding usually requires recognizing all the objects and their relations in an image, encoded as a scene graph. Most existing approaches for scene graph generation first recognize each object independently and then predict their relations independently. Though these approaches are very efficient, they ignore the dependency between different objects as well as between their relations. In this paper, we propose a principled approach to jointly predict the entire scene graph by fully capturing the dependency between different objects and between their relations. Specifically, we establish a unified conditional random field (CRF) to model the joint distribution of all the objects and their relations in a scene graph. We carefully design the potential functions to enable relational reasoning among different objects according to knowledge graph embedding methods. We further propose an efficient and effective algorithm for inference based on mean-field variational inference, in which we first provide a warm initialization by independently predicting the objects and their relations according to the current model, followed by a few iterations of relational reasoning. Experimental results on both the relationship retrieval and zero-shot relationship retrieval tasks demonstrate the efficiency and efficacy of our proposed approach.
null
Going Beyond Linear Transformers with Recurrent Fast Weight Programmers
https://papers.nips.cc/paper_files/paper/2021/hash/3f9e3767ef3b10a0de4c256d7ef9805d-Abstract.html
Kazuki Irie, Imanol Schlag, Róbert Csordás, Jürgen Schmidhuber
https://papers.nips.cc/paper_files/paper/2021/hash/3f9e3767ef3b10a0de4c256d7ef9805d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12213-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3f9e3767ef3b10a0de4c256d7ef9805d-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=ot2ORiBqTa1
https://papers.nips.cc/paper_files/paper/2021/file/3f9e3767ef3b10a0de4c256d7ef9805d-Supplemental.pdf
Transformers with linearised attention ("linear Transformers") have demonstrated the practical scalability and effectiveness of outer product-based Fast Weight Programmers (FWPs) from the '90s. However, the original FWP formulation is more general than that of linear Transformers: a slow neural network (NN) continually reprograms the weights of a fast NN with arbitrary architecture. In existing linear Transformers, both NNs are feedforward and consist of a single layer. Here we explore new variations by adding recurrence to the slow and fast nets. We evaluate our novel recurrent FWPs (RFWPs) on two synthetic algorithmic tasks (code execution and sequential ListOps), on Wikitext-103 language modelling, and on the Atari 2600 2D game environment. Our models exhibit properties of both Transformers and RNNs. In the reinforcement learning setting, we report large improvements over LSTM in several Atari games. Our code is public.
null
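The outer-product fast weight update that linear Transformers instantiate, and on top of which the paper adds recurrence, writes key/value outer products into a fast weight matrix and reads it out with a query. A minimal sketch of that basic (non-recurrent) update only; the dimensions and the random slow-network outputs are placeholders.

```python
# Basic outer-product fast weight programming (the linear-Transformer special
# case; the paper's recurrent variants add recurrence on top of this).
import torch

torch.manual_seed(0)
d_key, d_value, seq_len = 8, 8, 5
W_fast = torch.zeros(d_value, d_key)               # fast weights, reprogrammed each step

keys = torch.randn(seq_len, d_key)                 # produced by the slow network
values = torch.randn(seq_len, d_value)
queries = torch.randn(seq_len, d_key)

outputs = []
for k, v, q in zip(keys, values, queries):
    W_fast = W_fast + torch.outer(v, k)            # write: rank-1 outer-product update
    outputs.append(W_fast @ q)                     # read: query the fast weights
print(torch.stack(outputs).shape)                  # (5, 8)
```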
Reinforced Few-Shot Acquisition Function Learning for Bayesian Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/3fab5890d8113d0b5a4178201dc842ad-Abstract.html
Bing-Jing Hsieh, Ping-Chun Hsieh, Xi Liu
https://papers.nips.cc/paper_files/paper/2021/hash/3fab5890d8113d0b5a4178201dc842ad-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12214-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3fab5890d8113d0b5a4178201dc842ad-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=MSr3u_FCRW
https://papers.nips.cc/paper_files/paper/2021/file/3fab5890d8113d0b5a4178201dc842ad-Supplemental.pdf
Bayesian optimization (BO) conventionally relies on handcrafted acquisition functions (AFs) to sequentially determine the sample points. However, it has been widely observed in practice that the best-performing AF in terms of regret can vary significantly under different types of black-box functions. It has remained a challenge to design one AF that can attain the best performance over a wide variety of black-box functions. This paper aims to address this challenge through the perspective of reinforced few-shot AF learning (FSAF). Specifically, we first connect the notion of AFs with Q-functions and view a deep Q-network (DQN) as a surrogate differentiable AF. While combining DQN with an existing few-shot learning method is a natural idea, we find that such a direct combination does not perform well due to severe overfitting, which is particularly critical in BO due to the need for a versatile sampling policy. To address this, we present a Bayesian variant of DQN with the following three features: (i) It learns a distribution of Q-networks as AFs based on the Kullback-Leibler regularization framework. This inherently provides the uncertainty required in sampling for BO and mitigates overfitting. (ii) For the prior of the Bayesian DQN, we propose to use a demo policy induced by an off-the-shelf AF for better training stability. (iii) On the meta-level, we leverage the meta-loss of Bayesian model-agnostic meta-learning, which serves as a natural companion to the proposed FSAF. Moreover, with the proper design of the Q-networks, FSAF is general-purpose in that it is agnostic to the dimension and the cardinality of the input domain. Through extensive experiments, we demonstrate that FSAF achieves comparable or better regret than state-of-the-art benchmarks on a wide variety of synthetic and real-world test functions.
null
Forster Decomposition and Learning Halfspaces with Noise
https://papers.nips.cc/paper_files/paper/2021/hash/3ff4cea152080fd7d692a8286a587a67-Abstract.html
Ilias Diakonikolas, Daniel Kane, Christos Tzamos
https://papers.nips.cc/paper_files/paper/2021/hash/3ff4cea152080fd7d692a8286a587a67-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12215-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3ff4cea152080fd7d692a8286a587a67-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=l4DQWgjbZg
https://papers.nips.cc/paper_files/paper/2021/file/3ff4cea152080fd7d692a8286a587a67-Supplemental.pdf
A Forster transform is an operation that turns a multivariate distribution into one with good anti-concentration properties. While a Forster transform does not always exist, we show that any distribution can be efficiently decomposed as a disjoint mixture of few distributions for which a Forster transform exists and can be computed efficiently. As the main application of this result, we obtain the first polynomial-time algorithm for distribution-independent PAC learning of halfspaces in the Massart noise model with strongly polynomial sample complexity, i.e., independent of the bit complexity of the examples. Previous algorithms for this learning problem incurred sample complexity scaling polynomially with the bit complexity, even though such a dependence is not information-theoretically necessary.
null
Cortico-cerebellar networks as decoupling neural interfaces
https://papers.nips.cc/paper_files/paper/2021/hash/3ffebb08d23c609875d7177ee769a3e9-Abstract.html
Joseph Pemberton, Ellen Boven, Richard Apps, Rui Ponte Costa
https://papers.nips.cc/paper_files/paper/2021/hash/3ffebb08d23c609875d7177ee769a3e9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12216-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3ffebb08d23c609875d7177ee769a3e9-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=G7kpTaNrfe
https://papers.nips.cc/paper_files/paper/2021/file/3ffebb08d23c609875d7177ee769a3e9-Supplemental.pdf
The brain solves the credit assignment problem remarkably well. For credit to be assigned across neural networks they must, in principle, wait for specific neural computations to finish. How the brain deals with this inherent locking problem has remained unclear. Deep learning methods suffer from similar locking constraints, both in the forward and the feedback phase. Recently, decoupled neural interfaces (DNIs) were introduced as a solution to the forward and feedback locking problems in deep networks. Here we propose that a specialised brain region, the cerebellum, helps the cerebral cortex solve similar locking problems, akin to DNIs. To demonstrate the potential of this framework we introduce a systems-level model in which a recurrent cortical network receives online temporal feedback predictions from a cerebellar module. We test this cortico-cerebellar recurrent neural network (ccRNN) model on a number of sensorimotor (line and digit drawing) and cognitive tasks (pattern recognition and caption generation) that have been shown to be cerebellar-dependent. In all tasks, we observe that ccRNNs facilitate learning while reducing ataxia-like behaviours, consistent with classical experimental observations. Moreover, our model also explains recent behavioural and neuronal observations while making several testable predictions across multiple levels. Overall, our work offers a novel perspective on the cerebellum as a brain-wide decoupling machine for efficient credit assignment and opens a new avenue between deep learning and neuroscience.
null
To The Point: Correspondence-driven monocular 3D category reconstruction
https://papers.nips.cc/paper_files/paper/2021/hash/40008b9a5380fcacce3976bf7c08af5b-Abstract.html
Filippos Kokkinos, Iasonas Kokkinos
https://papers.nips.cc/paper_files/paper/2021/hash/40008b9a5380fcacce3976bf7c08af5b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12217-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/40008b9a5380fcacce3976bf7c08af5b-Paper.pdf
https://openreview.net/forum?id=AWMU04iXQ08
https://papers.nips.cc/paper_files/paper/2021/file/40008b9a5380fcacce3976bf7c08af5b-Supplemental.pdf
We present To The Point (TTP), a method for reconstructing 3D objects from a single image using 2D to 3D correspondences, given only foreground masks, a category-specific template, and optionally sparse keypoints for supervision. We recover a 3D shape from a 2D image by first regressing the 2D positions corresponding to the 3D template vertices and then jointly estimating a rigid camera transform and non-rigid template deformation that optimally explain the 2D positions through the 3D shape projection. By relying on correspondences we use a simple per-sample optimization problem to replace CNN-based regression of camera pose and non-rigid deformation and thereby obtain substantially more accurate 3D reconstructions. We treat this optimization as a differentiable layer and train the whole system in an end-to-end manner using geometry-driven losses. We report systematic quantitative improvements on multiple categories and provide qualitative results comprising diverse shape, pose, and texture prediction examples.
null
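To illustrate the core step of explaining predicted 2D positions through a projection of the 3D template, here is a deliberately simplified numpy sketch that fits a closed-form affine camera by least squares. The paper instead solves for a rigid camera transform plus a non-rigid template deformation inside a differentiable optimization layer, so this is only a stand-in for that per-sample fit; all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic category template: N 3D vertices.
V = rng.normal(size=(100, 3))

# Pretend these are the regressed 2D positions of the template vertices
# (generated here from a random ground-truth affine camera plus noise).
A_true = rng.normal(size=(2, 3))
t_true = rng.normal(size=(1, 2))
p = V @ A_true.T + t_true + 0.01 * rng.normal(size=(100, 2))

# Least-squares fit of an affine camera [A | t] explaining p ~ V A^T + t.
design = np.hstack([V, np.ones((V.shape[0], 1))])   # (N, 4)
M, *_ = np.linalg.lstsq(design, p, rcond=None)      # (4, 2)
A_hat, t_hat = M[:3].T, M[3]

reproj = V @ A_hat.T + t_hat
print("mean reprojection error:", np.abs(reproj - p).mean())
```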
Proper Value Equivalence
https://papers.nips.cc/paper_files/paper/2021/hash/400e5e6a7ce0c754f281525fae75a873-Abstract.html
Christopher Grimm, Andre Barreto, Greg Farquhar, David Silver, Satinder Singh
https://papers.nips.cc/paper_files/paper/2021/hash/400e5e6a7ce0c754f281525fae75a873-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12218-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/400e5e6a7ce0c754f281525fae75a873-Paper.pdf
https://openreview.net/forum?id=aXbuWbta0V8
https://papers.nips.cc/paper_files/paper/2021/file/400e5e6a7ce0c754f281525fae75a873-Supplemental.pdf
One of the main challenges in model-based reinforcement learning (RL) is to decide which aspects of the environment should be modeled. The value-equivalence (VE) principle proposes a simple answer to this question: a model should capture the aspects of the environment that are relevant for value-based planning. Technically, VE distinguishes models based on a set of policies and a set of functions: a model is said to be VE to the environment if the Bellman operators it induces for the policies yield the correct result when applied to the functions. As the number of policies and functions increases, the set of VE models shrinks, eventually collapsing to a single point corresponding to a perfect model. A fundamental question underlying the VE principle is thus how to select the smallest sets of policies and functions that are sufficient for planning. In this paper we take an important step towards answering this question. We start by generalizing the concept of VE to order-$k$ counterparts defined with respect to $k$ applications of the Bellman operator. This leads to a family of VE classes that increase in size as $k \rightarrow \infty$. In the limit, all functions become value functions, and we have a special instantiation of VE which we call proper VE or simply PVE. Unlike VE, the PVE class may contain multiple models even in the limit when all value functions are used. Crucially, all these models are sufficient for planning, meaning that they will yield an optimal policy despite the fact that they may ignore many aspects of the environment. We construct a loss function for learning PVE models and argue that popular algorithms such as MuZero can be understood as minimizing an upper bound for this loss. We leverage this connection to propose a modification to MuZero and show that it can lead to improved performance in practice.
null
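A schematic rendering of the order-$k$ definition described above, in our own notation; the paper's precise statement may include additional conditions.

```latex
% Order-k value equivalence (notation ours). Let \mathcal{T}_\pi and
% \tilde{\mathcal{T}}_\pi denote the Bellman operators of policy \pi under the
% environment and under a model \tilde{m}, respectively. Given a set of
% policies \Pi and a set of functions \mathcal{V}, the model \tilde{m} is
% order-k VE if
\[
  \tilde{\mathcal{T}}_\pi^{\,k}\, v \;=\; \mathcal{T}_\pi^{\,k}\, v
  \qquad \text{for all } \pi \in \Pi,\; v \in \mathcal{V}.
\]
% Since \mathcal{T}_\pi^{\,k} v \to v^\pi as k \to \infty for any bounded v,
% the limiting (proper VE) requirement is that the model reproduce the value
% function of every policy in \Pi: \tilde{v}^{\pi} = v^{\pi}.
```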
Challenges and Opportunities in High Dimensional Variational Inference
https://papers.nips.cc/paper_files/paper/2021/hash/404dcc91b2aeaa7caa47487d1483e48a-Abstract.html
Akash Kumar Dhaka, Alejandro Catalina, Manushi Welandawe, Michael R. Andersen, Jonathan Huggins, Aki Vehtari
https://papers.nips.cc/paper_files/paper/2021/hash/404dcc91b2aeaa7caa47487d1483e48a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12219-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/404dcc91b2aeaa7caa47487d1483e48a-Paper.pdf
https://openreview.net/forum?id=_A4-JP8d_f
https://papers.nips.cc/paper_files/paper/2021/file/404dcc91b2aeaa7caa47487d1483e48a-Supplemental.pdf
Current black-box variational inference (BBVI) methods require the user to make numerous design choices – such as the selection of variational objective and approximating family – yet there is little principled guidance on how to do so. We develop a conceptual framework and set of experimental tools to understand the effects of these choices, which we leverage to propose best practices for maximizing posterior approximation accuracy. Our approach is based on studying the pre-asymptotic tail behavior of the density ratios between the joint distribution and the variational approximation, then exploiting insights and tools from the importance sampling literature. Our framework and supporting experiments help to distinguish between the behavior of BBVI methods for approximating low-dimensional versus moderate-to-high-dimensional posteriors. In the latter case, we show that mass-covering variational objectives are difficult to optimize and do not improve accuracy, but flexible variational families can improve accuracy and the effectiveness of importance sampling – at the cost of additional optimization challenges. Therefore, for moderate-to-high-dimensional posteriors we recommend using the (mode-seeking) exclusive KL divergence since it is the easiest to optimize, and improving the variational family or using model parameter transformations to make the posterior and optimal variational approximation more similar. On the other hand, in low-dimensional settings, we show that heavy-tailed variational families and mass-covering divergences are effective and can increase the chances that the approximation can be improved by importance sampling.
null
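The tail-behavior diagnostic underlying this analysis can be illustrated with a small sketch: draw from a Gaussian approximation q that is slightly narrower than a Gaussian target p, form the importance ratios p/q, and estimate the tail heaviness of the ratios with a simple Hill estimator (the importance-sampling literature, and Pareto-smoothed importance sampling in particular, uses a more robust generalized-Pareto fit). The example, scales, and thresholds below are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def hill_tail_estimate(ratios, k=100):
    """Crude Hill estimate of the tail-shape parameter of the ratios
    (larger means heavier tails; PSIS-style guidance flags values above ~0.7)."""
    x = np.sort(ratios)[::-1]
    return np.mean(np.log(x[:k] / x[k]))

def log_ratio_gaussian(d, q_scale=0.9, n=20_000):
    # Target p = N(0, I_d); approximation q = N(0, q_scale^2 I_d), slightly too
    # narrow. Common normalizing constants cancel in the log ratio.
    theta = q_scale * rng.normal(size=(n, d))
    log_p = -0.5 * np.sum(theta**2, axis=1)
    log_q = -0.5 * np.sum((theta / q_scale)**2, axis=1) - d * np.log(q_scale)
    return log_p - log_q

for d in [1, 10, 100, 500]:
    lr = log_ratio_gaussian(d)
    ratios = np.exp(lr - lr.max())        # rescale for numerical stability
    print(f"d={d:4d}  Hill tail estimate: {hill_tail_estimate(ratios):.2f}")

# The estimated tails grow heavier with dimension even though q mismatches p by
# the same amount per coordinate, illustrating the pre-asymptotic behavior
# studied in the paper.
```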
On the Expressivity of Markov Reward
https://papers.nips.cc/paper_files/paper/2021/hash/4079016d940210b4ae9ae7d41c4a2065-Abstract.html
David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael Littman, Doina Precup, Satinder Singh
https://papers.nips.cc/paper_files/paper/2021/hash/4079016d940210b4ae9ae7d41c4a2065-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12220-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4079016d940210b4ae9ae7d41c4a2065-Paper.pdf
https://openreview.net/forum?id=9DlCh34E1bN
https://papers.nips.cc/paper_files/paper/2021/file/4079016d940210b4ae9ae7d41c4a2065-Supplemental.pdf
Reward is the driving force for reinforcement-learning agents. This paper is dedicated to understanding the expressivity of reward as a way to capture tasks that we would want an agent to perform. We frame this study around three new abstract notions of “task” that might be desirable: (1) a set of acceptable behaviors, (2) a partial ordering over behaviors, or (3) a partial ordering over trajectories. Our main results prove that while reward can express many of these tasks, there exist instances of each task type that no Markov reward function can capture. We then provide a set of polynomial-time algorithms that construct a Markov reward function that allows an agent to optimize tasks of each of these three types, and correctly determine when no such reward function exists. We conclude with an empirical study that corroborates and illustrates our theoretical findings.
null
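For the "set of acceptable behaviors" notion of task, the existence question has a natural linear-programming flavor, which is the spirit of the constructions the abstract mentions. The formulation below is our schematic rendering, not necessarily the paper's exact algorithm.

```latex
% Schematic linear feasibility view (notation ours). In a finite MDP with
% start state s_0, discount \gamma, and reward vector r \in \mathbb{R}^{|S||A|},
% the value of a policy \pi is linear in r:
\[
  v^{\pi}_{r} \;=\; (I - \gamma P_{\pi})^{-1} M_{\pi}\, r ,
\]
% where P_\pi is the state-transition matrix under \pi and M_\pi averages the
% reward over \pi's action choices in each state. For a task given as a set of
% acceptable policies \Pi_{good} versus unacceptable ones \Pi_{bad}, one
% natural realization requirement is
\[
  v^{\pi_g}_{r}(s_0) \;\ge\; v^{\pi_b}_{r}(s_0) + \epsilon
  \qquad \text{for some margin } \epsilon > 0 \text{ and all }
  \pi_g \in \Pi_{good},\ \pi_b \in \Pi_{bad},
\]
% a system of constraints that is linear in r, so its feasibility can be
% checked with linear programming.
```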
One More Step Towards Reality: Cooperative Bandits with Imperfect Communication
https://papers.nips.cc/paper_files/paper/2021/hash/40cb228987243c91b2dd0b7c9c4a0856-Abstract.html
Udari Madhushani, Abhimanyu Dubey, Naomi Leonard, Alex Pentland
https://papers.nips.cc/paper_files/paper/2021/hash/40cb228987243c91b2dd0b7c9c4a0856-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12221-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/40cb228987243c91b2dd0b7c9c4a0856-Paper.pdf
https://openreview.net/forum?id=PmJVah9D8B
https://papers.nips.cc/paper_files/paper/2021/file/40cb228987243c91b2dd0b7c9c4a0856-Supplemental.pdf
The cooperative bandit problem is becoming increasingly relevant due to its applications in large-scale decision-making. However, most research on this problem focuses exclusively on the setting with perfect communication, whereas in most real-world distributed settings, communication is often over stochastic networks, with arbitrary corruptions and delays. In this paper, we study cooperative bandit learning under three typical real-world communication scenarios, namely, (a) message-passing over stochastic time-varying networks, (b) instantaneous reward-sharing over a network with random delays, and (c) message-passing with adversarially corrupted rewards, including Byzantine communication. For each of these environments, we propose decentralized algorithms that achieve competitive performance, along with near-optimal guarantees on the incurred group regret. Furthermore, in the setting with perfect communication, we present an improved delayed-update algorithm that outperforms the existing state-of-the-art on various network topologies. Finally, we present tight network-dependent minimax lower bounds on the group regret. Our proposed algorithms are straightforward to implement and obtain competitive empirical performance.
null
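As a concrete picture of setting (a), here is a generic decentralized UCB sketch in which agents share their observations with neighbors over a freshly sampled random graph each round. It is meant only to illustrate message-passing over a stochastic time-varying network and is not one of the paper's algorithms; the arm means, edge probability, and exploration constant are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_arms, horizon = 10, 5, 2000
mu = rng.uniform(0.2, 0.8, size=n_arms)      # Bernoulli arm means
edge_prob = 0.3                              # density of the stochastic network

counts = np.zeros((n_agents, n_arms))        # per-agent observation counts
sums = np.zeros((n_agents, n_arms))          # per-agent reward sums
regret = 0.0

for t in range(1, horizon + 1):
    # Each agent plays the arm with the highest UCB index on its local statistics.
    avg = sums / np.maximum(counts, 1)
    bonus = np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1))
    index = np.where(counts > 0, avg + bonus, np.inf)   # force initial exploration
    arms = np.argmax(index, axis=1)

    rewards = (rng.random(n_agents) < mu[arms]).astype(float)
    regret += np.sum(mu.max() - mu[arms])

    # A fresh random undirected communication graph every round.
    G = rng.random((n_agents, n_agents)) < edge_prob
    G = np.triu(G, 1)
    G = G | G.T

    # Message passing: each agent folds in its own observation and those of
    # its current neighbors.
    for i in range(n_agents):
        neighbors = np.flatnonzero(G[i])
        for j in np.concatenate(([i], neighbors)):
            counts[j, arms[i]] += 1
            sums[j, arms[i]] += rewards[i]

print("group regret:", regret)
```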
Multi-Agent Reinforcement Learning in Stochastic Networked Systems
https://papers.nips.cc/paper_files/paper/2021/hash/412604be30f701b1b1e3124c252065e6-Abstract.html
Yiheng Lin, Guannan Qu, Longbo Huang, Adam Wierman
https://papers.nips.cc/paper_files/paper/2021/hash/412604be30f701b1b1e3124c252065e6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12222-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/412604be30f701b1b1e3124c252065e6-Paper.pdf
https://openreview.net/forum?id=NHX9w7ex3fW
https://papers.nips.cc/paper_files/paper/2021/file/412604be30f701b1b1e3124c252065e6-Supplemental.pdf
We study multi-agent reinforcement learning (MARL) in a stochastic network of agents. The objective is to find localized policies that maximize the (discounted) global reward. In general, scalability is a challenge in this setting because the size of the global state/action space can be exponential in the number of agents. Scalable algorithms are only known in cases where dependencies are static, fixed and local, e.g., between neighbors in a fixed, time-invariant underlying graph. In this work, we propose a Scalable Actor Critic framework that applies in settings where the dependencies can be non-local and stochastic, and provide a finite-time error bound that shows how the convergence rate depends on the speed of information spread in the network. Additionally, as a byproduct of our analysis, we obtain novel finite-time convergence results for a general stochastic approximation scheme and for temporal difference learning with state aggregation, which apply beyond the setting of MARL in networked systems.
null
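In our notation, the localized-policy objective described above can be written as follows; this is a schematic statement, and the paper's exact dependence structure and assumptions are richer.

```latex
% Networked MARL with localized policies (notation ours). Agents
% i = 1, \dots, n sit on a graph; agent i observes a neighbourhood N_i of the
% global state s_t and selects a_{i,t} \sim \pi_i(\cdot \mid s_{N_i, t}).
\[
  \max_{\pi_1, \dots, \pi_n} \;
  \mathbb{E}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} \,
    \frac{1}{n} \sum_{i=1}^{n} r_i\!\big(s_t, a_t\big) \right]
\]
% The joint state/action space is exponential in n, which is why scalable
% methods restrict attention to such localized policies rather than learning a
% centralized policy over the full joint space.
```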
Neural Scene Flow Prior
https://papers.nips.cc/paper_files/paper/2021/hash/41263b9a46f6f8f22668476661614478-Abstract.html
Xueqian Li, Jhony Kaesemodel Pontes, Simon Lucey
https://papers.nips.cc/paper_files/paper/2021/hash/41263b9a46f6f8f22668476661614478-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12223-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/41263b9a46f6f8f22668476661614478-Paper.pdf
https://openreview.net/forum?id=4c1EiEvivpx
https://papers.nips.cc/paper_files/paper/2021/file/41263b9a46f6f8f22668476661614478-Supplemental.zip
Before the deep learning revolution, many perception algorithms were based on runtime optimization in conjunction with a strong prior/regularization penalty. A prime example of this in computer vision is optical and scene flow. Supervised learning has largely displaced the need for explicit regularization. Instead, such methods rely on large amounts of labeled data to capture prior statistics, which are not always readily available for many problems. Although optimization is employed to learn the neural network, at runtime, the weights of this network are frozen. As a result, these learning solutions are domain-specific and do not generalize well to other statistically different scenarios. This paper revisits the scene flow problem with an approach that relies predominantly on runtime optimization and strong regularization. A central innovation here is the inclusion of a neural scene flow prior, which utilizes the architecture of neural networks as a new type of implicit regularizer. Unlike learning-based scene flow methods, optimization occurs at runtime, and our approach needs no offline datasets---making it ideal for deployment in new environments such as autonomous driving. We show that an architecture based exclusively on multilayer perceptrons (MLPs) can be used as a scene flow prior. Our method attains competitive---if not better---results on scene flow benchmarks. Also, our neural prior's implicit and continuous scene flow representation allows us to estimate dense long-term correspondences across a sequence of point clouds. The dense motion information is represented by scene flow fields where points can be propagated through time by integrating motion vectors. We demonstrate such a capability by accumulating a sequence of lidar point clouds.
null
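The runtime-optimization idea is easy to sketch: fit a small coordinate MLP that maps each point of the first cloud to a flow vector, minimizing a Chamfer-style distance between the warped first cloud and the second cloud, with no training data beyond the pair itself. The snippet below is a toy PyTorch illustration with synthetic data; the paper's architecture, loss, and regularization details may differ.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two synthetic point clouds: the second is the first shifted by a known flow.
pc1 = torch.randn(256, 3)
pc2 = pc1 + torch.tensor([0.5, 0.0, 0.0])

# Coordinate MLP acting as an implicit prior over the flow field.
flow_net = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)
opt = torch.optim.Adam(flow_net.parameters(), lr=1e-3)

for it in range(1000):
    flow = flow_net(pc1)                  # per-point 3D flow
    warped = pc1 + flow
    d = torch.cdist(warped, pc2)          # pairwise distances (N, M)
    chamfer = d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
    opt.zero_grad()
    chamfer.backward()
    opt.step()

print("final Chamfer loss:", chamfer.item())
print("mean recovered flow:", flow_net(pc1).mean(dim=0).tolist())
```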