id | paper_text | review
---|---|---|
iclr_2018_SktLlGbRZ | Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains. | This paper essentially uses CycleGANs for Domain Adaptation. My biggest concern is that it doesn't adequately compare to similar papers that perform adaptation at the pixel level (e.g., Shrivastava et al., 'Learning from Simulated and Unsupervised Images through Adversarial Training', and Bousmalis et al., 'Unsupervised Pixel-level Domain Adaptation with GANs', two similar papers published in CVPR 2017 (the first was even a best paper) and available on arXiv since December 2016, before CycleGANs). I believe the authors should have at least done an ablation study to see if the cycle-consistency loss truly makes a difference on top of these works; that would be the biggest selling point of this paper. The experimental section had many experiments, which is great. However, I think for semantic segmentation it would be very interesting to see whether using the adapted synthetic GTA5 samples would improve the SOTA on Cityscapes. It wouldn't be unsupervised domain adaptation, but it would be very impactful. Finally, I'm not sure the oracle (train on target) mIoU in Table 2 is SOTA, and I believe the proposed model's performance is really far from SOTA.
Pros:
* CycleGANs for domain adaptation! Great idea!
* I really like the work on semantic segmentation, I think this is a very important direction
Cons:
* I don't think Domain Separation Networks is a pixel-level transformation; that's a feature-level transformation. You probably mean to use Bousmalis et al. 2017. Also, Shrivastava et al. is missing from the image-level papers.
* The authors claim that Bousmalis et al., Liu & Tuzel, and Shrivastava et al. have only been shown to work for small image sizes. There is recent work by Bousmalis et al. (Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping) that shows these methods working well (without cycle-consistency) for settings similar to semantic segmentation at a relatively high resolution. It was also mentioned that these methods do not necessarily preserve content, whereas PixelDA explicitly accounts for that with a task loss (identical to the semantic loss used in this submission).
* The authors talk about the content similarity loss on the foreground in Bousmalis et al. 2017, but they could compare to this method without the content similarity, or with a different content similarity tailored to the semantic segmentation task, which would be trivial.
* The math seems wrong in (4) and (6). (4) should probably have a minus instead of a plus. (6) has an argmin of a min; it is not clear what is being optimized here. In fact, I'm not sure whether, e.g., you use the gradients of f_T for training the generators.
* The authors mention that the PixelDA approach cross-validates with some labeled data. Although I agree that this is not an ideal validation, I'm not sure whether or not it's equivalent to the authors' validation setting, as they don't describe what that is.
* The authors present the semantic loss as novel; however, this is the task loss proposed by the PixelDA paper.
* I didn't understand what pixel-only and feat-only mean in Tables 2, 3, and 4. I couldn't find an explanation in the captions or in the text.
=====
Post rebuttal comments:
Thanks for adding content in response to my comments. The cycle ablation is still a sticky point for me, and I'm still not sure whether cycle-consistency really offers an improvement. Although I applaud your offering examples of failures when there's no cycle-consistency, these are circumstantial and not quantitative. The reader is still left wondering why and when the cycle-consistency loss is appropriate. As this is the main novelty, I believe it should be at the forefront of the experimental evaluation. |
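For readers following the ablation debate in the review above, the two losses in question are easy to state. Below is a minimal NumPy sketch of a cycle-consistency term and a semantic-consistency (task) term of the kind CyCADA and PixelDA use; the flat 64-d "images", the fixed random linear "generators", and the linear "source classifier" are toy placeholders, not the actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: images are flat 64-d vectors, the "generators" are fixed
# random linear maps, and the "source classifier" is a fixed linear classifier.
W_st = rng.normal(scale=0.1, size=(64, 64))   # source -> target-style generator
W_ts = rng.normal(scale=0.1, size=(64, 64))   # target -> source-style generator
W_f = rng.normal(scale=0.1, size=(64, 10))    # pretrained source classifier (10 classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

x_s = rng.normal(size=(8, 64))                # a batch of source "images"

# Cycle-consistency: source -> target -> source should reconstruct the input.
x_cycled = (x_s @ W_st) @ W_ts
loss_cycle = np.abs(x_cycled - x_s).mean()    # L1 reconstruction penalty

# Semantic consistency (the "task loss" the review relates to PixelDA):
# the pretrained source classifier should predict the same labels before and
# after translation; its argmax predictions act as pseudo-labels.
pseudo_labels = softmax(x_s @ W_f).argmax(axis=1)
p_translated = softmax((x_s @ W_st) @ W_f)
loss_semantic = -np.log(p_translated[np.arange(len(x_s)), pseudo_labels] + 1e-8).mean()

print(f"cycle loss {loss_cycle:.3f}, semantic loss {loss_semantic:.3f}")
```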
iclr_2018_S1tWRJ-R- | The incorporation of prior knowledge into learning is essential in achieving good performance based on small noisy samples. Such knowledge is often incorporated through the availability of related data arising from domains and tasks similar to the one of current interest. Ideally one would like to allow both the data for the current task and for previous related tasks to self-organize the learning system in such a way that commonalities and differences between the tasks are learned in a data-driven fashion. We develop a framework for learning multiple tasks simultaneously, based on sharing features that are common to all tasks, achieved through the use of a modular deep feedforward neural network consisting of shared branches, dealing with the common features of all tasks, and private branches, learning the specific unique aspects of each task. Once an appropriate weight sharing architecture has been established, learning takes place through standard algorithms for feedforward networks, e.g., stochastic gradient descent and its variations. The method deals with meta-learning (such as domain adaptation, transfer and multi-task learning) in a unified fashion, and can easily deal with data arising from different types of sources. Numerical experiments demonstrate the effectiveness of learning in domain adaptation and transfer learning setups, and provide evidence for the flexible and task-oriented representations arising in the network. | The work proposed a generic framework for end-to-end transfer learning / domain adaptation with deep neural networks. The idea is to learn joint autoencoders, containing a private branch with task/domain-specific weights and a common branch consisting of shared weights used across tasks/domains together with task/domain-specific weights. Supervised losses are added after the encoders to utilize labeled samples from different tasks. Experiments on the MNIST and CIFAR datasets showed improvements over baseline models. Its performance is comparable to, or worse than, several existing deep domain adaptation works on the MNIST, USPS and SVHN digit datasets.
The structure of the paper is good, and it is easy to read. The idea is fairly straightforward. It reads as an extension of "frustratingly easy domain adaptation" to DNNs (please cite this work). Unlike most existing work on DNNs for multi-task/transfer learning, which focuses on weight sharing in the bottom layers, this work emphasizes the importance of weight sharing in deeper layers. The overall novelty of the work is limited, though.
The authors brought up two strategies for learning the shared and private weights at the end of Section 3.2. However, no follow-up comparison between the two is provided. It seems like most of the results come from end-to-end learning.
Experimental results:
Section 4.1: Figure 2 is flawed. The colors do not correspond to the sub-tasks. For example, there are digits 1 and 4 in magenta, which is supposed to be the shared branch for digits 5-9, and vice versa.
After reducing the capacity of the JAE to be the same as the baseline, most of the improvement is gone. It is not clear how much of the improvement would remain if the baseline model got to see all the samples instead of just those from each sub-task.
Section 4.2.1: The authors demonstrate the influence of shared-layer depth in Table 2. While it does seem to matter for tasks with dissimilar inputs, have the authors compared having a completely shared branch, or sharing more than just a single layer?
The authors suggested in the Section 4.1 CIFAR experiment that the proposed method provides a larger performance boost when the two tasks are more similar, which seems to contradict the results shown in Figure 3, where performance is worse when transferring between USPS and MNIST, which are more similar tasks, than between SVHN and MNIST. Do the authors have any insight? |
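To make the shared/private branch structure discussed in this review concrete, here is a minimal PyTorch-style sketch of a two-task joint autoencoder with one deep shared layer; the layer sizes, the single shared layer, and the supervised heads are illustrative choices rather than the architecture from the paper.

```python
import torch
import torch.nn as nn

class JointAutoencoder(nn.Module):
    """Two tasks, each with a private branch plus a common branch whose
    deepest layer is shared across tasks (illustrative sizes)."""
    def __init__(self, d_in=784, d_hid=256, d_code=32):
        super().__init__()
        self.shared = nn.Linear(d_hid, d_code)          # weights tied across tasks
        self.private_enc = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU()) for _ in range(2)])
        self.common_enc = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU()) for _ in range(2)])
        self.private_code = nn.ModuleList(
            [nn.Linear(d_hid, d_code) for _ in range(2)])
        self.decoders = nn.ModuleList(
            [nn.Linear(2 * d_code, d_in) for _ in range(2)])
        self.classifiers = nn.ModuleList(
            [nn.Linear(2 * d_code, 10) for _ in range(2)])  # supervised heads

    def forward(self, x, task):
        z_private = self.private_code[task](self.private_enc[task](x))
        z_common = self.shared(self.common_enc[task](x))    # shared deep layer
        z = torch.cat([z_private, z_common], dim=-1)
        return self.decoders[task](z), self.classifiers[task](z)

model = JointAutoencoder()
x = torch.randn(4, 784)
recon, logits = model(x, task=0)
loss = nn.functional.mse_loss(recon, x) + nn.functional.cross_entropy(
    logits, torch.randint(0, 10, (4,)))
loss.backward()
```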
iclr_2018_HkOhuyA6- | Graph classification is currently dominated by graph kernels, which, while powerful, suffer some significant limitations. Convolutional Neural Networks (CNNs) offer a very appealing alternative. However, processing graphs with CNNs is not trivial. To address this challenge, many sophisticated extensions of CNNs have recently been proposed. In this paper, we reverse the problem: rather than proposing yet another graph CNN model, we introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs. Despite its simplicity, our method proves very competitive to state-of-the-art graph kernels and graph CNNs, and outperforms them by a wide margin on some datasets. It is also preferable to graph kernels in terms of time complexity. Code and data are publicly available. | The authors propose to use 2D CNNs for graph classification by transforming graphs to an image-like representation derived from their node embeddings. The approach uses node2vec to obtain a node embedding, which is then compacted using PCA and turned into a stack of discretized histograms. Essentially the authors propose an approach to use a node embedding to achieve graph classification.
In my opinion there are several weak points:
1) The approach to obtaining the image-like representation is not well motivated. Other approaches for aggregating the set of node embeddings for graph classification are known; see, e.g., "Representation Learning on Graphs: Methods and Applications", William L. Hamilton, Rex Ying, Jure Leskovec, 2017. The authors should compare to such methods as a baseline.
2) The experimental evaluation is not convincing:
- the selection of competing methods is not sufficient. I would suggest adding an approach similar to Duvenaud et al., "Convolutional networks on graphs for learning molecular fingerprints", NIPS 2015.
- the accuracy results are taken from other publications and it is not clear that this is an authoritative comparison; the accuracy results published for state-of-the-art graph kernels are superior to those obtained by the proposed method, cf., e.g., Kriege et al., "On Valid Optimal Assignment Kernels and Applications to Graph Classification", NIPS 2016.
- it would be interesting to apply the approach to graphs with discrete and continuous labels.
3) The authors argue that their method is preferable to graph kernels in terms of time complexity. This argument is questionable. Most graph kernels compute explicit feature maps and can therefore be used with efficient linear SVMs (unfortunately most publications use a kernelized SVM). Moreover, the running time of computing the node embedding must be emphasized: on page 2 the authors claim a "constant time complexity at the instance level", which is not true when also considering the running time of node2vec. Moreover, I do not think that node2vec is more efficient than, e.g., the Weisfeiler-Lehman refinement used by graph kernels.
In summary: Since the technical contribution is limited, the approach needs to be justified by an authoritative experimental comparison. This is not yet achieved with the results presented in the submitted paper. Therefore, it should not be accepted in its current form. |
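As a concrete illustration of the pipeline summarized in the review above (node embedding -> PCA -> stacked discretized 2D histograms -> image for a vanilla CNN), here is a small NumPy sketch; random vectors stand in for the node2vec embeddings, and the number of channels, resolution, and extent are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, d_embed = 50, 128
embeddings = rng.normal(size=(n_nodes, d_embed))   # placeholder for node2vec output

# PCA down to a few principal components (here 4 -> two 2-D channel pairs).
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:4].T                        # (n_nodes, 4)

# Each consecutive pair of components becomes one channel: a discretized
# 2-D histogram of the node "point cloud", i.e., an image-like array.
resolution, extent = 28, 3.0
channels = []
for c in range(0, 4, 2):
    hist, _, _ = np.histogram2d(coords[:, c], coords[:, c + 1],
                                bins=resolution,
                                range=[[-extent, extent], [-extent, extent]])
    channels.append(hist)
image = np.stack(channels)                          # shape (2, 28, 28), CNN-ready
print(image.shape)
```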
iclr_2018_HJcSzz-CZ | Meta-Learning for Semi-Supervised Few-Shot Classification
In few-shot classification, we are interested in learning algorithms that train a classifier from only a handful of labeled examples. Recent progress in few-shot classification has featured meta-learning, in which a parameterized model for a learning algorithm is defined and trained on episodes representing different classification problems, each with a small labeled training set and its corresponding test set. In this work, we advance this few-shot classification paradigm towards a scenario where unlabeled examples are also available within each episode. We consider two situations: one where all unlabeled examples are assumed to belong to the same set of classes as the labeled examples of the episode, as well as the more challenging situation where examples from other distractor classes are also provided. To address this paradigm, we propose novel extensions of Prototypical Networks (Snell et al., 2017) that are augmented with the ability to use unlabeled examples when producing prototypes. These models are trained in an end-to-end way on episodes, to learn to leverage the unlabeled examples successfully. We evaluate these methods on versions of the Omniglot and miniImageNet benchmarks, adapted to this new framework augmented with unlabeled examples. We also propose a new split of ImageNet, consisting of a large set of classes, with a hierarchical structure. Our experiments confirm that our Prototypical Networks can learn to improve their predictions due to unlabeled examples, much like a semi-supervised algorithm would. | This paper proposes to extend the Prototypical Network (NIPS17) to the semi-supervised setting with three possible
strategies. One consists in self-labeling the unlabeled data and then updating the prototypes on the basis of the
assigned pseudo-labels. Another is able to deal with the case of distractors, i.e., unlabeled samples not belonging to
any of the known categories. In practice this second solution is analogous to the first, but a general 'distractor' class
is added. Finally the third technique learns to weight the samples according to their distance to the original prototypes.
These strategies are evaluated in a particular semi-supervised transfer learning setting: the models are first trained
on some source categories with few labeled data and large unlabeled samples (this setting is derived by subselecting
a large dataset multiple times), then they are used on a final target task, again with few labeled data and large
unlabeled samples, but belonging to a different set of categories.
+ the paper is well written, well organized and overall easy to read
+/- this work builds largely on previous work. It introduces only a small technical novelty inspired by soft k-means
clustering, which nevertheless seems to be effective.
+ different aspects of the problem are analyzed by varying the number of distractors and varying the level of
semantic relatedness between the source and the target sets
Few notes and questions
1) Why does the table report error rates for the Omniglot experiment? It would be better to present accuracy, as for the other tables/experiments.
2) I would suggest using source and target instead of train and test -- these last two terms are confusing because
there is actually also a training phase at test time.
3) although the paper indicates that there are various other few-shot methods that could be applicable here,
no other approach is considered besides the prototypical network and its variants. A further external reference
could be used to give an idea of what the experimental result would be, at least in the supervised case. |
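Since the review only names the soft k-means idea, a short NumPy sketch of the basic refinement step may help: prototypes start as the labeled class means and are then updated using soft assignments of the unlabeled points of the episode. Shapes and data are toy placeholders, and the distractor-cluster and learned-weight variants are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, d = 5, 64

# Embeddings produced by the (already trained) embedding network.
support = rng.normal(size=(n_classes, 5, d))        # 5 labeled shots per class
unlabeled = rng.normal(size=(30, d))                # unlabeled pool of the episode

prototypes = support.mean(axis=1)                   # (n_classes, d) class means

# Soft assignment of every unlabeled point to every prototype.
dists = ((unlabeled[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (30, 5)
logits = -dists
z = np.exp(logits - logits.max(axis=1, keepdims=True))
z = z / z.sum(axis=1, keepdims=True)                # (30, n_classes) soft weights

# Refined prototypes: labeled means pulled toward softly assigned unlabeled points.
refined = (support.sum(axis=1) + z.T @ unlabeled) / (support.shape[1] + z.sum(0)[:, None])
print(refined.shape)   # (5, 64)
```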
iclr_2020_SJlNnhVYDr | We propose a model to tackle classification tasks in the presence of very little training data. To this aim, we introduce a novel matching mechanism to focus on elements of the input, by using vectors that represent semantically meaningful concepts for the task at hand. By leveraging highlighted portions of the training data, a simple, yet effective, error boosting technique guides the learning process. In practice, it increases the error associated to relevant parts of the input by a given factor. Results on text classification tasks confirm the benefits of the proposed approach in both balanced and unbalanced cases, thus being of practical use when labeling new examples is expensive. In addition, the model is interpretable, as it allows for human inspection of the learned weights. | After responses:
I read the authors' response and decided to stick to my original score, mostly because:
1 - I understand that interpretability is hard to define. I also agree with the authors' response. However, this is still not reflected in the paper in any way. I expect a discussion of what the relevant definition used in the paper is and how it fits that definition. Currently, it is very confusing to the reader.
2 - I understand the authors' response that few-shot learning is a different empirical setting. However, the authors also agree that the settings are somewhat related. I really do not see any gain from NOT discussing the few-shot learning literature. In the end, a reader is interested in this work if they have limited data. Moreover, other ways to address the limited-data issue should be discussed.
-----
The manuscript proposes a few-shot classification setting in which the training set includes only a few examples. The main contribution is using prototype embeddings and representing each word as cosine distances to these prototype embeddings. Moreover, the final classification is a weighted summation of the per-token decisions followed by a softmax. Per-token classifiers are obtained with an MLP using the cosine distances as features. When relevance labels are available, they are used in training to boost gradients.
PRO(s)
The proposed method is interesting and addresses an important problem. There are many few-shot scenarios, and finding good models for them is impactful.
The results are promising and the proposed method is more interpretable than the existing NLP classifiers. I disagree with the claim that the model is interpretable. However, I appreciate the effort to interpret the model.
CON(s)
The model is not interpretable because 1) it starts with embeddings, and they are not interpretable, and 2) the model is full of non-linearities and the decision boundaries are not possible to find. In other words, it is not possible to answer "what would make this model predict some other class".
The authors should discuss the existing few-shot learning mechanisms. Especially, "Prototypical Networks for Few-shot Learning" is very relevant. I also think it can be included as a baseline with very minimal modifications.
The writing is not complete. The authors do not even discuss how the prototypes are learned. I am assuming it is done using full gradient-descent over all parameters. However, this is not clearly discussed. Implementation details should be discussed more clearly.
SUMMARY
I believe the manuscript is definitely interesting and has potential. At the same time, it is not ready for publication. It needs a thorough review of few-shot learning. The authors should also discuss whether any of the few-shot learning methods can be included in the experimental study. If the answer is yes, they should be. If the answer is no, it should be explained clearly.
Although my recommendation is weak-reject, I am happy to bump it up if these issues are addressed. |
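For concreteness, the matching mechanism summarized at the top of this review (cosine distances from each token to a set of concept/prototype vectors, a per-token MLP, then a weighted sum over tokens followed by a softmax) can be sketched as below; all shapes, the single hidden layer, and the uniform token weights are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d_embed, n_protos, n_classes = 12, 50, 8, 3

tokens = rng.normal(size=(n_tokens, d_embed))       # word embeddings of one example
protos = rng.normal(size=(n_protos, d_embed))       # concept/prototype vectors

def cosine(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

features = cosine(tokens, protos)                   # (n_tokens, n_protos)

# Per-token MLP on the cosine features -> per-token class scores.
W1, b1 = rng.normal(size=(n_protos, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, n_classes)), np.zeros(n_classes)
token_scores = np.maximum(features @ W1 + b1, 0) @ W2 + b2   # (n_tokens, n_classes)

# Weighted sum of per-token decisions, then softmax (uniform weights here;
# in the model the token weights themselves are learned).
weights = np.full(n_tokens, 1.0 / n_tokens)
logits = weights @ token_scores
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)

# Error boosting: tokens highlighted by the annotator would have their
# contribution to the loss/gradient scaled up by a constant factor.
```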
iclr_2020_rklz16Vtvr | Recent years have witnessed growing interests in designing efficient neural networks and neural architecture search (NAS). Although remarkable efficiency and accuracy have been achieved, existing expert designed and NAS models neglect the fact that input instances are of varying complexity and thus different amounts of computation are required. Inference with a fixed model that processes all instances through the same transformations would incur computational resources unnecessarily. Customizing the model capacity in an instance-aware manner is required to alleviate such a problem. In this paper, we propose a novel Instance-aware Selective Branching Network (ISBNet) to support efficient instance-level inference by selectively bypassing transformation branches of insignificant importance weight. These weights are dynamically determined by a lightweight hypernetwork SelectionNet and recalibrated by gumbel-softmax for sparse branch selection. Extensive experiments show that ISBNet achieves extremely efficient inference in terms of parameter size and FLOPs comparing to existing networks. For example, ISBNet takes only 8.70% parameters and 31.01% FLOPs of the efficient network MobileNetV2 with comparable accuracy on CIFAR-10. | This paper proposes an instance-aware dynamic network, ISBNet, for efficient image classification. The network consists of layers of cell structures with multiple branches within. During inference, the network uses SelectionNet to compute a "calibration weight matrix", which essentially controls which branches within the cell should be used to compute the output. Similar to previous works in NAS, this paper uses Gumbel Softmax to compute the branch selection probability. The network is trained to minimize a loss function that considers both the accuracy and the inference cost. Training of the network is divided into two stages: first, a high temperature is used to ensure all the branches are sufficiently optimized, and in the second stage, the authors anneal the temperature. During inference, branches are selected if their probability computed by Gumbel Softmax is larger than a certain threshold.
Overall, the idea of the paper is clearly presented. The methods used in this paper are similar to previous works on neural architecture search (NAS), but this paper can be seen as a meaningful extension to NAS.
My main concern for this paper is the experiment section.
1) The paper claims that "ISBNet takes only 8.70% parameters and 31.01% FLOPs of the efficient network MobileNetV2 with comparable accuracy on CIFAR-10". Mainstream efficient networks, such as MobileNet and ShuffleNet, are not designed for the CIFAR-10 dataset, but for ImageNet. Their downsampling strategy is very aggressive, which leads to relatively poor accuracy on CIFAR-10 with 32x32 images. Therefore, it is not fair to compare against MobileNetV2 on CIFAR-10 and claim superiority over it. Compared with other networks customized for the CIFAR-10 dataset, such as NASNet-A, DARTS, and so on, the error rate of ISBNet is significantly worse (>1% higher than DARTS, or >33% relative increase in error rate).
2) In Table 2, the paper compares ISBNet's performance on the ImageNet dataset with other baselines. However, the baseline models are not up to date. For example, MobileNetV2 achieves a 28% error rate with 300M FLOPs, and FBNet achieves a 27% error rate with 249M FLOPs. These results, however, are not cited in this paper. In particular, MobileNetV2 is compared against on the CIFAR-10 dataset, but not the ImageNet dataset.
3) ISBNet is not the first instance-aware dynamic network. Previous works such as SkipNet and soft conditional computation [1] explored similar ideas. However, in this paper, there is no comparison with previous dynamic networks.
Overall, I would expect a much stronger experiment section for the paper to be published.
[1] https://arxiv.org/abs/1904.04971 |
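The branch-selection mechanism under review is straightforward to illustrate. The sketch below draws Gumbel-softmax weights over the branches of one cell, keeps only the branches above a threshold at inference time, and mixes their outputs; the toy linear "branches", temperature, and threshold are placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_branches, d = 4, 16

# Toy branch transformations (stand-ins for convs of different kernel sizes).
branch_weights = [rng.normal(scale=0.3, size=(d, d)) for _ in range(n_branches)]

def cell(x, importance_logits, tau=1.0, threshold=0.2, training=True):
    # Gumbel-softmax recalibration of the SelectionNet importance logits.
    gumbel = -np.log(-np.log(rng.uniform(size=n_branches)))
    y = (importance_logits + gumbel) / tau
    w = np.exp(y - y.max())
    w /= w.sum()

    if training:
        active = np.arange(n_branches)          # keep everything, weighted
    else:
        active = np.flatnonzero(w > threshold)  # bypass insignificant branches

    out = np.zeros_like(x)
    for b in active:
        out += w[b] * (x @ branch_weights[b])
    return out, active

x = rng.normal(size=(1, d))
logits = np.array([2.0, 0.1, -1.0, 0.3])        # pretend SelectionNet output
_, active = cell(x, logits, tau=0.5, training=False)
print("branches executed at inference:", active)
```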
iclr_2020_BkxDxJHFDr | Graph convolutional networks (GCNs) are powerful tools for graph-structured data. However, they have been recently shown to be vulnerable to topological attacks. To enhance adversarial robustness, we go beyond spectral graph theory to robust graph theory. By challenging the classical graph Laplacian, we propose a new convolution operator that is provably robust in the spectral domain and is incorporated in the GCN architecture to improve expressivity and interpretability. By extending the original graph to a sequence of graphs, we also propose a robust training paradigm that encourages transferability across graphs that span a range of spatial and spectral characteristics. The proposed approaches are demonstrated in extensive experiments to simultaneously improve performance in both benign and adversarial situations. | This paper proposes a new architecture for graph convolutional networks based on a graph powering operation, which generates a new graph based on the shortest-path distance between pairs of nodes. Its main motivation is to overcome the dominance of the first eigenvector in the existing GCN architectures based on the graph Laplacian operator. The theoretical evidence for the robustness is provided based on the signal-to-noise ratio (SNR) of the simplified stochastic block model (SBM). Two versions of the algorithm are proposed, namely the robust graph convolutional network (r-GCN) and the variable power network (VPN). First, r-GCN is based on augmenting the graphs with the graph powering operation. Next, VPN replaces the adjacency matrix of the graph convolutional operator by the newly proposed variable power operator. An additional sparsification scheme is proposed since the graph powering operation densifies the original graph.
Overall, I like how the paper addresses the weakness of the existing graph Laplacian operators (dominance of the first eigenvector) and proposes a new method with theoretical justifications. Experiments were conducted thoroughly and the results look great on the presented datasets. However, I also have concerns about the paper that I feel need to be resolved.
Most importantly, the concept of "robustness" in GCNs seems to be inconsistent throughout the paper. Namely, the meanings of robustness in the neural network literature (adversarial robustness) and the SBM literature (spectral robustness) are different. This point is crucial since the paper uses the spectral robustness for justification of the method, yet experiments are done on adversarial attacks. More specifically, adversarial training methods for neural networks, e.g., against the adversarial attack methods [1] considered in the paper, typically make the loss function (or output of the network) more stable under small perturbations of the inputs. On the other hand, the robustness for SBM models, e.g., Theorem 3 in the paper, cares more about the preservation of the original input characteristics. For illustration, an invertible neural network [2] is not necessarily robust to adversarial attacks (the first meaning of robustness) but preserves all the input characteristics (the second meaning of robustness).
I also wish the paper had run experiments on more datasets, since there exists some evidence of the unreliability of evaluations on citation networks [3]. However, I do not think this point is critical, since the paper did a great job of evaluating robustness in various respects and the results all show consistent improvement.
Minor questions and suggestions:
- The acronyms are slightly confusing to understand at first sight, since they first appear at the equations without any information on what the letters stand for. Something like a "variable power network (VPN)" would make the paper more pleasant to read.
- In the r-GCN framework, there might be an edge case where the powered graph is almost identical to another graph. Would there be any justification for avoiding this?
- In the r-GCN framework, the terminology distillation is slightly confusing. Was this choice of word used for making a connection to the knowledge distillation [4]? How is the knowledge distilled between graphs?
References
[1] Bojchevski and Günnemann. Adversarial attacks on node embeddings via graph poisoning. ICML 2019
[2] Jacobsen et al., i-RevNet: Deep Invertible Networks. ICLR 2018
[3] Shchur et al., Pitfalls of Graph Neural Network Evaluation, Arxiv 2018
[4] Hinton et al., Distilling the Knowledge in a Neural Network, Arxiv 2015 |
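The graph powering operation at the center of the reviewed method is simple to state: the r-th power of a graph connects every pair of nodes whose shortest-path distance is at most r. A small NumPy/BFS sketch on a toy path graph is given below (the paper's distance-dependent variable power operator and the sparsification step are not reproduced here).

```python
import numpy as np
from collections import deque

def graph_power(adj, r):
    """Return the adjacency matrix of the r-th graph power:
    (i, j) are connected iff their shortest-path distance is in [1, r]."""
    n = adj.shape[0]
    powered = np.zeros_like(adj)
    for src in range(n):
        dist = np.full(n, -1)
        dist[src] = 0
        queue = deque([src])
        while queue:                      # plain BFS from src
            u = queue.popleft()
            for v in np.flatnonzero(adj[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        powered[src] = (dist > 0) & (dist <= r)
    return powered

# Toy path graph 0-1-2-3-4: squaring it adds the "two-hop" edges.
A = np.zeros((5, 5), dtype=int)
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1
print(graph_power(A, 2))
```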
iclr_2020_SygcSlHFvS | Many methods have been developed to represent knowledge graph data, which implicitly exploit low-rank latent structure in the data to encode known information and enable unknown facts to be inferred. To predict whether a relationship holds between entities, their embeddings are typically compared in the latent space following a relation-specific mapping. Whilst link prediction has steadily improved, the latent structure, and hence why such models capture semantic information, remains unexplained. We build on recent theoretical interpretation of word embeddings to derive an explicit structure for the representations of relations between entities. From this we explain properties and justify the relative performance of leading knowledge graph representation methods for identifiable relation types, including their often overlooked ability to make independent predictions. | This paper proposes to provide a detailed study on the explainability of link prediction (LP) models by utilizing a recent interpretation of word embeddings. More specifically, the authors categorize the relations in KG into three categories (R, S, C) using the correlation between the semantic relation between two words and the geometric relationship between their embeddings. The authors utilize this categorization to provide a better understanding of LP models’ performance through several experiments.
This paper reads well and the results appear sound. I personally believe that work on better understanding KGC models is an essential direction that is mostly ignored in this field of study. Moreover, the provided experiments support the authors' intuition and arguments.
As for the drawbacks, I find the technical novelty of the paper somewhat limited, as the proposed method consists of a mostly straightforward combination of existing methods. Further, I believe this work needs more experimental results and decisive conclusions identifying future directions for achieving better performance on link prediction. My concerns are as follows:
• I am wondering about the reason for omitting Max/Avg path for two of the relations in WN18RR? Further, the average of 15.2 for the shortest path between entities with “also_see” relation appears to be a mistake?
• Was there any specific reason in choosing WN18RR and NELL-995 KGs for the experiments?
• It would be interesting to see the length of paths between entities for train and test data separately.
• I suggest providing a statistical significance evaluation for each experiment to better validate the conclusions.
• I find the provided study in section 4.2 very similar to the triple classification task in KGs. Can you elaborate on the differences and potential advantages of your setting?
• I am wondering how you identified the “Other True” triples for WN18RR KG in section 4.2 experiments?
Overall, although I find the proposed study very interesting and enlightening, I believe that the paper needs more experimental results and decisive conclusions. |
iclr_2020_HJxMYANtPH | This paper presents a phenomenon in neural networks that we refer to as local elasticity. Roughly speaking, a classifier is said to be locally elastic if its prediction at a feature vector x' is not significantly perturbed, after the classifier is updated via stochastic gradient descent at a (labeled) feature vector x that is dissimilar to x' in a certain sense. This phenomenon is shown to persist for neural networks with nonlinear activation functions through extensive simulations on real-life and synthetic datasets, whereas this is not observed in linear classifiers. In addition, we offer a geometric interpretation of local elasticity using the neural tangent kernel (Jacot et al., 2018). Building on top of local elasticity, we obtain pairwise similarity measures between feature vectors, which can be used for clustering in conjunction with K-means. The effectiveness of the clustering algorithm on the MNIST and CIFAR-10 datasets in turn corroborates the hypothesis of local elasticity of neural networks on real-life data. Finally, we discuss some implications of local elasticity to shed light on several intriguing aspects of deep neural networks. | [Summary]
This paper proposes and studies "local elasticity", a quantitative measure of the ability of neural networks to change their prediction only locally (around x) after a stochastic gradient step at x. The paper verifies experimentally that nonlinear neural nets are locally elastic by showing that an elasticity-motivated similarity score can perform clustering well.
[Pros]
The notion of local elasticity is interesting and has the potential to open up lots of further directions. The way I understand it is that it relates to memorization (as the authors have indeed discussed) --- I think "local elasticity" can be viewed as some sort of "local memorization ability", in that the NN is able to change its prediction only in a small neighborhood of x---without affecting predictions at other remote x's---after one SGD step on x.
[Cons]
It feels like the experimental results in the present paper are rather indirect evidence for local elasticity -- namely, that the similarity score coming from the elasticity works well for a downstream clustering task. Could there be some more direct evidence of local elasticity? How would the elasticity compare across different architectures? In their present form, the experiments at most say that the similarity score makes sense, and do not yet provide a fully quantitative characterization of local elasticity.
I’m also a little bit concerned about the fairness of the clustering experiment, in that the elasticity-motivated clustering algorithm utilizes an auxiliary dataset whereas simple baselines such as K-means and PCA K-means are not able to use that. Is there a way of modifying the K-means and PCA K-means so that they can also use this auxiliary dataset while still giving a sensible algorithm for the primary 2-class clustering task? |
iclr_2020_HJgepaNtDS | We undertake the problem of representation learning for time-series by considering a Group Transform approach. This framework allows us to, first, generalize classical time-frequency transformations such as the Wavelet Transform, and second, to enable the learnability of the representation. While the creation of the Wavelet Transform filter-bank relies on affine transformations of a mother filter, our approach allows for non-linear transformations. This is achieved by sampling a subset of invertible maps on R. The subset considered contains strictly increasing and continuous functions. The transformations induced by such maps enable us to span a larger class of signal representations, from wavelet to chirplet-like filters. We propose a parameterization of such a non-linear map such that its sampling can be optimized for a specific loss and signal. The Learnable Group Transform can thus be cast into a Deep Neural Network. The experiments on diverse time-series datasets demonstrate the expressivity of this framework, which competes with state-of-the-art performances. | A typical Wavelet Transform is built through the dilation and/or rotation of a mother wavelet, which can be viewed as a group action on a mother wavelet. This work proposes to extend this construction beyond the Euclidean group, and to learn, in a supervised way, the operators that will be applied to a mother wavelet. Competitive numerical performances are obtained.
Overall, I think that re-thinking the way a Wavelet Transform is designed is an interesting direction of research, but I think some of the theoretical tools developed in this paper are not necessary to achieve this purpose. In particular, the group/representation properties seem not to be used, and the authors could simply consider a specific subset of invertible mappings on $\mathbb{R}^2$ which would be applied to the mother wavelet and lead to a Wavelet Transform. In other words, the overall formulation could be simplified.
Pros:
- In general, the numerical experiments are at the level of the state of the art.
- Parametrizing a subset of the group of increasing functions and applying it to signal processing tools is novel, to my knowledge.
Cons:
- Some very relevant elements in the literature review are missing. Learning or using an underlying group of symmetry that will be combined with a deep neural network is not novel, cf: https://arxiv.org/abs/1601.04920 ; in particular for reducing the number of parameters, filters or samples: https://arxiv.org/abs/1809.06367 ; http://proceedings.mlr.press/v48/cohenc16.pdf ; https://arxiv.org/abs/1809.10200 ; https://arxiv.org/abs/1605.06644 - I think the authors should discuss at least one or two of those papers, if not all.
- The performance on the bird detection task is good but the improvement compared to other work is not clear, given that some supervision in the first layer is incorporated.
- Subsections 2.2 and 2.3 are difficult to parse because the authors introduce a lot of equations and notions that are not needed to understand their algorithm/method. Equation (6) seems wrong to me (one should consider t -> s_i(-t) and not t -> s_i(t), and b seems to be missing in the second line).
- Figure 2 is difficult to read because of the illustrative graphics. Maybe a block schema would be easier to parse.
- It seems to me that the group properties, such as stability under composition, are used nowhere. In this paper, the authors simply try to parametrize a diffeomorphism to dilate the mother wavelet. From my understanding of 3.3, the subset of functions used to approximate $G_{inc}$ does not form a subgroup either, contrary to the Euclidean case, where for instance discrete rotations in the case of images form a finite group.
- In subsection 4.1 (Table 1), a wavelet transform followed by a linear operator is compared with the proposed method. I find this result surprising:
LGT/nLGT/cLGT/cnLGT and the WT are linear methods, whereas the STFT is non-linear. As the WT should be unitary, if the linear classifier method is reasonably trained, then both methods should lead to the same result, except if the data are poorly conditioned, in which case this experiment would not be meaningful. I think the authors should comment more on this result because it is surprising.
- I slightly disagree with the sentence "in the case of WT, the precision in frequency degrades as the frequency increases". Actually, the Heisenberg principle is optimized by wavelets, meaning that the area of the frequency/spatial support on a spectrogram is constant. On the contrary, the STFT has a lack of localisation (and thus the "precision" is not constant across frequencies). Maybe this could be rephrased slightly.
- Given the filter learned in Figure 5, one can wonder whether a foveal approach (i.e., Foveal Wavelets) could perform similarly. It would be interesting to display the Littlewood-Paley plot (i.e., the sum of the moduli of the filters in the Fourier domain) of this representation to understand the nature of this operator in the Fourier domain.
- I think group actions could be considered instead of representations: it would be simpler to understand for a potential reader.
Typos:
- abstract "in order to transform the mother .." > "in order to transform a mother.."
- page 7 "this variation is as not captured as well" > "this variation is not captured as well". |
iclr_2020_BJxWx0NYPr | Many real-world data sets are represented as graphs, such as citation links, social media, and biological interaction. The volatile graph structure makes it non-trivial to employ convolutional neural networks (CNN's) for graph data processing. Recently, graph attention network (GAT) has proven a promising framework by combining graph neural networks with attention mechanism, so as to achieve message passing in graphs with arbitrary structures. However, the attention in GAT is computed mainly based on the similarity between the node content, while the structures of the graph remain largely unemployed (except in masking the attention out of one-hop neighbors). In this paper, we propose an "adaptive structural fingerprint" (ADSF) model to fully exploit both topological details of the graph and content features of the nodes. The key idea is to contextualize each node with a weighted, learnable receptive field encoding rich and diverse local graph structures. By doing this, structural interactions between the nodes can be inferred accurately, thus improving subsequent attention layer as well as the convergence of learning. Furthermore, our model provides a useful platform for different subspaces of node features and various scales of graph structures to "cross-talk" with each other through the learning of multi-head attention, being particularly useful in handling complex real-world data. Encouraging performance is observed on a number of benchmark data sets in node classification. | ---- Problem setting and contribution summary ----
The paper considers the problem of graph node classification in a semi-supervised learning setting. When classifying a node, the decision is based on the node's features, as well as on a weighted combination of its neighbors (where the weights are computed using a learnt attention mechanism). The authors build upon the recent Graph Attention Networks (GAT) paper by proposing a different way of computing the attention over the neighboring nodes. This new attention mechanism takes into account not just their feature similarity, but also extra structure information, which also enables their method to attend not only over direct neighbors, but also over up to k-hop neighbors.
---- Overall opinion ----
While I believe the general idea indeed has merit and empirically shows great promise, I believe the paper in its current state is not ready for publication. However, I believe that a more thorough revision can lead to a publication with potential impact on the applications side.
---- Pros ----
1. The paper is easy to read.
2. I really appreciated the good visualizations (Figures 2,3,4) that indeed help in understanding the method.
3. Really good empirical results on the 3 datasets that were presented.
---- Major issues ----
1. Motivation:
I believe the paper is not well motivated from an applications perspective. In Section 2.1, the authors did a good job explaining the limitations of current approaches on a generic graph structure, but this is only under the main assumption that a node should attend more to neighbors in its denser community than to other neighbors that are not connected so strongly (Fig. 1). The issue that I have with this is that:
a) Why should we take this assumption for granted? What are some concrete practical node classification problems where it is indeed better if a node attends to its neighbors in this way (as opposed to the GAT approach)?
b) Even if the above is proven true, suppose node A in Fig. 1(a) attends with equal weights to all its 4 neighbors. That means node C (which is outside its densest community) gets 1/4, while the nodes inside the dense community get a total of 3/4. That means node A puts most of its attention on the dense community anyway. Under what conditions is it necessary to bias this attention even further?
2. Experiments:
While the reported accuracies for the three datasets look good, and the authors have provided a link to their code (great to see that!), I believe the experimental section is missing a significant amount of detail, both for reproducibility purposes and for explaining how certain parameters have been chosen:
a) There are no details on the model size and training procedure (hidden units, optimizer, learning rate schedule).
b) Maybe I am missing this, but I don’t see any reference on what alpha and beta from equation (6) were used in the experiments.
c) What is the value of k in Table 2? How did you choose it? I believe Figure 5 shows test accuracies for different k, but I hope the authors did not choose k based on the test set performance.
d) How did you select the structural fingerprint to be 3?
e) “...optimizing c through the learning process also gives very similar choice” → how did you optimize c exactly?
f) Fig 5 a) Why does increasing the number of hops to 3 or 4 decrease the performance so much? Shouldn’t the attention weights learn to ignore the further neighbors, if they are not useful?
Another important question regarding experiments: since the ablation study shows that the optimal neighbor range is actually 2, a natural baseline to compare with would be something similar to GAT, where an attention weight is applied to all neighbors within two hops (basically skip the fingerprint step, and assume s_{ij} is 1 for all neighbors within 2 hops, and 0 otherwise).
Also regarding experiments: these 3 datasets, although common across many recent graph node classification papers, are known to be quite limited (small in size and not very representative of the real world). Since GAT is your main competitor, why not also show experiments on the PPI dataset they also test on?
3. Writing quality:
While the language is clear and easy to follow, there are many grammatical mistakes throughout the paper (e.g. “benefitial”, subject-verb agreement).
---- More minor issues ----
a) Section 2.1: One could argue that GAT also captures longer-range node dependencies through the node embeddings it learns. Since the node embeddings are trained through gradient descent, and at each iteration the embedding of a node changes according to its neighbors, you could say that information does get propagated from the neighbors' neighbors.
b) Why do you need a LeakyReLU in Equation (5)? Also, aren't the e_{ij} non-negative anyway (in which case the LeakyReLU doesn't change anything)?
c) Why would a Sigmoid be a good choice for alpha and beta in Eq. (6)?
d) Section 3: A bag of words is typically represented as a binary vector, which is also categorical.
e) Please use \citet when specifically referring to the authors of a paper as part of your sentence (e.g. “Following Velickovic et. al., 2017 we….” as opposed to “Following (Velickovic et al., 2017), we....”). |
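To make the discussion of Equations (5)-(6) easier to follow, here is a small NumPy sketch of GAT-style content attention combined with a structural-interaction score and mixed by scalars alpha and beta. The "fingerprint" used here is just a decayed two-hop weight vector compared via a weighted-Jaccard overlap, an illustrative stand-in for the paper's learned adaptive fingerprints, and the fixed alpha/beta stand in for the sigmoid-gated values questioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, dh = 6, 8, 4

A = (rng.uniform(size=(n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                          # random undirected graph
H = rng.normal(size=(n, d))                          # node features
W = rng.normal(size=(d, dh))
a = rng.normal(size=(2 * dh,))

def fingerprint(A, i, decay=0.5):
    """Toy structural fingerprint: weight 1 for 1-hop, `decay` for 2-hop nodes."""
    one_hop = A[i] > 0
    two_hop = ((A @ A)[i] > 0) & ~one_hop
    f = np.zeros(A.shape[0])
    f[one_hop], f[two_hop], f[i] = 1.0, decay, 1.0
    return f

def structural_score(fi, fj):
    return np.minimum(fi, fj).sum() / (np.maximum(fi, fj).sum() + 1e-8)

Z = H @ W
alpha, beta = 0.5, 0.5                               # fixed here; learned/gated in the paper
att = np.full((n, n), -np.inf)
for i in range(n):
    fi = fingerprint(A, i)
    for j in np.flatnonzero(fi):                     # attend over the receptive field
        e = np.concatenate([Z[i], Z[j]]) @ a         # GAT content score
        e = np.where(e > 0, e, 0.2 * e)              # LeakyReLU
        s = structural_score(fi, fingerprint(A, j))  # structural interaction
        att[i, j] = alpha * e + beta * s

row = np.exp(att[0] - att[0][np.isfinite(att[0])].max())
row[~np.isfinite(att[0])] = 0.0
print(row / row.sum())                               # attention weights of node 0
```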
iclr_2020_ByxY8CNtvr | Recent Transformer-based models such as Transformer-XL and BERT have achieved huge success on various natural language processing tasks. However, contextualized embeddings at the output layer of these powerful models tend to degenerate and occupy an anisotropic cone in the vector space, which is called the representation degeneration problem. In this paper, we propose a novel spectrum control approach to address this degeneration problem. The core idea of our method is to directly guide the spectra training of the output embedding matrix with a slow-decaying singular value prior distribution through a reparameterization framework. We show that our proposed method encourages isotropy of the learned word representations while maintains the modeling power of these contextual neural models. We further provide a theoretical analysis and insight on the benefit of modeling singular value distribution. We demonstrate that our spectrum control method outperforms the state-of-the-art Transformer-XL modeling for language model, and various Transformer-based models for machine translation, on common benchmark datasets for these tasks. | Summary: This paper deals with the representation degeneration problem in neural language generation, as some prior works have found that the singular value distribution of the (input-output-tied) word embedding matrix decays quickly. The authors proposed an approach that directly penalizes deviations of the SV distribution from the two prior distributions and adds a few other auxiliary losses on the orthogonality of U and V (which are now learnable). The experiments were conducted on small and large scale language modeling datasets as well as the relatively small IWSLT 2014 De-En MT dataset.
Pros:
+ The paper is well-written with great clarity. The dimensionality of the involved matrices (and their decompositions) are clearly provided, and the approach is clearly described. The authors also did a great job providing the details of their experimental setup.
+ The experiments seem to show consistent improvements over the baseline methods (at least the ones listed by the authors) on a relatively extensive set of tasks (e.g., of both small and large scales, and of two different NLP tasks). Via WT2 and WT103, the authors also showed that their method works on both LSTMs and Transformers (which it should, as the SVD of the word embedding should be independent of the underlying architecture).
+ I think studying the expressivity of the output embedding matrix layer is a very interesting (and important) topic for NLP. (e.g., While models like BERT are widely used, the actual most frequently re-used module of BERT is its pre-trained word embeddings.)
---------------------------------
I have a few questions/comments on the work as well:
1) One of the things that is not clearly described in the paper is how the proposed spectrum control method was injected into training in practice. In Section 4.3 (when you do the theoretical analysis), you assumed "all the other parameters are fixed and well-optimized". Is that also what you did in training on WT2/WT103 (e.g., first pre-train a model such as Transformer-XL, and then fine-tune its embedding layer using the proposed decomposed method)?
2) How does the runtime and memory cost of your approach compare to the baselines? (e.g., you now need to compute $U^\top U$, which can also be prohibitively large when the vocabulary size is large; for instance, on the 1-Billion word dataset).
3) I remember the base Transformer-XL model did not use the adaptive embedding/softmax (although they set the `--adaptive` flag, they did use `--div_val 1`). How does your method work in the adaptive setting, where the embedding size $d$ of different words could be very different (e.g., depending on the word frequency)?
4) Regarding the theoretical analysis in Section 4.3. Why is the cross-entropy loss function (essentially NLL loss + softmax) bounded (as is required by Theorem 4.1)? Given a fixed ground-truth $y_i$, if the corresponding predicted likelihood is small (e.g., $\rightarrow 0$), won't the CE loss be very large? In that case, the generalization bound in Eq. (4.3) would be vacuous, as B would be very large too. Moreover, I'm also not fully convinced by what the theoretical analysis is trying to convey--- that your method "achieve a trade-off between the training loss and generalization error"--- but isn't that what (any) regularizations are designed for? The proved bound is no different in nature from any generalization bound with a regularizer (e.g., weight decay), and it does not necessarily reflect the usefulness of the proposed approach.
5) In experiments, you only showed the better performance of the exponential and the polynomial singular value decays (cf. beginning of Sec. 5). Could you show both? Which one is better (and on what task), and by how much? If one is to use your method, which decay scheme do you recommend?
6) Why is MLE-CosReg (Gao et al. 2019b) not compared to in the WikiText-103 and the machine translation task? As the MLE-CosReg approach only involves regularizing the cosine similarity via $\text{Sum}(\hat{W}\hat{W}^\top)$, it should computationally be even slightly cheaper than the proposed method (you need to compute $\mathbf{U}^\top \mathbf{U}$, which has the same complexity). They also tested on the larger-scale WMT En-De and De-En dataset (which contains 4.5M sentence pairs). Is there any reason that you chose IWSLT 2014 instead?
---------------------------------
Some issues that didn't impact the score:
7) It'd be useful to add labels to the x- and y-axis of the plots in Figure 1.
8) When implementing the method described in Section 4.2, did you explicitly sort the singular values in each iteration, or just set them to learnable parameters (along with learnable $\mathbf{U}, \mathbf{V}$) without sorting? If you do sort, do you also "sort" the columns of $\mathbf{U}$ and $\mathbf{V}$ (as I would expect a one-to-one mapping from $\sigma_i$ to $\mathbf{U}_i$, for instance)? If you don't sort, how do you make sure that $\sigma_i \geq \sigma_{i+1}$?
9) Why did you regularize $\mathbf{U}$ and $\mathbf{V}$ by both its Frobenius norm and its spectral norm? Does using only one of them compromise the performance?
---------------------------------
Overall, I find this work well-written and well-motivated. The experiments seem to show consistent improvement when using the approach. I vote for weak accept, but I also look forward to the author's response to the questions I raised above. |
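For readers unfamiliar with the reparameterization being reviewed, a compact NumPy sketch is given below: the embedding matrix is factored as W = U diag(sigma) V^T, the singular values are penalized for deviating from a slowly decaying prior (an exponential decay, one of the two options mentioned), and U and V are softly pushed toward orthonormality as one way to realize the auxiliary losses the review mentions. All constants are illustrative guesses, not the paper's hyper-parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 1000, 64

# Learnable factors of the (tied) word embedding matrix W = U diag(sigma) V^T.
U = rng.normal(scale=1.0 / np.sqrt(vocab), size=(vocab, d))
V = rng.normal(scale=1.0 / np.sqrt(d), size=(d, d))
sigma = np.sort(rng.uniform(0.5, 2.0, size=d))[::-1]   # current singular values

# Slow-decaying prior on the spectrum (exponential decay; a polynomial
# decay c * k**(-p) would be the other option discussed).
k = np.arange(1, d + 1)
prior = sigma[0] * np.exp(-0.02 * (k - 1))

W = U @ np.diag(sigma) @ V.T

# Regularizers added to the usual language-modeling loss.
spectrum_penalty = np.sum((sigma - prior) ** 2)
orth_U = np.linalg.norm(U.T @ U - np.eye(d), "fro") ** 2
orth_V = np.linalg.norm(V.T @ V - np.eye(d), "fro") ** 2
reg = spectrum_penalty + 0.1 * (orth_U + orth_V)
print(W.shape, round(reg, 2))
```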
iclr_2020_Skg9aAEKwH | We train embodied agents to play Visual Hide and Seek where a prey must navigate in a simulated environment in order to avoid capture from a predator. We place a variety of obstacles in the environment for the prey to hide behind, and we only give the agents partial observations of their environment using an egocentric perspective. Although we train the model to play this game from scratch, experiments and visualizations suggest that the agent learns to predict its own visibility in the environment. Furthermore, we quantitatively analyze how agent weaknesses, such as slower speed, affect the learned policy. Our results suggest that, although agent weaknesses make the learning problem more challenging, they also cause more useful features to be learned. | # Review ICLR20, Visual Hide and Seek
This review is for the originally uploaded version of this article. Comments from other reviewers and revisions have deliberately not been taken into account. After publishing this review, this reviewer will participate in the forum discussion and help the authors improve the paper.
## Overall
**Summary**
The authors introduce a new RL environment and task, "Visual Hide and Seek", in which they analyze how the agent's learned visual representations are impacted by its speed, auxiliary rewards, and opponent behavior.
**Overall Opinion**
This paper presents a thorough analysis and great visualizations of agent behaviors and representations under different conditions. I wish more papers would put this much effort into analyzing their agents. I'd highly recommend this paper get accepted since I believe the analysis carried out here and the conclusions reached are quite novel and the paper is overall well-written.
However, at the same time, the work of [Baker et al., 2019][1] was published with significantly more fanfare. I hope their work does not overshadow this one since they are only related in the general task concept.
[1]: https://arxiv.org/abs/1909.07528
Some major issues I had with this work:
- In general, please run more random seeds. Just reporting on a single random seed is not enough, as per [Henderson et al., 2018][2].
- There are some sections of the paper where the order of paragraphs is confusing. You start the introduction by stating what you've done and letting the reader wonder "why?". The explanation is only given in the second paragraph. So I'd suggest rotating the second paragraph upwards before the first. Similarly, at the beginning of section 4, you just mention the results - this should either be shorter (1 sentence, as an overview of the work in this section) or moved to the end of that section.
- You're missing a section about future work and flaws/problems of your work at the very end (the latter of which should be in "Discussion"), which is common to include in ICLR publications.
[2]: https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewFile/16669/16677
Here are some minor...
## Specific comments and questions
### Abstract
all good
### Intro
- Fig.1: What is that white thing that the seeker/hider have on the capsule?
### Rel. Work
all good
### Method
- What's the speed (FPS) of that Unity engine? Why didn't you use MuJoCo/(Py)Bullet/Gym-Miniworld?
- You need to add some measurements and units: the arena size doesn't have a unit, the size (diameter) of the hiders/seekers is unclear, turning left/right is unclear (how far left/right, after action-repeat)
- You mention any real-valued position to be valid - so the agents can step into obstacles? And how about through obstacles?
- "Affordance Learning" is usually not used for static environment geometry like obstacles (e.g. [Georgia-Tech course on "Human-Robot Interaction"][3])
- Why 4-layer CNNs? Why not 6 or 8 or a ResNet? Would you think the features would be stronger/weaker in an 8-layer CNN?
[3]: https://www.cc.gatech.edu/~athomaz/classes/CS8803-HRI-Spr08/MayaChandan/Site/Affordance_Learning.html
### Experiments
- 4.1 "... learned this play game" -> "... learned this game".
- 4.2 "mid-level features" -> what's that? The activation of the convolutional kernels after the second layer CNN? What's the dimension? And why did you pick the 2nd layer, not any of the other 3?
- Tab.3: This is averaged over how many frames of rollout?
- "... case can moves a lot faster." -> "... case can move a lot faster".
- Fig.2 is very interesting. Well done.
- Fig.3: the font is not consistent with other figures
- Fig.4: remove the blueish background to increase contrast. Increase the font size of the ticks on the left. Make the legend color boxes slightly bigger. Add more space or a visual divider between the different states - especially on the right side it's hard to make out where one stops and the next begins.
- Fig.4/5: This analysis is lovely and we need more of this in DRL.
- 4.4 In the text, you sometimes write "not S" and sometimes "¬S". Please change the "not S" occurrences to "¬S" for consistency.
- Fig.5: (suggestion) Merge/sum the 2 columns (in both A/B merge the left and right plot into one by summing); subtract the random policy values as you did with Fig.4.
- "We summarize representative cases, and put the full results for all combinations in the Appendix" - no you didn't.
- Fig.6: What is going on in the left third of this diagram? What is this colorful mush? If this is by any chance indicating a change over time, do you maybe want to spread a single, very colorful plot of distance over time into multiple less colorful plots? At least add a legend, please. Also, I'd recommend smoothing (moving average or smoothing spline).
- Also maybe add reward over time plots, as is common in DRL, to show that your policies converged after 8 mil. steps.
### Conclusion
All good, save for the missing future work and critical analysis of your work.
### Appendix
all good |
iclr_2020_HkgH0TEYwH | Deep approaches to anomaly detection have recently shown promising results over shallow methods on large and complex datasets. Typically anomaly detection is treated as an unsupervised learning problem. In practice however, one may have-in addition to a large set of unlabeled samples-access to a small pool of labeled samples, e.g. a subset verified by some domain expert as being normal or anomalous. Semi-supervised approaches to anomaly detection aim to utilize such labeled samples, but most proposed methods are limited to merely including labeled normal samples. Only a few methods take advantage of labeled anomalies, with existing deep approaches being domain-specific. In this work we present Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection. Using an information-theoretic perspective on anomaly detection, we derive a loss motivated by the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution. We demonstrate in extensive experiments on MNIST, Fashion-MNIST, and CIFAR-10, along with other anomaly detection benchmark datasets, that our method is on par or outperforms shallow, hybrid, and deep competitors, yielding appreciable performance improvements even when provided with only little labeled data. | [Summary]
The paper proposes an anomaly detection (AD) framework under general settings where 1) unlabeled data, 2) labeled positive (normal) data, and 3) labeled negative (anomalous) data are available (with the last two optional), denoted as semi-supervised AD. Starting from the assumption that anomalous data are sampled from an unpredictable background distribution, rather than from a "cluster" assumption, it is argued that a conventional discriminative formulation is not applicable. Motivated by recent deep AD methods (e.g., Deep SVDD), the paper proposes to approach semi-supervised AD from an information-theoretic perspective where 1) the mutual information between the raw data and the learnt representation should be maximized (infomax principle), 2) the entropy of labeled positive data should be minimized ("compactness" constraint), and 3) the entropy of labeled negative data should be maximized to reflect the uncertainty assumption on anomalies. The solution is implemented by the encoder of a pre-trained autoencoder that is further fine-tuned to enforce the entropy assumptions on all types of training data. Extensive experiments on benchmarks suggest promising results for the proposed framework versus other state-of-the-art methods.
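For concreteness, the kind of objective summarized above can be sketched as follows. This is my own illustration in the style of the Deep SVDD line of work the paper builds on; the specific weighting and the inverse-distance handling of labeled anomalies are assumptions, not a quote of the paper's loss.
```python
import torch

def semi_supervised_ad_loss(z_unlab, z_lab, y_lab, c, eta=1.0, eps=1e-6):
    """z_*: encoder outputs; c: fixed center in latent space;
    y_lab: +1 for labeled normal samples, -1 for labeled anomalies."""
    d_unlab = ((z_unlab - c) ** 2).sum(dim=1)        # pull unlabeled data toward c
    d_lab = ((z_lab - c) ** 2).sum(dim=1) + eps
    # y = +1 keeps the distance (compactness, i.e. low entropy for normal data);
    # y = -1 inverts it, pushing labeled anomalies away from c (high entropy).
    return d_unlab.mean() + eta * (d_lab ** y_lab.float()).mean()
```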
[Comments]
The paper is well written and easy to follow (the presentation is especially pleasant to read). The problem is well defined and of interest to the community under fairly general and practical conditions. Despite the fact that the implementation is only marginally tweaked from previous work (Deep SVDD), the theoretical motivation is nevertheless sound and well justified, and the empirical evaluation is extensive enough to reveal the behaviors of the proposed method. It would be better if a complexity analysis could also be provided for all the methods concerned. Overall, the paper is worth circulating in the community.
[Area to improve]
The manuscript could be further improved by exploring the training process more. In its current form, the solution follows the strategy of Deep SVDD, which learns the model in two separate stages: pre-training the autoencoder, and then fitting the encoder to enforce compactness and entropy minimization/maximization. What if these were implemented in an end-to-end fashion? Would this help achieve a better result?
iclr_2020_B1gZV1HYvS | In multi-agent systems, complex interacting behaviors arise due to heavy correlations among agents. However, prior works on modeling multi-agent interactions from demonstrations have largely been constrained by assuming the independence among policies and their reward structures. In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents' policies, which is able to recover agents' policies that can regenerate similar interactions. Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL), which allows for decentralized training and execution. Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators and outperforms state-of-the-art multi-agent imitation learning methods. | This paper proposes to model interactions in a multi-agent system by considering correlated policies. In order to do so, the work modifies the GAIL framework to derive a learning objective. Similar to GAIL, the discriminator distinguishes between state, action, next state sequences but crucially the actions here are considered for all agents.
The paper is a natural extension of GAIL/MA-GAIL. I have two major points that need to be addressed.
1. The exposition and significance of some of the theoretical results is unclear.
- The non-correlated and correlated expressions in the 2nd and 3rd lines of Eq. 8 are not equivalent in general, yet they are connected via an equality.
In particular, Proposition 2 considers an importance weighting procedure to reweight state, action, next state triplets. It is unclear how this resolves the shortcomings of pi_E^{-1} being inaccessible. Prop 2 shifts from pi_E^{-1} to pi^{-1} and hence, the expectations in Prop 2 and Eq. 11 are not equivalent.
- More importantly, how are the importance weights estimated in Eq. 12? The numerator requires pi_E^{-1}, which is not accessible. If the numerator and denominator are estimated separately, it becomes a chicken-and-egg problem, since the denominator is itself intended to imitate the expert policy appearing in the numerator.
2. Missing related work
There is a large body of relevant work missing on multi-agent interaction modeling and generative modeling. [1, 2] consider modeling of agent interactions via imitation learning and a principled evaluation framework for generalization in the Markov games setting. By sharing parameters, they are also able to model correlations across agent policies and have strong results on generalization to cooperation/competition with unseen agents with similar policies (which wouldn't have been possible if correlations were not modeled). Similarly, [3, 4] are other related works which consider modeling of other agents' interactions/diverse behaviors via imitation-style approaches. Finally, the idea of correcting for the mismatch in state, action, next state triplets in Proposition 2 has been considered for model-based off-policy evaluation in [5]. They proposed a likelihood-free method to estimate importance weights, which seems like it might be necessary for this task as well (re: the question above on how the importance weights are estimated).
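To make the suggestion regarding [5] concrete, here is a minimal sketch of the likelihood-free density-ratio trick (names and shapes are hypothetical): train a binary classifier on balanced batches to distinguish expert triplets (label 1) from policy triplets (label 0); its odds then estimate the required ratio without evaluating either density explicitly.
```python
import torch

def importance_weights(classifier, s, a, s_next, eps=1e-6):
    # classifier outputs P(expert | s, a, s'); with balanced training data,
    # the odds D / (1 - D) estimate p_expert(s, a, s') / p_policy(s, a, s').
    d = classifier(torch.cat([s, a, s_next], dim=-1)).clamp(eps, 1 - eps)
    return d / (1 - d)
```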
Re: experiments. Results look good and convincing for the most part. I don't see much value in the qualitative evaluation in Figure 1. If the KL divergence is low, we can expect the marginals to be better estimated. Trying out various levels of generalization as proposed in [2] would significantly strengthen the paper.
Typos
sec 2.1 Transition dynamics should have range in R+
Proof of Prop 2. \mu instead of u
References:
[1] Learning Policy Representations in Multiagent Systems. ICML 2018.
[2] Evaluating Generalization in Multiagent Systems using Agent-Interaction Graphs. AAMAS 2018.
[3] Machine Theory of Mind. ICML 2018.
[4] Robust imitation of diverse behaviors. NeurIPS 2017.
[5] Bias Correction of Learned Generative Models using Likelihood-free Importance Weighting. NeurIPS 2019. |
iclr_2020_HylxE1HKwS | We address the challenging problem of efficient deep learning model deployment, where the goal is to design neural network architectures that can fit different hardware platform constraints. Most of the traditional approaches either manually design or use Neural Architecture Search (NAS) to find a specialized neural network and train it from scratch for each case, which is computationally expensive and unscalable. Our key idea is to decouple model training from architecture search to save the cost. To this end, we propose to train a once-for-all network (OFA) that supports diverse architectural settings (depth, width, kernel size, and resolution). Given a deployment scenario, we can then quickly get a specialized sub-network by selecting from the OFA network without additional training. To prevent interference between many sub-networks during training, we also propose a novel progressive shrinking algorithm, which can train a surprisingly large number of sub-networks (> 10^19) simultaneously. Extensive experiments on various hardware platforms (CPU, GPU, mCPU, mGPU, FPGA accelerator) show that OFA consistently achieves the same level (or better) ImageNet accuracy than SOTA NAS methods while reducing orders of magnitude GPU hours and CO2 emission than NAS. In particular, OFA requires 16× fewer GPU hours than ProxylessNAS, 19× fewer GPU hours than FBNet and 1,300× fewer GPU hours than MnasNet under 40 deployment scenarios. | In this paper, the authors learn a Once-for-all net. This starts as a big neural network which is trained normally (albeit with input images of different resolutions). It is then fine-tuned while sampling sub-networks with progressively smaller kernels, then lower depth, then width (while still sampling larger networks occasionally, as it reads). This results in a network from which one can extract sub-networks for various resource constraints (latency, memory etc.) that perform well without a need for retraining.
This paper is well written, and the results are very good. However there are serious problems that need addressing.
The method as described *is not reproducible*. The scheduling of sampling subnetworks is alluded to on page 4, and that's it. It is essential that the authors include their exact subnet sampling schedule e.g. as pseudocode with hyperparameters. There is no point doing good work if other researchers cannot build off it.
On another reproducibility note, as far as I can tell, the original model isn't given. There would be no harm in adding this to the appendix.
Figure 1 is misleading, as we don't find out until later in the paper that Once For All #25 means that each of these points was finetuned for a further 25 epochs (which on ImageNet is non-trivial). This defeats the narrative of the paper (once-for-all plus some fine-tuning isn't exactly once-for-all).
Is there a reason why the progressive shrinking goes resolution->kernel->depth->width? Was this just the permutation that worked best? I would be curious as to why this is.
For elastic width, I wasn't sure why the "channel sorting operation preserves the accuracy of larger sub-networks". Could you please elaborate?
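For what it's worth, my guess (an assumption on my part, since the paper does not spell this out) is that output channels are ranked by an importance measure such as the L1 norm of their weights, so that a width-shrunk sub-network keeps the most important channels while the full-width network is merely permuted:
```python
import torch

def sort_channels_by_importance(conv_weight):
    """conv_weight: (out_channels, in_channels, k, k). Reorder output channels
    by descending L1 importance; a sub-network of width w then keeps the first
    w channels, while the full network is only permuted, not altered."""
    importance = conv_weight.abs().sum(dim=(1, 2, 3))
    order = torch.argsort(importance, descending=True)
    return conv_weight[order], order
```
If that is indeed the mechanism, stating it explicitly (and ablating the choice of importance measure) would answer my question.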
Kudos on adding CO2 emissions in Table 2, I hope this gets reported more often.
In the introduction, the authors talk about iPhones and then the hardware considered is Samsung and Google. A minor note, but it seems inconsistent.
Another minor note, in Table 2, (Strubell et al) should be out of the brackets, as it is part of the sentence.
Given that there are 10^19 subnetworks that can be sampled, it would be nice to see more than 3-4 appear on a plot. This makes it seem like they might have been cherry-picked. Sampling a few hundred or thousand subnets and producing some Pareto curves would be both interesting and insightful.
Pros
-------
- Good results
- Well written
- Neat idea
Cons
-------
- Training details are obfuscated. This paper should not be accepted without them.
- Very few subnetworks of the vast quantity that exist are observed.
In conclusion, I am giving this paper a weak reject, as it is currently impossible to reproduce, and as such, is of no use to the community. If the authors remedy this I will gladly raise my score. |
iclr_2020_HJlfuTEtvB | Program verification offers a framework for ensuring program correctness and therefore systematically eliminating different classes of bugs. Inferring loop invariants is one of the main challenges behind automated verification of real-world programs which often contain many loops. In this paper, we present Continuous Logic Network (CLN), a novel neural architecture for automatically learning loop invariants directly from program execution traces. Unlike existing neural networks, CLNs can learn precise and explicit representations of formulas in Satisfiability Modulo Theories (SMT) for loop invariants from program execution traces. We develop a new sound and complete semantic mapping for assigning SMT formulas to continuous truth values that allows CLNs to be trained efficiently. We use CLNs to implement a new inference system for loop invariants, CLN2INV, that significantly outperforms existing approaches on the popular Code2Inv dataset. CLN2INV is the first tool to solve all 124 theoretically solvable problems in the Code2Inv dataset. Moreover, CLN2INV takes only 1.1 second on average for each problem, which is 40× faster than existing approaches. We further demonstrate that CLN2INV can even learn 12 significantly more complex loop invariants than the ones required for the Code2Inv dataset. | Summary:
This paper introduces a novel way to find loop invariants for a program. The loop invariants are expressed as SMT formulas. A continuous relaxation of an SMT solver is introduced, mapping every SMT formula onto a truth value \in [0, 1]. This relaxation, called a continuous semantic mapping, is designed such that every true formula has a higher value than every false formula. This allows an invariant to be learned.
Novelty and Significance:
This work is interesting and, although the authors seem to come from outside the community, I firmly believe it is appropriate for ICLR. If the claims made by the paper are true, they constitute a significant contribution to the field of program synthesis and program analysis.
Technical Quality:
The evaluation was fairly thorough, but the paper could be strengthened massively with a few small changes and additions. It would be more helpful if there were a sense of how many problems none of the systems can solve, and of how complex a program this system can extract a loop invariant from. Why was it these particular 12 that worked? Are there examples that don't?
I don't know why all this time is spent on t-norms when the behavior across them is fairly similar and the simplest norm works best.
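To illustrate the point: the three standard t-norms differ only mildly on typical inputs. A minimal sketch (my own illustration, not necessarily the exact relaxation used in the paper) of how a relaxed conjunction of two atoms, each already mapped to [0, 1], would be scored:
```python
# Product, Goedel (min), and Lukasiewicz t-norms for a relaxed conjunction;
# the corresponding t-conorms would handle disjunction.
def t_product(a, b):
    return a * b

def t_goedel(a, b):
    return min(a, b)

def t_lukasiewicz(a, b):
    return max(0.0, a + b - 1.0)

# e.g. the truth value of (x >= 0) AND (x <= n) with atom scores 0.9 and 0.8:
print(t_product(0.9, 0.8), t_goedel(0.9, 0.8), t_lukasiewicz(0.9, 0.8))
# -> approximately 0.72, 0.8, 0.7: close enough that large downstream differences would surprise me.
```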
Lots of details are missing in this paper. How much training data was generated? How long did it take? Does training data need to be generated for each example? If so, is that included in the runtime for Figure 3?
The paper talks about a neural architecture, but all I see is effectively a curve-fitting task for some template. This feels different from the code2inv paper, where a program can be fed into the system and the pretrained model emits the invariant.
Clarity:
Not enough of this paper concentrates on the novel aspects of the approach. Section 5 discusses template generation, but not in enough detail that I would be able to replicate this work. I also could not find enough details in the Appendix.
I don't know why vital page space is spent on defining completeness and soundness.
Possibly related work, as relaxations of SAT/SMT solvers do exist in the literature:
Guiding High-Performance SAT Solvers with Unsat-Core Predictions
https://arxiv.org/abs/1903.04671
Learning to Solve SMT Formulas
https://papers.nips.cc/paper/8233-learning-to-solve-smt-formulas.pdf
Notes:
You cite Si et al. for LoopInvGen in the third paragraph on page 2, when you should be citing Padhi and Millstein.
iclr_2020_H1lefTEKDS | Model-based reinforcement learning (MBRL) is widely seen as having the potential to be significantly more sample efficient than model-free RL. However, research in model-based RL has not been very standardized. It is fairly common for authors to experiment with self-designed environments, and there are several separate lines of research, which are sometimes closed-sourced or not reproducible. Accordingly, it is an open question how these various existing algorithms perform relative to each other. To facilitate research in MBRL, in this paper we gather a wide collection of MBRL algorithms and propose over 18 benchmarking environments specially designed for MBRL. We benchmark these algorithms with unified problem settings, including noisy environments. Beyond cataloguing performance, we explore and unify the underlying algorithmic differences across MBRL algorithms. We characterize three key research challenges for future MBRL research: the dynamics bottleneck, the planning horizon dilemma, and the early-termination dilemma. Finally, to facilitate future research on MBRL, we open-source our benchmark 1 . | This paper performs an empirical comparison of a number of model-based RL (MBRL) algorithms over 18 benchmarking environments. The authors propose a set of common challenges for MBRL algorithms.
I appreciate the effort put into this evaluation and I do think it helps the community gain a better understanding of these types of algorithms. My main issue with the paper is that I don't find the evaluation thorough enough (for instance, no tabular environments are evaluated) and the writing still needs quite a bit of work. I encourage the authors to continue this line of work and improve on what they have for a future submission!
More detailed comments:
- Intro: "model-free algorithms ... high sample complexity limits largely their application to simulated domains." I'm not sure this is a fair criticism. Model-based are also mostly run on simulations, so sample efficiency is not necessarily the cause model-free are only run on simulations. Further, this statement kind of goes against your paper, since all your evaluations are on simulations!
- Intro: "2) Planning horizon dilemma: ... increasing planning horizon... can result in performance drops due to the curse of dimensionality..." and "similar performance gain[sic] are not yet observed in MBRL algorithms, which limits their effectiveness in complex environments" goes against the point you made earlier on in the intro about sample efficiency.
- Preliminaries: "In stochastic settings, it is common to represent the dynamics with a Gaussian distribution", this is only for continuous states. It would be nice if you could evaluate tabular environments as well.
- Sec 4.1: "we modify the reward funciton so that the gradient... exists..." which environments were modified and how did they have to be modified?
- Sec 4.1: You discuss early termination but have not defined what exactly you mean by it.
- Fig 1: 12 curves is still a lot and really hard to make much sense of. A lot of the colors are very similar.
- Sec 4.1: "it takes an impractically long time to train for 1 million time-steps for some of the MBRL algorithms" why?
- Table 1 is a *lot* of numbers and colors and is really hard to make much sense of. There are also so many acronyms on the LHS it's difficult to keep track.
- Table 2: What about memory usage?
- Sec 4.4: "Due to the limited exploration in baseline, the performance is sometimes increased after adding noise that encourages exploration." Why does this noise not help exploration in the baselines?
- Sec 4.5: "This points out that when learning models more data does not result in better performance." This seems more closely tied to the particular form chosen for the model parameterization than to anything else.
- Fig 3: y-axis says "Relative performance", relative to what?
- Sec 4.7: "Early termination...is a standard technique... to prevent the agent from visiting unpromising states or damaging states for real robots." I've never seen this used as a justification for this.
- Sec 4.7: "Table 1, indicates that early termination does in fact decrease the performance for MBRL algorithms", and then you say "We argue that to perform efficient learning in complex environments... early termination is almost necessary." Those two statements contradict each other.
Minor comments to improve writing:
- When a citation is not used as a noun in the sentence, use \citep so you get "success in areas including robotics (Lillicrap et al., 2015)" as opposed to "success in areas including robotics Lillicrap et al. (2015)" (the latter is what you have all throughout the paper).
- Sec 3.1: s/This class of algorithms learn policies/This class of algorithms learns policies/
iclr_2020_r1lnxTEYPS | Recent work has shown that deep generative models can assign higher likelihood to out-of-distribution data sets than to their training data (Nalisnick et al., 2019; Choi et al., 2019). We posit that this phenomenon is caused by a mismatch between the model's typical set and its areas of high probability density. In-distribution inputs should reside in the former but not necessarily in the latter, as previous work has presumed (Bishop, 1994). To determine whether or not inputs reside in the typical set, we propose a statistically principled, easy-to-implement test using the empirical distribution of model likelihoods. The test is model agnostic and widely applicable, only requiring that the likelihood can be computed or closely approximated. We report experiments showing that our procedure can successfully detect the out-of-distribution sets in several of the challenging cases reported by Nalisnick et al. (2019). | Thanks to the authors for the detailed responses.
My major concern about the proposed method is whether "the typicality set" can be faithfully applied in the small-data regime. The authors point me to the interesting Figure 4, which shows that it basically achieves converged performance with $m = 50$ or smaller for some problems. I think this experiment is strong support for the proposed method.
However, I don't agree that the M=1 Gaussian case acts as strong support for the method. As I said, for some other weird distribution, it is difficult to interpret what the M=1 typicality set becomes.
The authors also clarify the difference between different baselines.
Overall, I will increase my score to "Weak Accept".
##########################
Recent works have shown that out-of-distribution samples can have higher likelihoods than in-distribution samples for some generative models. To explain this phenomenon and to tackle the problem of OOD detection, this paper adopts "typical sets" for identifying in-distribution samples. Specifically, a "typical set" is a set of examples whose average negative log-likelihood approximates the model's entropy. For a Gaussian distribution, the paper finds that a single-point typical set lies exactly on the \sqrt{d} radius, which is usually favored over the high-likelihood origin. The paper then uses the "typical set" to perform OOD detection for a batch of examples. Empirically they demonstrate competitive performance on MNIST and natural image tasks.
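For reference, my understanding of the proposed batch test, in code form; the way the threshold eps is chosen (bootstrapped in-distribution batches) is an assumption on my part.
```python
import numpy as np

def typicality_score(logp_batch, entropy_estimate):
    """logp_batch: log p(x_i) for the m test inputs (in nats);
    entropy_estimate: e.g. minus the mean log-likelihood on held-out
    in-distribution data, serving as an estimate of the model entropy."""
    return abs(-np.mean(logp_batch) - entropy_estimate)

def is_ood(logp_batch, entropy_estimate, eps):
    # eps would be chosen, e.g., from bootstrapped in-distribution validation batches
    return typicality_score(logp_batch, entropy_estimate) > eps
```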
The typical set seems natural for out-of-distribution detection. An important property is that, if one draws a large number of independent samples from the distribution, it is very likely that these samples belong to the typical set (basically Theorem 2.1). However, for small n this property doesn't hold anymore, which leaves a question mark over whether the "typical set" can be used for OOD detection in the small-n regime. As the authors argue, for a Gaussian distribution with n=1 the typical locations are the \sqrt{d}-radius points. But this doesn't justify the "typical set". If the distribution is some weird non-Gaussian distribution, the typical locations don't seem to make sense at all.
Following the argument above, the typical-set method requires performing OOD detection on a batch of examples. In contrast, the Annulus method can be directly applied to a single test example.
Empirically, the typicality test doesn't demonstrate obvious advantages compared to the baselines. For both MNIST and natural image tasks, it seems that all methods behave similarly. For comparing such big tables, I would recommend adding a column showing the average rank of each method. Beyond that, standard OOD tasks usually evaluate methods using AUROC and AUPR (Hendrycks and Gimpel, 2017). Is it possible to also include such metrics?
Theorem 2.1 is confusing. It is beneficial to define what P is, and verbally state what the theorem conveys. |
iclr_2020_HJe5_6VKwS | Adversarial perturbations cause a shift in the salient features of an image, which often results in misclassification. Previous work has suggested that these salient features could be used as a defense, arguing that with saliency tools we could successfully detect adversarial examples. While the idea itself is appealing, we show that prior work which used gradient-based saliency tools is ineffective as an adversarial defense -it fails to beat a simple baseline which uses the same model but with the saliency map removed. To remedy this, we demonstrate that learnt saliency models can capture the shifts in saliency due to adversarial perturbations, while also having a low computational cost. This allows saliency models to be used effectively as a real-time defense. Further, using the learnt saliency model, we propose a novel defense: a CNN that distinguishes between adversarial images and natural images using salient pixels as its input. On MNIST, CIFAR-10, and ASSIRA, our defense improves on using the saliency map alone, and can detect various adversarial attacks. Lastly, we show that even when trained on weak defenses, we can detect adversarial images generated by strong attacks such as C&W and DeepFool. | The paper studies methods for detecting adversarial examples using saliency maps. The authors propose using the method of Dabkowski and Gal (2017) to generate saliency maps and then train a classifier on these maps (or their dot product with the input image) to distinguish natural from adversarial examples. They perform experiments evaluating the white-box and black-box robustness of their detection scheme.
From a technical perspective, the contribution of the paper is rather incremental. The detection of adversarial examples by training a classifier on saliency maps has already been studied in prior work. The only modification proposed in this work is using an (existing) alternative method for producing the saliency maps and utilizing the dot product of maps with images.
From a conceptual perspective, the impact of detecting specific adversarial attacks is not clear. In a realistic setting, an adversary could use a very different attack or even utilize a different set of transformations (e.g. image rotations). Thus, in order to demonstrate the utility of their method in a black-box scenario, the authors would need to evaluate the defense in a variety of different scenarios. At the very least, they should consider generalization to different attacks (e.g., train against FGSM and BIM, and test against DF).
Moreover, the robustness against white-box adversaries is not sufficiently studied. Firstly, the robustness of the non-adversarially trained detector is suspiciously high. There is little reason to expect that a composition of two neural networks (the saliency map model and the classifier) would be non-trivially robust. The authors should consider alternative attacks, perhaps using more iterations with a smaller step size. Secondly, after adversarial training, only the robustness against the same attack is considered. In order to argue about white-box robustness, the authors would need to evaluate against a variety of diverse adversaries.
Overall, the technical and conceptual contribution of this paper is insufficient for publication at ICLR, even ignoring the concerns about its experimental evaluation. |
iclr_2020_r1xPh2VtPB | Partially Observable Markov Decision Processes (POMDPs) are popular and flexible models for real-world decision-making applications that demand the information from past observations to make optimal decisions. Standard reinforcement learning algorithms for solving Markov Decision Processes (MDP) tasks are not applicable, as they cannot infer the unobserved states. In this paper, we propose a novel algorithm for POMDPs, named sequential variational soft Q-learning networks (SVQNs), which formalizes the inference of hidden states and maximum entropy reinforcement learning (MERL) under a unified graphical model and optimizes the two modules jointly. We further design a deep recurrent neural network to reduce the computational complexity of the algorithm. Experimental results show that SVQNs can utilize past information to help decision making for efficient inference, and outperforms other baselines on several challenging tasks. Our ablation study shows that SVQNs have the generalization ability over time and are robust to the disturbance of the observation. | This paper proposes a new sequential model-free Q-learning methodology for POMDPs that relies on variational autoencoders to represent the hidden state. The approach is generic, well-motivated and has clear applicability in the presence of partial observability. The idea is to create a joint model for optimizing the hidden-state inference and planning jointly. For that reason variational inference is used to optimize the ELBO objective in this particular setting. All this is combined with a recurrent architecture that makes the whole process feasible and efficient.
The work is novel and it comes with the theoretical derivation of a variational lower bound for POMDPs in general. This intuition is exploited to create a VAE-based recurrent architecture. One motivation comes from maximum entropy reinforcement learning (MERL), which has the ad hoc objective of maximizing the policy entropy. On the other hand, SVQN optimizes both a variational approximation of the policy and that of the hidden state. Here, the remaining terms of the ELBO objective can be approximated generatively, and some of them are conditioned on the previous state, which calls for a recurrent architecture. The other parts are modeled by a VAE.
The paper also explores two different recurrent models in this context: GRU and LSTM are both evaluated. Besides the nice theoretical derivation, the paper presents compelling evidence by comparing this approach to competing approaches on four games of the flickering Atari benchmark and outperforming the baselines significantly. Both the GRU and LSTM versions also outperform the baseline methods on various tasks of the VizDoom benchmark.
In general, I find that this well-written paper presents significant progress in modelling POMDPs in a model-free manner, with nice theoretical justification and compelling empirical evidence.
iclr_2020_HygcdeBFvr | Generative models for singing voice have been mostly concerned with the task of "singing voice synthesis," i.e., to produce singing voice waveforms given musical scores and text lyrics. In this work, we explore a novel yet challenging alternative: singing voice generation without pre-assigned scores and lyrics, in both training and inference time. In particular, we propose three either unconditioned or weakly conditioned singing voice generation schemes. We outline the associated challenges and propose a pipeline to tackle these new tasks. This involves the development of source separation and transcription models for data preparation, adversarial networks for audio generation, and customized metrics for evaluation. | This paper claims to be the first to tackle unconditional singing voice generation. It is noted that previous singing voice generation approaches leverage explicit pitch information (either of an accompaniment via a score or for the voice itself), and/or specified lyrics the voice should sing. The authors first create their own dataset of singing voice data with accompaniments, then use a GAN to generate singing voice waveforms in three different settings:
1) Free singer - only noise as input, completely unconditional singing sampling
2) Accompanied singer - Providing the accompaniment *waveform* (not symbolic data like a score - the model needs to learn how to transcribe to use this information) as a condition for the singing voice
3) Solo singer - The same setting as 1 but the model first generates an accompaniment then, from that, generates singing voice
Firstly, the authors have done a lot of work - first making their own data, then designing their tasks and evaluating them. The motivation is slightly lacking - it is not clear why we are interested in these three task settings, i.e., what we will learn from a difference in their performance, and there is a lack of discussion about which setting makes for better singing voice generation. Also, there is no comparison with other methods: whilst score data is not available, it could be estimated and then used for existing models, providing a nice baseline, e.g., first a score is extracted with a state-of-the-art AMT method, then a state-of-the-art score-to-singing-voice generation method could be used.
There are existing datasets of clean singing voice and accompaniment, for example MIR-1k (unfortunately I think iKala, another dataset, is now unavailable). It is true that this dataset is small in comparison to the training data the authors generate, but it will certainly be cleaner. I would have liked to see an evaluation performed on this data as opposed to another dataset which was the result of source separation (the authors generate a held out test set on Jazz from Jamendo, on which they perform singing voice separation).
I also had questions about the training data - there is very little information about it included other than that it is in-house and covers diverse musical genres (page 6 under 4.1), plus a second set of 4.5 hours of solo piano, and a third set (?) of jazz singers. This was a bit confusing and could do with clarification. At minimum, I would like to know what genre we are restricting ourselves to - is everything Jazz? Are the accompaniments exclusively piano (it's implied that the answer to this is no, but it's not clear to me)? Is there any difference between the training and test domains?
On page 6, second-to-last paragraph, when discussing the validation set, I would like the sampling method to be specified - it makes a difference whether the same piece of music can be contained within both the training and validation splits, or whether the source pieces (from which the 10 second clips are extracted) are kept in separate splits <- I'd recommend that setting.
The data used to train the model will greatly affect my qualitative assessment of the provided audio samples so, without a clear statement on the training data used, I can't really assess this.
However, with respect to the provided audio samples, I'd first note that these are explicitly specified as randomly sampled, and not cherry-picked, which is great, thank you. That said, whilst I would admit that the domain is different, when the singing samples are compared with the unconditional piano generation samples of MelNet (https://audio-samples.github.io/#section-3), which I would argue are just as hard to make harmonically valid, they are not as harmonically consistent, even when an accompaniment has been provided. However, the samples do sound like a human voice, and the pitch is relatively good. The words are unintelligible, but this is explicitly specified as out of scope for this paper, and I agree that this is much harder to achieve.
As an aside, MelNet is not cited in this paper and, given the similarity and relevance, I think it probably should be - https://arxiv.org/abs/1906.01083. It was published this year however so it would be a little harsh to expect it to be there. I would invite the authors to rebut this claim if they think the methods are not comparable.
My main criticism is in relation to the evaluation. For Table 2, without a baseline or the raw data (which would have required no further effort) included in the MOS study, it's very difficult to judge success. If the authors think that comparison with raw data is unfair (as it is an embryonic task) they could include a model which has an unfair advantage from the literature - e.g. uses extracted score information.
For Table 1, I appreciate the effort that went into the design of 'Vocalness' and 'Matchness' which are 'Inception Score' type metrics leaning on other learned models to return scores. I would like to see discussion which explains the differences in scores for the different model settings (there is a short sentence at the bottom of page 7, but nothing on vocalness).
In summary - this is a hard problem and the authors are the first to tackle it. The different approaches to solving the problem are not well motivated. However, the models are detailed and well explained. Code is even provided, but the training data is not. If the authors were able to compare with a baseline (like the one I describe above), it would go a long way to convincing me that this is good work. As it stands, Tables 1 and 2, and the provided audio samples have no context, so I can't draw a conclusion. If this issue and the motivation were addressed, I would likely vote to accept the paper.
Things to improve the paper that did not impact the score:
1. p2 "we hardly provide any labelled data" specify whether you do or not (I think it's entirely unsupervised since you extract chord progressions and pitch curves using learned models...)
2. p2 "...may suffer from the artifact" -> the artefacts
3. p2 "for the scenario addressed by the accompanied singer" a bit clumsy, may be worth naming your tasks 1, 2 and 3 such that you can easily refer to them
4. p2 "We investigate using conditional GAN ... to address this issue" - which issue do you mean? If it is the issue specified at the top of the paragraph, i.e. that there are many valid melodies for a given harmony (no single ground truth), I don't think using a GAN is a *solution* to this per se. It is a valid model to use, and the solution would be enough varied data (and evaluation to show you're covering your data space and haven't collapsed to a few modes)
5. p2 "the is no established ways" -> there are no established ways
6. p3 "Discriminators in GAN" -> in the GAN
7. p6 "piano playing audio on our own..." -> piano playing on its own (or even just rephrase the sentence - collect 4.5 hours of audio of solo piano)
8. p7 "We apply source separation to the audios divide them into ..." -> we apply source separation to the audio data then divide each track into 20 second...
9. p7 If your piano transcription model was worse than Hawthorne et al.'s, why didn't you use theirs? It would have been fine to say you can't reproduce their model if it is not available, but instead you say that 'according to our observation [it] is strong enough', which comes across quite weakly.
10. p8 "in a quiet environment with proper microphone volume" -> headphone volume?
11. p8 "improv" - I think this sentence trailed off prematurely! |
iclr_2020_rJeW1yHYwH | Inductive representation learning on temporal graphs is an important step toward scalable machine learning on real-world dynamic networks. The evolving nature of temporal dynamic graphs requires handling new nodes while learning temporal patterns. The node embeddings, which become functions of time under the temporal setting, should capture both static node features and evolving topological structures. Moreover, node and topological features may exhibit temporal patterns that are informative for prediction, of which the temporal node embeddings should also be aware. We propose the temporal graph attention (TGAT) layer to effectively aggregate temporal-topological neighborhood features as well as learning time-feature interactions. For TGAT, we use the self-attention mechanism as the building block and develop the novel functional time encoding technique based on the classical Bochner's theorem from harmonic analysis. By stacking TGAT layers, the network learns node embeddings as functions of time and can inductively infer embeddings for both new and observed nodes whenever the graph evolves. The proposed approach handles both node classification and link prediction tasks, and can be naturally extended to aggregate edge features. We evaluate our method with transductive and inductive tasks under temporal setting with two benchmark and one industrial dataset. Our TGAT model compares favorably to state-of-the-art baselines and prior temporal graph embedding approaches. | Summary: This paper addresses the problem of representation learning for temporal graphs. That is, graphs where the topology can evolve over time. The contribution is a temporal graph attention (TGAT) layer that aims to exploit learned temporal dynamics of graph evolution in tasks such as node classification and link prediction. This TGAT layer can work in an inductive manner, unlike much prior work, which is restricted to the transductive setting. Specifically, a temporal kernel is introduced to generate time-related features, which are incorporated into the self-attention mechanism. The results on some standard and new graph-structured benchmarks show improved performance vs a variety of baselines in both transductive and inductive settings.
Pros:
+ Dynamic graphs are an important but challenging data structure for many problems. Improved methods in this area are welcome.
+ Dealing with the inductive setting is an important advantage.
+ Clear performance improvements on prior state of the art is visible in both transductive+inductive settings and node+edge related tasks.
Cons+Questions:
1. Technical significance: Some theory is presented to underpin the approach, but in practice it seems to involve concatenating or adding temporal kernels element-wise to the features already used by GAT. In terms of implementation, the concatenation in Eq 6 seems to be the only major change to GAT. I'm not sure if this is a major advance. (A sketch of the kind of time encoding I have in mind is given after this list.)
2. Insight. The presented method apparently improves on prior work by learning something about temporal evolution and exploiting it in graph-prediction tasks. But it's currently rather a black box. It would be better if some insight could be extracted about *what* this actually learns. What kind of temporal trends exist in the data that this method has learned? And how are they exploited by the prediction tasks?
3. Writing. The English is rather flaky throughout. One particular recurring frustration is the use of the term “architect” which seems wrong. Probably “architecture” is the correct alternative.
4. Clarity of explanation. The paper is rather hard to follow and ambiguous. A few specific things that are not explained so well:
4.1. Eq 1+2 is not a sufficiently clear and self-contained recap of prior work.
4.2. Symbol d_T used at the start of Sec 3.1 seems to be used without prior definition making it hard to connect to previous Eq1+2.
4.3 The claim made about alternative approaches (Pg4), "Reparameterization is only applicable to local-scale distribution family, which is not rich enough", seems both too vague and unjustified.
4.4 The relationship between $t_i$ and the neighbours of the target node in Eq. 6 is not very clear. |
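To be explicit about what I mean by "temporal kernel" in point 1: my understanding is a Bochner-style functional time encoding with learnable frequencies, roughly as sketched below (the exact parameterization and dimensions are my assumptions), whose output is concatenated with the node features before self-attention.
```python
import torch

class TimeEncoding(torch.nn.Module):
    """Maps a time gap t to [cos(w_1 t), ..., cos(w_d t), sin(w_1 t), ..., sin(w_d t)]
    with learnable frequencies w, scaled by sqrt(1/d)."""
    def __init__(self, d):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(d))
        self.d = d

    def forward(self, t):                    # t: (batch,)
        phase = t.unsqueeze(-1) * self.w     # (batch, d)
        feats = torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1)
        return feats / (self.d ** 0.5)
```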
iclr_2020_HkxAS6VFDB | Pruning and quantization are typical approaches to reduce the computational cost of convolutional neural network (CNN) inference. Although the idea of combining both approaches seems natural, it is surprisingly difficult to determine the effects of the combination without measuring performance on the specific hardware that the user will use. This is because the benefits of pruning and quantization strongly depend on the hardware architecture where the model is executed. For example, a CPU-like architecture with no parallelization may fully exploit the reduction of computations by unstructured pruning to improve speed, but a GPU-like massive parallel architecture would not. Further, there have been emerging proposals of novel hardware architectures, such as those supporting variable bit-precision quantization. From an engineering viewpoint, optimization for each hardware architecture is useful and important in practice, but this is in essence a brute-force approach. Therefore, in this paper, we first propose a hardware-agnostic metric for measuring computational costs. Using the proposed metric, we demonstrate that Pareto-optimal performance, where the best accuracy is obtained at a given computational cost, is achieved when a slim model with fewer parameters is moderately quantized rather than a fat model with a huge number of parameters is quantized to extremely low bit precision, such as binary or ternary. Furthermore, we empirically find a possible quantitative relation between the proposed metric and the signal-to-noise ratio during stochastic gradient descent (SGD) training, by which information obtained during SGD training provides an optimal policy for quantization and pruning. We show the Pareto frontier is improved by 4× in a post-training quantization scenario based on these findings. These findings not only improve the Pareto frontier for accuracy versus computational cost, but also provide new insights into deep neural networks. | The paper proposes a new metric to evaluate both the amount of pruning and quantization. This metric is agnostic to the hardware architecture and is simply obtained by computing the Frobenius norm of some point-wise transformation of the quantized weights. They first show empirically that this evaluation metric is correlated with the validation accuracy. They then use this metric to provide some general rules for pruning/quantizing that preserve the highest validation accuracy. Finally, they derive a strategy to perform pruning by monitoring the signal-to-noise ratio during training and show experimentally that such a method performs better than competing ones.
Pros: - Extensive experiments were performed to test the methods, and the results seem promising.
Cons: - The paper is not very clear, and the structure is somewhat confusing.
- It is not easy at first to understand the experimental setup, and it requires the reader to make a lot of guesses, in my opinion.
- The paper didn't properly motivate the use of a hardware-agnostic metric in the context of quantization and pruning. Isn't the ultimate goal of pruning/quantization to optimize the runtime/energy consumption of the specific device with the least compromise on accuracy?
I feel that the paper currently jumps between very different ideas:
- Evolution of the proposed metric during training: 2.3 and 2.5. While in 2.3 the take-home message is relatively clear (the ESN is correlated with the validation accuracy), I don't fully get the point of section 2.5: it suggests that the optimizer does some sort of pruning just by choosing a higher learning rate.
- Finding an optimal strategy for pruning/quantizing a network: 2.4 and 2.6. Those two sections are relatively clear, although I have some questions about the experiments.
- Developing a new strategy based on the proposed metric to quantize and prune a network in a Pareto-optimal sense: This is briefly and not very well explained in section 2.7, which refers back to 2.3, but it is hard to understand how exactly it is done. It seems that section 3 provides some empirical evidence supporting this strategy, but the description of the method is hidden in the experimental details.
Some questions:
- In figure 3, do the blue dots represent validation accuracy vs. ESN at each training iteration? What about the red plot: is it obtained by quantization of the parameters at different stages of training, or is it using the final parameters? Which equation was used to compute the red curve, (2) or (4)? How much quantization was performed? If the quantization was chosen to match the level of noise, then it seems natural to expect such behavior in figure 3.
- In figure 4, how much pruning was performed for each network, and was the quantization the same? In other words, how was each point in the plot obtained? The authors come to the conclusion that one should 'prune until the limit of the desired accuracy and then quantize', but it is hard for me to reach the same conclusion as I don't see the separate effects of pruning and quantization in those figures. Or maybe pruning is implicitly done by choosing a small network? In that case, it makes more sense, but still, some clarifications are needed.
- Which equation for the ESN was used to produce figure 5? Equation (2) or (4)?
- What is the Pareto frontier? I think it is worth first introducing this concept and describing more precisely how those curves are obtained. For someone who is not very familiar with these concepts, which is my case, it makes the reading very hard.
- How was the number of pruned filters computed in figure 5 (right)? I don't expect the solutions to be sparse during training, especially since no sparsity constraint was imposed, or was it?
-------------------------------------------------------------------------------
Revision:
Thank you for all the clarifications and the effort to provide a clearer version of the paper.
Regarding section 2.7: ESNa FOR QUANTIZATION: Would it make sense to include paragraph 2.7 at the end of 2.3, since it relates to it and doesn't seem to require any of the intermediate subsections?
response to Comment 3: Unfortunately, I'm not convinced by the explanation about the effect of the lr on sparsity. The decay coefficient does control the sparsity, but the lr does not. That is because, unlike the lr, the decay coefficient defines the cost function to be optimized: L + dc * ||W||^2, while the lr corresponds simply to the discretization of some gradient flow. For instance, in a deterministic and convex setting, the solution that is obtained would be the same regardless of the chosen lr (provided the lr is small enough so that the algorithm converges); see for instance [1]. In a non-convex and stochastic setting, do the authors have a particular reference in mind? I'm not aware of such behavior. I would expect a similar sparsity if dc is kept fixed and only the lr is changed. Is it likely that with a smaller lr the algorithm just didn't have time to converge? This would explain why the obtained solutions were less sparse.
response to answer 5: It is indeed well known that the L1 norm induces sparsity; however, the L2 norm doesn't, it just encourages the weights to be smaller. In the optimization literature, sparsity of x% means x% of the parameters are exactly 0. This is achieved with the L1 norm; the L2 norm would only enforce that the coefficients are small but not necessarily 0 (see [1]).
[1]: Robert Tibshirani, "Regression Shrinkage and Selection via the Lasso."
Although the paper improved in terms of clarifications and experimental details, I still think it will benefit from additional work on careful interpretation of the results. |
iclr_2020_rkxtNaNKwr | Many cooperative multiagent reinforcement learning environments provide agents with a sparse team-based reward, as well as a dense agent-specific reward that incentivizes learning basic skills. Training policies solely on the team-based reward is often difficult due to its sparsity. Also, relying solely on the agent-specific reward is sub-optimal because it usually does not capture the team coordination objective. A common approach is to use reward shaping to construct a proxy reward by combining the individual rewards. However, this requires manual tuning for each environment. We introduce Multiagent Evolutionary Reinforcement Learning (MERL), a split-level training platform that handles the two objectives separately through two optimization processes. An evolutionary algorithm maximizes the sparse team-based objective through neuroevolution on a population of teams. Concurrently, a gradient-based optimizer trains policies to only maximize the dense agent-specific rewards. The gradient-based policies are periodically added to the evolutionary population as a way of information transfer between the two optimization processes. This enables the evolutionary algorithm to use skills learned via the agent-specific rewards toward optimizing the global objective. Results demonstrate that MERL significantly outperforms state-of-the-art methods, such as MADDPG, on a number of difficult coordination benchmarks. | This paper proposes an integration of neuroevolution and gradient-based learning for reinforcement learning applications. The evolutionary algorithm focuses on sparse reward and multiagent/team optimization, while the gradient-based learning is used to inject selectively improved genotypes in the population.
This work addresses a very hot topic, i.e. the integration of NE and DRL, and the proposed method offers the positive side of both without introducing major downsides. The presented results come from a relatively simple but useful multiagent benchmark which has broad adoption. The paper is well written, presents several contributions that can be extended and ported to other work, and the results are statistically significant.
There is one notable piece missing which forces me to bridle my enthusiasm: a discussion of the genotype and of its interpretation into the network phenotype. The form taken by the actual agent is not explicitly stated; following the adoption of TD3 I would expect a policy and two critics, for a grand total of three neural networks, but this remains unverified. And if each agent is composed of three neural networks, and each individual represents a team, does this mean that each genotype is a concatenation of three (flattened) weight matrices for each agent in the team? What is the actual genotype size? It sounds huge; I would expect it to be at least several hundred weights, but then this would clash with the proposed minuscule population size of 10 (recent deep neuroevolution work from Uber uses populations THREE orders of magnitude larger). Has the population size been made proportionate to the genotype dimensionality? Would it be possible to reference the widely adopted defaults of the industry-standard CMA-ES? Speaking of algorithms, where is the chosen EA implementation discussed? The overview seems to describe a textbook genetic algorithm, but that has not been state-of-the-art for decades, constituting a poor match for TD3.
Omitting such a chapter severely limits not only the reproducibility of the work but its full understanding. For example, does the EA have sufficient population size to contribute significantly to the process, or is it just performing as a fancy version of Random Weight Guessing? Could you actually quickly run RWG with direct policy search (rather than random action selection) to establish the effective complexity of the task? My final rating after rebuttal will vary wildly depending on the ability to cover such an important piece of information.
A few minor points, because I think that the paper appearance deserves to match the quality of the content:
- The images are consistently too small and hard to read. I understand the need to fit in the page limit by the deadline, but for the camera ready version it will be necessary to trim the text and rescale all images.
- The text is well written but often slows down the pace for no added value, such as by dedicating a whole page to discussing a series of previously published environments.
- The hyperparameters of the evolutionary algorithm look completely unoptimized. I would expect a definite improvement in performance with minimal tuning.
- The "standard neuroevolutionary algorithm" from 2006 presented as baseline has not been state-of-the-art for over a decade. I would understand its usage as a baseline if that is indeed the underlying evolutionary setup, but otherwise I see no use for such a baseline.
-----------------------------------------------------------------------------------------------
# Update following the rebuttal phase
-----------------------------------------------------------------------------------------------
Thank you for your work and for the extended experimentation. I am confident the quality of the work is overall increased.
The core research question behind my original doubt however remains unaddressed: does the EC part of the algorithm sensibly support the gradient-descent part, or is the algorithm basically behaving as a (noisy) multi-agent TD3?
Such a contribution by itself would be undoubtedly important. Submitting it as a principled unification of EC and DL however would be more than a simple misnomer: it could mislead further research in what is an extremely promising area.
The scientific approach to clarify this point would be to design an experiment showcasing the performance of MERL using a range of sensible population sizes. To understand what "sensible" means in this context, I refer to a classic:
http://www.cmap.polytechnique.fr/~nikolaus.hansen/cec2005ipopcmaes.pdf
A lower bound for the population size with simple / unimodal fitness functions would be $4+floor(3*log(10'000)) = 31$. With such a complex, multimodal fitness though, no contribution from the EA can be expected (based on common practice in the EC field) without at least doubling or tripling that number. The upper bound does not need to be as high as with the recent Uber AI work (10k), but certainly showing the performance with a population of a few hundreds would be the minimum necessary to support your claim. A population size of 10 represents a proper lower bound for a genotype of up to 10 parameters; it is by no means within a reasonable range with your dimensionality of 10'000 parameters, and no researcher with experience in EC would expect anything but noise from such results -- with non-decreasing performance uniquely due to elitism.
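For concreteness, here is the population-size arithmetic spelled out as a minimal Python sketch (assuming the standard CMA-ES default lambda = 4 + floor(3 * ln(N)) with the natural logarithm, as in the reference implementation):

```python
# Rough calculation of the CMA-ES default population size lambda = 4 + floor(3 * ln(N));
# the loop just makes the bound cited above concrete for a few dimensionalities.
import math

def cmaes_default_lambda(n_params: int) -> int:
    return 4 + math.floor(3 * math.log(n_params))

for n in [10, 100, 10_000, 100_000]:
    print(n, cmaes_default_lambda(n))
# 10 -> 10, 100 -> 17, 10000 -> 31, 100000 -> 38
```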
The new runs in Appendix C only vary the population size for the ES algorithm, proposed as a baseline. No performance of MERL using a sensible population size is presented.
The fundamental claim is thereby unsustainable by current results. The idea is extremely intriguing and very promising, easily leading to supportive enthusiasm; it is my personal belief however that accepting this work in such a premature stage (and with an incorrect claim) could stunt further research in this direction.
[By the way, the reference Python CMA-ES implementation runs with tens of thousands of parameters and a population size of 60 in a few seconds per generation on a recent laptop: the claim of performance limitations as an excuse for not investigating a core claim suggests that more work would be better invested prior to acceptance.] |
iclr_2020_SygcCnNKwr | State-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks. We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures. We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy. We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings. | This paper first introduces a method for quantifying to what extent a dataset split exhibits compound (or, alternatively, atom) divergence, where in particular atoms refer to basic structures used by examples in the datasets, and compounds result from compositional rule application to these atoms. The paper then proposes to evaluate learners on datasets with maximal compound divergence (but minimal atom divergence) between the train and test portions, as a way of testing whether a model exhibits compositional generalization, and suggests a greedy algorithm for forming datasets with this property. In particular, the authors introduce a large automatically generated semantic parsing dataset, which allows for the construction of datasets with these train/test split divergence properties. Finally, the authors evaluate three sequence-to-sequence style semantic parsers on the constructed datasets, and they find that they all generalize very poorly on datasets with maximal compound divergence, and that furthermore the compound divergence appears to be anticorrelated with accuracy.
This is an interesting and ambitious paper tackling an important problem. It is worth noting that the claim that it is the compound divergence that controls the difficulty of generalization (rather than something else, like length) is a substantive one, and the authors do provide evidence of this. At the same time, I think the authors could possibly do more to show that the trend in the plots in Figure 2 can't be explained by something else: for example, the authors could show that the length ratios remain constant as the compound divergence is varied. I think it is also not necessarily clear how easily the notion of differing compound distributions generalizes to other types of tasks.
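For concreteness, a minimal sketch of how I understand the train/test divergence to be computed (a Chernoff-coefficient-based measure over normalized compound frequencies; I believe the paper uses a small alpha such as 0.1 for compounds and 0.5 for atoms, and the compound strings and counts below are purely hypothetical placeholders):

```python
# Hedged sketch of the divergence computation as I understand it:
# divergence(P, Q) = 1 - sum_k p_k^alpha * q_k^(1 - alpha), a Chernoff-coefficient-based
# measure between the (weighted) frequency distributions of compounds in train and test.
from collections import Counter

def normalize(counts):
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def divergence(train_counts, test_counts, alpha=0.1):
    p, q = normalize(train_counts), normalize(test_counts)
    keys = set(p) | set(q)
    chernoff = sum(p.get(k, 0.0) ** alpha * q.get(k, 0.0) ** (1 - alpha) for k in keys)
    return 1.0 - chernoff

# Hypothetical compound counts, not taken from the actual dataset.
train = Counter({"filter(count(x))": 40, "count(filter(x))": 10})
test = Counter({"count(filter(x))": 45, "filter(count(x))": 5})
print(divergence(train, test, alpha=0.1))
```

A check like this could also be run jointly with simple confounders (e.g. output length ratios) to verify that only the compound distribution, and not length, is shifting across the split.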
Presentation-wise, much of the paper is clear and well written, though I think the discussion of weighted frequency distributions of compounds (top of page 3) could be clarified further, and in particular an example subgraph of a rule application DAG should be highlighted here. |
iclr_2020_S1l_ZlrFvS | Explaining a deep learning model can help users understand its behavior and allow researchers to discern its shortcomings. Recent work has primarily focused on explaining models for tasks like image classification or visual question answering. In this paper, we introduce an explanation approach for image similarity models, where a model's output is a score measuring the similarity of two inputs rather than a classification. In this task, an explanation depends on both of the input images, so standard methods do not apply. We propose an explanation method that pairs a saliency map identifying important image regions with an attribute that best explains the match. We find that our explanations provide additional information not typically captured by saliency maps alone, and can also improve performance on the classic task of attribute recognition. Our approach's ability to generalize is demonstrated on two datasets from diverse domains, Polyvore Outfits and Animals with Attributes 2. | Overview/Contribution:
====================
The paper proposes an explanation mechanism that pairs the typical saliency map regions together with attributes for similarity matching deep neural networks. The authors tested their methods on two datasets, i.e. Polyvore Outfits (a clothing attributes dataset) and Animals with Attributes 2 (a dataset of attributes for animals).
Overall, the paper has enough merit to be accepted to the conference, with the following strengths and weaknesses. I suggest that the authors address the weaknesses pointed out to make the paper stronger, especially by adding a few more attributes datasets, such as person attributes datasets, as noted below in the weakness section.
Strength:
========
- The paper is written clearly and is easy to understand. I have seen the additional results and visual comparisons in the supplemental material, and they were useful, albeit a bit long.
- Explanations have the potential to make decisions made by a deep neural model transparent to end users among other benefits especially for sensitive applications such as healthcare and security. Explaining decisions made by similarity matching models has many applications including person attribute recognition and person re-identification for surveillance scenarios [1]. So, in this respect, this paper is relevant to the target audience.
- There is a bit of confusion between explanation and interpretation of decisions made by deep neural network models in the explainable AI literature, and in most cases the two are used interchangeably. Hence, saliency maps are considered explanations on their own by many. Combining saliency-map-based interpretations with higher-level concepts such as attributes has the potential to generate more realistic explanations of the decisions. The authors make this point in the second paragraph of the introduction.
- Fig. 1 (b) also is a clear example of the kind of explanations generated using a template with the key attribute in question accompanied by the visual saliency map interpretation.
- Fig. 2 clearly shows the overall proposed method and the attribute ranking based on the attributes explanation prior and the match between the saliency map and attribute activation maps.
- The method for ranking and selecting informative attributes, which combines weighted TCAV scores with the cosine similarity between the attribute activation map and the generated saliency map, is novel (a rough sketch of how I picture this matching step is given right below).
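Here is a rough numpy sketch of how I picture that matching step; the array shapes, the prior scores, and the multiplicative combination rule are my own simplifications, not the authors' exact formulation:

```python
# Rank candidate attributes by combining a per-attribute prior (e.g. a TCAV-style weight)
# with the cosine similarity between the saliency map and each attribute activation map.
import numpy as np

def cosine(a, b, eps=1e-8):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def rank_attributes(saliency, attr_maps, attr_priors):
    # saliency: (H, W); attr_maps: dict name -> (H, W); attr_priors: dict name -> float
    scores = {name: attr_priors[name] * cosine(saliency, amap)
              for name, amap in attr_maps.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical attribute names and random maps, purely for illustration.
saliency = np.random.rand(7, 7)
attr_maps = {"striped": np.random.rand(7, 7), "floral": np.random.rand(7, 7)}
priors = {"striped": 0.8, "floral": 0.3}
print(rank_attributes(saliency, attr_maps, priors))
```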
Weakness:
===========
- Applications of such a combined explanatory system do not seem to be strongly motivated in the introduction. I suggest the authors discuss image-similarity-based applications more, and spend less space on the discussion and heavy citation of general deep neural networks.
- The forms of the two loss components are both variants of l_{1} and l_{2} standard losses and they could be subject to issues with the standard variants of the l_{1} and l_{2} losses such as lack of translation and other transformation invariances. Hence, it would have been more useful to give the reasoning for the selection of the losses employed compared to other similarity and divergence based losses that are less sensitive to such variations.
- Similar to the above point, the choice of cosine similarity to compare match b/n attribute activation maps and saliency maps seem arbitrary. The method is described well but why cosine similarity was chosen in terms of its benefits compared to other similarity metrics is not that clear.
- Evaluation on more datasets such as person/pedestrian attributes datasets would have demonstrated the generalizability of the proposed method across multiple practical domains. As such, I would suggest the authors test their method on at least one person/pedestrian attributes dataset such as PETA, Market1501, etc.
- Although Fig. 1 (b) motivated a more practical high level explanation, in the results section, the attribute explanations are reduced to just the selected attribute that matched with the saliency well. Human-like concise attribute-based high level explanation just like the example given in Fig. 1 (b) would have made the paper stronger. Even if NLP is beyond the scope of this paper, a simple template based explanation that incorporated the selected/matched attribute would have been more effective.
- The results are too concise, and a few ablation results on different losses etc. could have helped. There are too many qualitative results, especially in the supplementary material.
1) Bekele, E., Lawson, W. E., Horne, Z., & Khemlani, S. (2018). Implementing a Robust Explanatory Bias in a Person Re-identification Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 2165-2172). |
iclr_2020_r1gRTCVFvB | The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g. by loss re-weighting, data re-sampling, or transfer learning from head-to tail-classes, but most of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability at little cost by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification. | The paper tries to handle the class imbalance problem by decoupling the learning process into representation learning and classification, in contrast to the current methods that jointly learn both of them. They comprehensively study several sampling methods for representation learning and different strategies for classification. They find that instance-balanced sampling gives the best representation, and simply adjusting the classifier will equip the model with long-tailed recognition ability. They achieve state of the art on long-tailed data (ImageNet-LT, Places-LT and iNaturalist).
In general, this is an interesting paper. The authors propose that instance-balanced sampling already learns the best and most generalizable representations, which runs counter to common expectation. They perform extensive experiments to illustrate their points.
--Writing:
This paper is well written in English and is well structured. And there are two typos. One is in the second row of page 3, "… a more continuous decrease [in in] class labels …" and the other one is in the first paragraph of section 5.4, "… report state-of-art results [on on] three common long-tailed benchmarks …".
--Introduction and review:
The authors do a comprehensive literature review, listing the main directions for solving the long-tailed recognition problem. They emphasize that these methods all jointly learn the representations and classifiers, which "make it unclear how the long-tailed recognition ability is achieved-is it from learning a better representation or by handling the data imbalance better via shifting classifier decision boundaries". This motivates them to decouple representation learning and classification.
--Experiment:
Since this paper decouples representation learning and classification to "make it clear" whether the long-tailed recognition ability comes from a better representation or a more balanced classifier, I recommend that the authors show some visualization of the feature map besides performance numbers, because I am confused and find it difficult to imagine what a "better representation" actually looks like.
The authors conduct experiments with ResNeXt-{10,50,101,151}, and mainly use ResNeXt-50 for analysis. Will other networks give results similar to those of ResNeXt-50 shown in Figure 1?
When showing the results, like Figure 1, 2 and Table 2, it would be better to mention the parameters chosen for \tau-normalization and other methods.
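For reference, this is how I understand \tau-normalization to work; the value of \tau per dataset is exactly the hyperparameter I would like to see reported (the sketch below uses placeholder shapes and an arbitrary \tau):

```python
# Sketch of the tau-normalized classifier as I understand it: each class weight vector
# is rescaled by its norm raised to the power tau (tau = 0 keeps the original classifier,
# tau = 1 fully normalizes the class weights).
import numpy as np

def tau_normalize(W, tau):
    # W: (num_classes, feat_dim) weights of the linear classifier
    norms = np.linalg.norm(W, axis=1, keepdims=True) + 1e-12
    return W / norms ** tau

feats = np.random.randn(4, 128)    # a batch of features from the frozen backbone
W = np.random.randn(1000, 128)     # long-tailed classifier weights (placeholder values)
logits = feats @ tau_normalize(W, tau=0.7).T
print(logits.shape)                # (4, 1000)
```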
Conclusion:
I tend to accept this paper since it is interesting and renews our understanding of the long-tailed recognition ability of neural networks and sampling strategies. What's more, the experiments are comprehensive and rigorous.
iclr_2020_rklj3gBYvH | Meta-learning is an exciting and powerful paradigm that aims to improve the effectiveness of current learning systems. By formulating the learning process as an optimization problem, a model can learn how to learn while requiring significantly less data or experience than traditional approaches. Gradient-based meta-learning methods aims to do just that, however recent work have shown that the effectiveness of these approaches are primarily due to feature reuse and very little has to do with priming the system for rapid learning (learning to make effective weight updates on unseen data distributions). This work introduces Nodal Optimization for Recurrent Meta-Learning (NORML), a novel meta-learning framework where an LSTM-based meta-learner performs neuron-wise optimization on a learner for efficient task learning. Crucially, the number of meta-learner parameters needed in NORML, increases linearly relative to the number of learner parameters. Allowing NORML to potentially scale to learner networks with very large numbers of parameters. While NORML also benefits from feature reuse it is shown experimentally that the meta-learner LSTM learns to make effective weight updates using information from previous data-points and update steps. | This paper proposes a meta-learner that learns how to make parameter updates for a model on a new few-shot learning task. The proposed meta-learner is an LSTM that proposes at each time-step, a point-wise multiplier for the gradient of the hidden units and for the hidden units of the learner neural network, which are then used to compute a gradient update for the hidden-layer weights of the learner network. By not directly producing a learning rate for the gradient, the meta-learner’s parameters are only proportional to the square of the number of hidden units in the network rather than the square of the number of weights of the network. Experiments are performed on few-shot learning benchmarks. The first experiment is on Mini-ImageNet. The authors build upon the method of Sun et al, where they pre-train the network on the meta-training data and then do meta-training where the convolutional network weights are frozen and only the fully-connected layer is updated on few-shot learning tasks using their meta-learner LSTM. The other experiment is on Omniglot 20-way classification, where they consider a network with only full-connected layers and show that their meta-learner LSTM performs better than MAML.
The closest previous work to this paper is by Ravi & Larochelle, who also propose a meta-learner LSTM. The submission states about this work that “A challenge of this approach is that if you want to optimize tens of thousands of parameters, you would have a massive hidden state and input size, and will therefore require an enormous number of parameters for the meta-learner…In Andrychowicz at al. an alternative approach is introduced that avoids the aforementioned scaling problem by individually optimizing each parameter using a separate LSTM…” I don’t believe this is true.
As stated in the work by Ravi & Larochelle, “Because we want our meta-learner to produce updates for deep neural networks, which consist of tens of thousands of parameters, to prevent an explosion of meta-learner parameters we need to employ some sort of parameter sharing. Thus as in Andrychowicz et al. (2016), we share parameters across the coordinates of the learner gradient. This means each coordinate has its own hidden and cell state values but the LSTM parameters are the same across all coordinates.” Thus, Ravi & Larochelle employ something similar to Andrychowicz at al., meaning that the number of parameters in the LSTM meta-learner there is actually a constant relative to the size of the learner network.
Thus, the paper’s contribution relative to Ravi & Larochelle is to propose a LSTM meta-learner with more parameters relative to the learner model. Firstly, I think this comparison should be stated and explained clearly in the paper. Additionally, in order to prove the benefit of such an approach, I think a comparison to the work of Ravi & Larochelle with the exact experimental settings used in the submission (pre-training the convolutional network and only using the meta-learner LSTM to tune the last fully-connected layer) would be helpful in order to validate the usefulness of the extra set of parameters in their proposed meta-learner LSTM.
The submission also states that “In many cases it is preferred to have an inner loop that consists of multiple sequential updates. However the inner loop’s computational graph can become quite large if too many steps are taken. This often results in exploding and vanishing gradients since the outer loop still needs to differentiate through the entire inner loop (Aravind Rajeswaran (2019), Antoniou et al. (2018)). This limits MAML to domains where a small amount of update steps are sufficient for learning. The LSTM-based meta-learner proposed in this work, allow gradients to effectively flow through a large number of update steps. NORML can therefore be applied to a wide array of domains.” I think this statement should be validated if it is stated. Can it be shown that when making a lot of inner-loop updates, the LSTM meta-learner performs better than MAML because of overcoming the stated issues with differentiation through a long inner loop? The experiments done in the paper involve very few inner loop steps and so I don’t believe the claim is supported.
Lastly, the experimental results are not very convincing. Though the authors say they modify the method proposed in Sun et al, the results from Sun et al. are not shown in the paper. Sun et al actually seem to achieve better results than the submission with the same 1-shot and better 5-shot accuracy. Was there a reason these results are not shown in the submission? Additionally, for the Omniglot experiment, is there any reason why it was not performed with the typical convolutional network architecture? Since the original MAML results on 20-way Omniglot are with a convolutional network, using the convolutional network would make the results more meaningful relative to previous work and show that the method is more broadly applicable to convolutional networks.
I believe there are several issues with the paper as stated above. Because of these issues, it is hard to evaluate whether the idea proposed is of significant benefit.
References
Andrychowicz et al. Learning to learn by gradient descent by gradient descent. NIPS 2016.
Ravi & Larochelle. Optimization as a Model for Few-Shot Learning. ICLR 2017.
Sun et al. Meta-transfer learning for few-shot learning. |
iclr_2020_SJeX2aVFwH | Given a set of distances amongst points, determining what metric representation is most "consistent" with the input distances or the metric that best captures the relevant geometric features of the data is a key step in many machine learning algorithms. In this paper, we focus on metric constrained problems, a class of optimization problems with metric constraints. In particular, we identify three types of metric constrained problems: metric nearness (Brickell et al. (2008)), weighted correlation clustering on general graphs (Bansal et al. (2004)), and metric learning (Bellet et al. (2013); Davis et al. (2007)). Because of the large number of constraints in these problems, however, these and other researchers have been forced to restrict either the kinds of metrics learned or the size of the problem that can be solved. We provide an algorithm, PROJECT AND FORGET, that uses Bregman projections with cutting planes, to solve metric constrained problems with many (possibly exponentially) inequality constraints. We also prove that our algorithm converges to the global optimal solution. Additionally, we show that the optimality error (L 2 distance of the current iterate to the optimal) asymptotically decays at an exponential rate. We show that using our method we can solve large problem instances of three types of metric constrained problems, out-performing all state of the art methods with respect to CPU times and problem sizes. | The paper presents an algorithm for optimizing a function f under the constraint that the square matrix variable x represents a "metric". In this context, this means that we have also observed a graph G with n vertices, and x is of size n by n, with x(i, j) < x(i, e) + x(e, j) if i ~ e and j ~ e are adjacent: this is a generalized form of the triangle inequality.
Authors argue that the constraint "x is a metric" translates into exponentially many linear constraints, which results in a hard-to-solve problem.
The algorithm they propose to tackle this (Algorithm 1) has two subroutines that are shown in Algorithm 2 (Forget and Project). The Project subroutine itself is a projection onto a convex set according to a Bregman divergence, which is not trivial. In this paper, I understand that the authors only consider metrics of the type x' L x, where L = C'C > 0 is psd.
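To make the flavor of the Project subroutine concrete, here is a simplified sketch of a single Euclidean projection onto one violated triangle-inequality constraint; the actual algorithm uses general Bregman projections with cutting planes and correction ("forget") steps, so this is only an illustration, not the paper's method:

```python
# Euclidean projection of x onto {x : x[i,j] - x[i,e] - x[e,j] <= 0}; with the constraint
# normal having entries (+1, -1, -1), the violation is split evenly across the three
# entries. Symmetry of x is ignored here for brevity.
import numpy as np

def project_triangle(x, i, j, e):
    violation = x[i, j] - x[i, e] - x[e, j]
    if violation > 0:
        theta = violation / 3.0
        x[i, j] -= theta
        x[i, e] += theta
        x[e, j] += theta
    return x

x = np.array([[0.0, 5.0, 1.0],
              [5.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
x = project_triangle(x, 0, 1, 2)   # 5 > 1 + 1 is violated, so the three entries are adjusted
print(x)
```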
Authors claim that the sequence created by their algorithm asymptotically converges to the global optimum, and show numerical superiority to baselines.
Major remarks:
My general feeling is that the paper overstates its results. The paper has some good contribution, which could be better emphasized.
The algorithm stacks multiple subroutines which are not necessarily very light. I am skeptical about the numerical efficiency of such algorithms.
Theoretical results are stated asymptotically while being interpreted in the text as finite-step results: on page 5, after Corollary 1, we read "The algorithm spends the first few iterations ..."; in this case, a theoretical result should support the claim.
The algorithm starts at a stationary point of f. This itself can be nontrivial. Can authors discuss this?
Minor remarks:
metric and distance mean the same to me, hence the first sentence of the intro doesn't read easily.
what is \cal A in line 5 of Algorithm 1? It seems to be a "list of hyperplanes" according to the previous text, but it is unclear to me how to build it algorithmically
The notation L is confusing in Algo 1 MetricViolation: wasn't L the matrix defining the metric?
A few typos: l. 12 Algo 1, e = (i, j), 3.2 "global optimum [remove solution]." |
iclr_2020_BJxH22EKPS | Neural architecture search (NAS) searches architectures automatically for given tasks, e.g., image classification and language modeling. Improving the search efficiency and effectiveness have attracted increasing attention in recent years. However, few efforts have been devoted to understanding the generated architectures. In this paper, we first reveal that existing NAS algorithms (e.g., DARTS, ENAS) tend to favor architectures with wide and shallow cell structures. These favorable architectures consistently achieve fast convergence and are consequently selected by NAS algorithms. Our empirical and theoretical study further confirms that their fast convergence derives from their smooth loss landscape and accurate gradient information. Nonetheless, these architectures may not necessarily lead to better generalization performance compared with other candidate architectures in the same search space, and therefore further improvement is possible by revising existing NAS algorithms. | --- Update after author's response ---
I thank the authors for providing the detailed response to all my concerns. I am revising my rating to weak accept.
Specifically, generalising over 52 different variants of depth-width settings is enough, and the updated version is more solid than the earlier one. I think this paper should be accepted.
Summary:
The paper observes a common pattern in the cells found by 5 NAS algorithms, namely that the discovered cells usually have a wide but shallow structure, and claims that the reason for this pattern is that architectures with shallow but wide structures converge fast during training and are therefore sampled by the NAS policy. To justify this fast-convergence claim, the paper proposes to 1) define a width-depth level (width based on feature-map dimensions, and depth based on the DAG connectivity) for each cell in the search space, and randomly sample one architecture at each level on top of a best cell discovered by the NAS algorithms, and 2) train them on the original task from scratch independently, providing visualizations of training curves under various learning-rate settings, the loss landscape, as well as gradient-variance plots. For the theoretical analysis, the paper formulates the narrowest and widest cells and shows that the difference of gradients is bounded via the Lipschitz smoothness of the parameter matrices; such a bound usually indicates that the widest architecture can converge faster than the narrowest one.
Whilst this observation is interesting, I find the empirical and theoretical justifications insufficient to support the claims, for the following reasons: 1) The experiments are overwhelmingly built on top of **1** architecture per width-depth level (and even that one obtained from random sampling), which may not represent the common behavior in the search space well, so the generality of these claims remains questionable; 2) The theoretical analysis shows that the gradient variance of the narrowest architecture is bounded relative to the widest cell; however, in practice these architectures are not properly evaluated, and without a proper extension it is hard to conclude that such a difference bound holds between an arbitrary wider and narrower architecture pair; 3) The experiments in the supplementary negatively affect the generalizability of this paper, since the trend observed on the DARTS search space does not agree with the AmoebaNet and SNAS cases; 4) The paper claims that NAS algorithms tend to pick fast-converging architectures over those that converge late; while intuitive, this is claimed without showing any detailed trace of how the NAS algorithms converge and which architectures they actually sample during the search phase.
Nevertheless, I do agree that this paper has a clear motivation, and the observation is interesting and important. If the authors could show that the conclusions still hold after scaling up the experiments, I would be leaning to accept this paper in the end.
Strength
+ The observation that these NAS algorithms tend to pick wide but shallow cell types is interesting, and the motivation of this work is well justified.
+ Experiments are thorough and instructive under the paper's current setting.
+ The theoretical justification for the gradient variance between the narrowest and widest cells is sound.
+ The paper is well written and easy to follow, and the figures are presented clearly.
Weakness
- Insufficient experimental setting to support the claim.
My major concern is that, under the current experimental design, it is not clear whether the observation is well justified, and this impedes the main contribution. As mentioned in the summary above, it is not convincing that one architecture per width-depth level is enough. I understand that exploiting all the variants is resource consuming; however, without such experiments, the current results can be impacted by many factors. For example: 1) as in Appendix A2, for each level the paper "fixed the partial order of their intermediate nodes" and "replace the source node of their associated operations by uniformly randomly sampling a node from their proceeding nodes in the same cell"; if I understand correctly, this means the operations remain the same. However, since these best architectures were searched over both operations and topological connections, new architectures generated this way may be sub-optimal, so the larger gradients or slower convergence may not only be because they are "narrower" and "deeper". Without proper isolation, it is not possible to draw the conclusion as stated in the paper.
To this end, I suggest the authors provide the following experiments: 1) sample all the variants at each level within a search space like NAS-Bench-101 [1], where all architecture performances are known; 2) at least sample a sufficient number in the current space (probably > 30 to be statistically significant); 3) randomly sample a small set of topological variants, then run the NAS algorithms to search for the best operation set, and then redo the experiments in the paper.
- The trend in the DARTS evaluation results does not agree with SNAS and AmoebaNet.
In Figure 5 (loss landscape plot) and Figure 6 (gradient variance heatmap) for DARTS, the wider-shallower architectures are better compared to the narrower-deeper ones; however, this trend is not significant in Figures 14 and 16 for the AmoebaNet space and Figures 15 and 17 for the SNAS space. After a closer look, I noticed that in Figure 2, for the DARTS C3 and C4 cells, the input node x0 is not connected to the graph, whereas in C, C1, C2, and all the topologies in Figures 10 and 11, x0 is connected. Could the author(s) comment on this? Could this be the reason why the C3 and C4 DARTS cells are worse than the other architectures?
Minor comments
Page 1 - Introduction paragraph 3 line 1 - 'typologies': is this referring to 'topologies'?
Table 1 - The adapted architectures on CIFAR-10 are mostly worse, and even on CIFAR-100 they are better only by a small margin. This does not support the claim well.
Figure 9 - Could the author provide the width-depth information for each index?
Reference
[1] Ying et al. NAS-Bench-101: Towards Reproducible Neural Architecture Search, ICML'19. |
iclr_2020_S1gTAp4FDB | Symbolic regression is a type of discrete optimization problem that involves searching expressions that fit given data points. In many cases, other mathematical constraints about the unknown expression not only provide more information beyond just values at some inputs, but also effectively constrain the search space. We identify the asymptotic constraints of leading polynomial powers as the function approaches 0 and ∞ as useful constraints and create a system to use them for symbolic regression. The first part of the system is a conditional production rule generating neural network which preferentially generates production rules to construct expressions with the desired leading powers, producing novel expressions outside the training domain. The second part, which we call Neural-Guided Monte Carlo Tree Search, uses the network during a search to find an expression that conforms to a set of data points and desired leading powers. Lastly, we provide an extensive experimental validation on thousands of target expressions showing the efficacy of our system compared to exiting methods for finding unknown functions outside of the training set. | This paper considers a task of symbolic regression (SR) when additional information called 'asymptotic constraints' on the target expression is given. For example, when the groundtruth expression is 3 x^2 + 5 x, it behaves like x^2 in the limit x -> infinity, thus the power '2' is given as additional information 'asymptotic constraint'. In the paper's setting, for SR with univariate groundtruth functions f(x), 'asymptotic constraints' for x-> infinity and x -> 0 are given. For this situation, the paper proposes a method called NG-MCTS with an RNN-based generator and MCTS guided by it to consider asymptotic constraints. In SR, a learner is asked to acquire an explicit symbolic expression \hat{f}(x) from a given set of datapoints {x_i, f(x_i)}, but unlike the parameter optimization aspect of a standard supervised ML setting, the problem is essentially combinatorial optimization over exponentially large space of symbolic expressions of a given context-free grammar (CFG). For a given symbolic space with a prespecified CFG, extensive experimental evaluations are performed and demonstrated significant performance gain over existing alternatives based on EA. Also, quantitative evaluations about extrapolative performance and detailed evaluation of the RNN generator are also reported.
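For concreteness, a small numerical illustration of the asymptotic constraint itself (not of the paper's method): the leading power at infinity can be read off as the slope of log|f(x)| versus log(x) at large x.

```python
# For f(x) = 3x^2 + 5x, the log-log slope at large x approaches 2, the leading power
# that would be supplied as the constraint at infinity.
import numpy as np

f = lambda x: 3 * x**2 + 5 * x
xs = np.array([1e3, 1e4, 1e5, 1e6])
slopes = np.diff(np.log(np.abs(f(xs)))) / np.diff(np.log(xs))
print(slopes)   # tends to 2.0
```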
Though it includes a lot of quite interesting methods and results, the paper has two major issues:
(1) the proposed method NG-MCTS explicitly uses the fact that the target expressions are generated from a CFG, but this assumption sounds unnatural if the target problem is SR (unlike the case of GVAE). Apparently, all results except a toy case study in 5.2 depend on artificial datasets from a CFG, and any practical advantages or impacts are still unclear because the experimental settings are very artificial.
(2) the most important claim of this paper would be the proposal to use 'asymptotic constraints', but the availability of this information sounds too strong and again artificial in practical situations. A one-line motivation saying that 'this property is known a priori for some physical systems before the detailed law is derived' is not convincing enough. |
iclr_2020_SJxIkkSKwB | We study the problem of training machine learning models incrementally with batches of samples annotated with noisy oracles. We select each batch of samples that are important and also diverse via clustering and importance sampling. In particular, we incorporate model uncertainty into the sampling probability to compensate poor estimation of the importance scores when the training data is too small to build a meaningful model. Experiments on benchmark image classification datasets (MNIST, SVHN, and CIFAR10) shows improvement over existing active learning strategies. We introduce an extra denoising layer to deep networks to make active learning robust to label noises and show significant improvements. | Summary: The paper proposes an uncertainty-based method for batch-mode active learning with/without noisy oracles which uses importance sampling scores of clusters as the querying strategy. Authors evaluate their method on MNIST, CIFAR10, and SVHN against approaches such as Core-set, BALD, entropy, and random sampling and show superior performance.
Pros:
(+): The paper is well-written and well-motivated.
(+): The problem is timely and has direct real world applications.
(+): Applying the denoising layer is an interesting and viable idea to overcome noise effects.
Cons that significantly affected my score and resulted in rejecting the paper are as follows:
1 - Experimental setting and evaluations:
The biggest drawback in this paper is the experimental setting which is not rigorous enough for the following reasons:
(a) Weak datasets: The authors have chosen some standard benchmarks, but they do not seem convincing as the datasets are too easy. Based on my experience, the behavior of an active learning agent trained on a small number of classes does not necessarily generalize to cases where the number of classes is large. So I'd like to ask the authors to try to evaluate on datasets with a larger number of classes as well as more realistic images (as opposed to thumbnail images).
(b) Comparison to state of the art: More importantly, authors are missing out on an important baseline which is a recent ICCV paper [1] on task-agnostic pool-based batch-mode active learning that has explored both noisy and perfect oracles and to the best of my knowledge is the current state of the art. Authors can extend their experimental setting to the datasets used in [1] including CIFAR100 and ImageNet and provide comparison. The reason that it is important to compare is that the method in [1] is task-agnostic and does not explicitly use uncertainty hence it is interesting to see how this method performs against it.
(c) More on baselines and related work: In addition to [1], different variations of ensemble methods have been serving as active baselines in this field, and I recommend adding one as a baseline. For recent work in this line, see this paper from CVPR 2018 [2]. Moreover, the authors seem to be missing a long-standing line of active learning research known as Query-by-Committee (QBC), which began in 1997 [3], in the related work section; it should be cited as well.
(d) Hyper parameter tuning: Last but not least about the experiments is the hyper parameter tuning which is not addressed. It is important to not use the well-known hyper parameters for these benchmarks that have been obtained using validation set from the entire dataset. Authors should explain how they have performed this.
2 - Report sampling time:
Another important factor missing in the evaluations is reporting time complexity or wall-clock time that it takes to query samples. Authors should measure this precisely and make sure it is being reported similarly across all the methods. I am asking this because random selection is still an effective baseline in the field and it only takes a few milliseconds. Therefore, the sampling time of a new algorithm should be gauged based on that while performing better than random. Given the multiple steps in this algorithm I am skeptical that the sampling time would be proportional to the gain obtained in accuracy versus labeling ratio over random selection baseline.
3: Section 5.2 is not informative:
(a) My last major concern is section 5.2 where the discussion on results is given along with supporting figures.
Lack of quantitative results: First of all, no quantitative results are given for the values plotted in figure 3 and 4 (neither in the main text nor in the supplement) and different methods happen to be too close to each other, making it hard to see the right color for standard deviations. Also, in the discussion corresponding to those figures no information is provided in this regard. It is important to report how much labeling effort this algorithm is saving by comparing number of samples needed by each method to achieve the same accuracy because that is the main goal in AL. Lack of numbers also makes it hard for this work to be used by others.
(b) Figure legends: The way authors have labeled their method in Figure 3 is confusing as the “Proposed+noise” happens to achieve better performance over “Proposed”. I think by “noise” authors meant denoising layer was being used (please correct me if I am wrong) but this is not what the legends imply.
(c) X axis label: It is common to report accuracy versus percentage of labeled data making it more understandable of how far each experiment has been through each dataset. Additionally, I recommend reporting the maximum achievable accuracy for each dataset assuming that all the data was labeled. This serves as an upper bound.
(d) Font sizes in figures: It will be helpful to make them larger.
4. I also have a more general concern about uncertainty-based methods. I know that they have been around for a long time, but given that predictive uncertainty is still an open problem and there is still no concrete method to measure calibrated confidence scores for the outputs of a deep network (Dropout and BN as used in this paper have already been outperformed by ensembles (see [4])), relying on uncertainty is not the best direction to go. It is literally a chicken-and-egg problem to rely on the confidence scores of the main-stream task while it is being trained itself. This issue has been raised in this paper, but I am still not convinced that the paper has fully addressed it. I think the community needs to explore task-agnostic methods more deeply. [1] is a good start on this path, but there is always more to do. This concern is not necessarily a major part of my decision assessment; I only want the authors to state their opinion on this and explain how accurately they think this issue is being addressed.
The following issues are less major and are given only to help, and not part of my decision assessment:
1- In Figure 3(c), it appears that the accuracy for "Proposed + noise" when \epsilon=0.1 is higher than when it is noise-free. It might be a misreading as the figure is coarse and it is hard to compare, but if that is the case, can the authors explain it?
2- The Abstract does not read well and does not state the main contribution. It puts too much emphasis on batch-mode active learning, which has become an intuitive approach since deep networks have become popular. Also, the wording "Our approach bridges between uniform randomness and score based importance sampling of clusters" should be changed, as all other active learning algorithms are trying to do that.
3 - In section 5.1, please state that you used VGG 16 (I assume so since it is what was used in the cited reference (Gal et al. 2017)), but the authors need to verify that. Also, the other citation given for this (Fchollet, 2015) is confusing as it is the Keras package documentation, while in the next sentence the authors state that they have implemented their algorithm in PyTorch. So please shed some light on this.
*******************************************************************
As a final note, I would be willing to raise my score if authors make the experimental setting stronger (see suggestions above).
[1] Sinha, Samarth, Sayna Ebrahimi, and Trevor Darrell. "Variational Adversarial Active Learning." arXiv preprint arXiv:1904.00370 (2019).
[2] Beluch, William H., et al. "The power of ensembles for active learning in image classification." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
[3] Freund, Yoav, et al. "Selective sampling using the query by committee algorithm." Machine learning 28.2-3 (1997): 133-168.
[4] Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. "Simple and scalable predictive uncertainty estimation using deep ensembles." Advances in Neural Information Processing Systems. 2017.
*******************************************************************
*******************************************************************
*******************************************************************
POST-REBUTTAL:
In the revised version, there are new tables (Tables 1-4) provided in the appendix, which I found to differ from the results reported for previous baselines by more than 6%. For example, according to the Core-set paper (Sener, 2018), Figure 4, they achieve near 80% using 40% of the data (20000 samples), and according to the VAAL paper (Sinha et al. 2019 github page: https://github.com/sinhasam/vaal/blob/master/plots/plots.ipynb), they achieve 80.90+-0.2. However, the current paper reports 71.99 ± 0.55 for Core-set, and 74.06 ± 0.47 for VAAL, which is a large mismatch.
More importantly, looking at the results provided in the VAAL paper (Sinha et al. 2019) or the Core-set paper (Sener, 2018), they show that their performance, as well as that of most of their baselines, is superior to random selection by a large gap; but in this paper, in the results shown in Tables 1 to 4, random is superior (or on par) to all baselines in almost all cases, and the proposed method is the only one that outperforms the random baseline, which is clearly a wrong claim. Therefore, I decrease my score from weak reject to reject.
iclr_2020_H1eCw3EKvH | Reinforcement learning (RL) is frequently used to increase performance in text generation tasks, including machine translation (MT), notably through the use of Minimum Risk Training (MRT) and Generative Adversarial Networks (GAN). However, little is known about what and how these methods learn in the context of MT. We prove that one of the most common RL methods for MT does not optimize the expected reward, as well as show that other methods take an infeasibly long time to converge. In fact, our results suggest that RL practices in MT are likely to improve performance only where the pre-trained parameters are already close to yielding the correct translation. Our findings further suggest that observed gains may be due to effects unrelated to the training signal, concretely, changes in the shape of the distribution curve. | This paper first theoretically demonstrates that a commonly used reinforcement learning method for neural sequence-to-sequence models (e.g. in NMT), contrastive minimum risk training (CMRT), is not guaranteed to converge to local (let alone global) optima of the reward function. The paper then empirically demonstrates that the REINFORCE algorithm, while not subject to the same theoretical flaws as CMRT, in practice fails to improve NMT models unless the baseline model is already "nearly correct" (i.e. the correct tokens were already within the few most probable tokens before the fine-tuning steps with REINFORCE). In fact, some of the performance gains of using REINFORCE/CMRT can be attributed to making the model's output probability distribution more peaked, and not necessarily from making the target tokens more probable as commonly assumed.
Overall, this is an excellent paper that offers significant contributions for the field. I have summarised the key strengths of the paper below, along with several suggestions and questions that I hope will be addressed by the authors. Based on my assessment, I am recommending a rating of "Accept" for this paper.
Strengths:
1. The paper is very well-written and well-structured. It starts off by pointing out the theoretical limitations of CMRT (and concisely recaps the key differences between CMRT and REINFORCE), and continues with an extensive set of experiments that clearly illustrates the limitations of REINFORCE in practice.
2. I also like the use of both controlled simulations (including one where the reward is constant) and NMT experiments with real data. The controlled simulations are useful to abstract away from the full complexity of the model and investigate what happens under various control scenarios, while the NMT experiments demonstrate that the findings still hold under the realistic setup.
3. The findings are really interesting and clearly illustrate the limitations of existing REINFORCE/CMRT methods for neural sequence-to-sequence models. It is very interesting to see that REINFORCE fails to make the target token most probable when the initial model ranks the target token as the third or more probable tokens under the model (Figure 2), even under the simple controlled simulations, which highlights the prohibitively high sample complexity of the model.
4. The peakiness effect hypothesis (i.e. attributing the gains of REINFORCE to making the output distribution more peaked, and not necessarily by making the target tokens more probable) is well-supported by the paper's empirical evidence. It is really illuminating that using a constant reward of 1 leads to the same BLEU score as actually optimising for BLEU in NMT (Section 5.2).
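To make this mechanism concrete, here is a toy simulation of my own (not the paper's NMT setup): with a constant reward and no baseline, the REINFORCE update on softmax logits is exactly gradient ascent on the log-probability of the model's own samples, which tends to sharpen the distribution without any signal about which token is correct.

```python
# Toy illustration of the peakiness mechanism: with reward = 1 and no baseline, the
# REINFORCE gradient wrt the logits is onehot(sampled_token) - p, so the model keeps
# reinforcing whatever it already samples; entropy typically drops over the run.
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(5)                      # 5-token toy vocabulary, uniform start
lr = 0.5

def entropy(p):
    return float(-(p * np.log(p)).sum())

for step in range(200):
    p = np.exp(logits - logits.max()); p /= p.sum()
    y = rng.choice(5, p=p)                # sample from the current policy
    reward = 1.0                          # constant reward, no baseline
    grad = reward * (np.eye(5)[y] - p)    # REINFORCE gradient wrt logits
    logits += lr * grad
    if step % 50 == 0:
        print(step, round(entropy(p), 3))
```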
Suggestions and questions:
1. Section 4.2 (NMT Experiments) indicates that REINFORCE fine-tuning is done for 10 epochs, with 5,000 sentences per epoch, and k=1. Considering the enormous discrete sample space, one could expect that using multi-sample REINFORCE (i.e. k > 1) and training the model for many more epochs might mitigate the identified problems to some extent, and thus change the findings. Training for 5,000 sentences * 10 epochs may just not be enough for the RL fine-tuning to make a big difference.
2. In Figure 1, the x-axis is the "Update Size" with a scale between -1.0 and 1.0. This "Update Size" variable is not really explained in the paper, nor is it explained why the scale is between -1.0 and 1.0.
3. In my understanding, the controlled simulations (Section 4.1) are done at the word level (including word-level rewards), as opposed to the NMT experiments, which are done at the sequence level with sentence-level rewards. If this is the case, this should be made clearer.
4. To make Figure 3 easier to understand, the caption should indicate that a lower cumulative percentage means a more peaked output distribution.
5. Rather than breaking down the analysis by where the target token is ranked by the initial, pre-RL model (e.g. the target token is ranked second/third best in Figures 2 and 5), perhaps what really matters is the probability assigned to the target token. For instance, even if the target token is ranked third best by the initial model, there will be a big difference whether it is assigned a probability of 0.1 or 0.01 (i.e. the latter case is much less likely to be sampled, which would exacerbate the problem). Including this analysis might help strengthen the paper further. |
iclr_2020_ryeFY0EFwS | An open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data. We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent Gradients: Gradients from similar examples are similar and so the overall gradient is stronger in certain directions where these reinforce each other. Thus changes to the network parameters during training are biased towards those that (locally) simultaneously benefit many examples when such similarity exists. We support this hypothesis with heuristic arguments and perturbative experiments and outline how this can explain several common empirical observations about Deep Learning. Furthermore, our analysis is not just descriptive, but prescriptive. It suggests a natural modification to gradient descent that can greatly reduce overfitting. | The paper studies the link between alignment of the gradients computed on different examples, and generalization of deep neural networks. The paper tackles an important research question, is very clearly written, and proposes an insightful metric. In particular, through the lenses of the metric it is possible to understand better the learning dynamics on random labels. However, the submission seems to have limited novelty, based on which I am leaning towards rejecting the paper.
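For concreteness, this is the kind of quantity I take the paper (and the related stiffness work cited below) to be studying: the average pairwise cosine similarity between per-example gradients. The sketch is my own, and the toy model and data are placeholders.

```python
# Average pairwise cosine similarity between per-example gradients of a small network;
# high values mean gradients from different examples reinforce each other.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(8, 20), torch.randint(0, 2, (8,))

def per_example_grad(xi, yi):
    loss = F.cross_entropy(model(xi.unsqueeze(0)), yi.unsqueeze(0))
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

G = torch.stack([per_example_grad(x[i], y[i]) for i in range(len(x))])
G = F.normalize(G, dim=1)
cos = G @ G.T
off_diag = cos[~torch.eye(len(x), dtype=torch.bool)]
print(off_diag.mean().item())   # higher = more coherent / "stiffer" gradients
```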
Detailed comments
1. The prior and concurrent work is not discussed sufficiently:
a) The novelty of the "Coherent Gradients hypothesis" is not clear to me. First, the empirical fact that some examples are easier to learn than others in training of deep networks was the key focus of [5].
Hence, the "Coherent Gradient Hypothesis" should mostly be considered an explanation for why simple examples/simple functions are learned first. The "Coherent Gradient Hypothesis" proposes that the key mechanism behind this phenomenon is that simple examples/functions have co-aligned gradients and hence a larger "effective" learning rate. However, there are already quite convincing and closely related hypotheses. For example, the spectral bias interpretation of deep networks [2] actually suggests the same view. It is just expressed in a different formalism, but can also be cast as having a higher effective learning rate for the strongest modes. Similarly, [3] proposes that SGD learns functions of increasing complexity. A detailed comparison between these hypotheses is needed.
b) The "gradient coherence" metric is very closely related to stiffness, studied in [1] (01.2019 on arXiv). [1] studies the cosine (or sign) between gradients coming from different examples, and reaches quite similar conclusions. It is also worth noting that [6, 7] propose and study a very similar metric as well. While arXiv submissions are not considered prior work, these three preprints should be discussed in detail in the submission.
c) It should also be remarked that the "Coherent Gradient hypothesis" is to some extent folk knowledge. In particular, it is quite well known, and has also been brought to the attention of the deep learning community, that in linear regression the strongest modes of the dataset are learned first when training using GD (see for instance [4]), which causally speaking stems directly from gradient coherence; these modes correspond to the largest eigenvalues of the (constant) Hessian. To make it more precise: GD solving linear regression can be seen as having higher "effective" learning rates along the strongest modes in the dataset.
2. The experiments on random labels and restricting gradient norms are interesting. However, [5] should be cited. They experimented with the impact of regularization on memorization, which, due to the addition of noise, probably also suppresses weak gradients.
3. Experiments on MNIST do not feel adequate. While I do not doubt the validity of the experimental results, the paper should include results on another dataset; ideally from other domain than vision.
4. Plots in Figure 4 are too small to read. I would recommend moving half of them to the Supplement?
5. "Understanding why solutions of the optimization problem on the training sample carry over to the population at large" - I am not sure what you mean here. Could you please clarify?
6. "Furthermore, while SGD is critical for computational speed, from our experiments and others (Keskar et al., 2016; Wu et al., 2017; Zhang et al., 2017) it appears not to be necessary.". Please note there is very little work on training with GD large models. Also, citing in this context Keskar is misleading. Wasn't the whole point of Keskar to show why large batch size training overfits? Finally, there are many papers on studying the role of learning rate and batch size in generalization (not computational speed). I think this sentence should be rewritten to clarify what is the experimental data that GD is "sufficient", and SGD is just needed for "computational speed".
References
[1] Stanislav Fort et al, Stiffness: A New Perspective on Generalization in Neural Networks, https://arxiv.org/abs/1901.09491
[2] Rahaman et al, On the Spectral Bias of Neural Networks, https://arxiv.org/abs/1806.08734
[3] Nakkiran et al, SGD on Neural Networks Learns Functions of Increasing Complexity, https://arxiv.org/abs/1905.11604
[4] Goh, Why Momentum Really Works, https://distill.pub/2017/momentum/
[5] Arpit et al, A Closer Look at Memorization in Deep Networks, https://arxiv.org/abs/1706.05394
[6] He and Su, The Local Elasticity of Neural Networks, https://arxiv.org/abs/1910.06943
[7] Sankararaman, The Impact of Neural Network Overparameterization on Gradient Confusion and Stochastic Gradient Descent, https://arxiv.org/abs/1904.06963 |
iclr_2020_B1xIj3VYvr | A weakly supervised learning based clustering framework is proposed in this paper. As the core of this framework, we introduce a novel multiple instance learning task based on a bag level label called unique class count (ucc), which is the number of unique classes among all instances inside the bag. In this task, no annotations on individual instances inside the bag are needed during training of the models. We mathematically prove that with a perfect ucc classifier, perfect clustering of individual instances inside the bags is possible even when no annotations on individual instances are given during training. We have constructed a neural network based ucc classifier and experimentally shown that the clustering performance of our framework with our weakly supervised ucc classifier is comparable to that of fully supervised learning models where labels for all instances are known. Furthermore, we have tested the applicability of our framework to a real world task of semantic segmentation of breast cancer metastases in histological lymph node sections and shown that the performance of our weakly supervised framework is comparable to the performance of a fully supervised Unet model. | This paper proposes a new type of weakly supervised clustering / multiple instance learning (MIL) problem in which bags of instances (data points) are labeled with a "unique class count (UCC)*, rather than any bag-level or instance-level labels. For example, a histopathology slide (the bag), consisting of many individual pixels to be labeled (the instances) could be labeled at the bag level only with UCC = 1 (for only healthy or only metastatic) or UCC = 2 (for mixed / border case). The paper then proposes an approach for clustering instances based on the following two-step approach: (1) a UCC model is trained to predict the UCC given an input bag, and (2) the features of this learned UCC model are used in an unsupervised clustering algorithm to the get the instance-level clusters / labels. The paper also provides a theoretical argument for why this approach is feasible.
Overall, this paper proposes a (1) novel and creative approach that is well validated both (2) theoretically and (3) empirically, and is relevant to a real-world problem, and thus I believe it should be accepted. In slightly more detail:
- (1) MIL where we are given bag-level labels only is a well-studied problem that occurs in many real-world settings, such as the histopathology one used in this paper. However, to this reader's knowledge, this is a new variant that is both creative and motivated by an actual real-world study, which is exciting and alone warrants presentation at the conference in my opinion.
- (2) The theoretical treatment is high-level, but still serves a clear purpose of establishing the feasibility of the proposed method; this modest and appropriate purpose serves the paper well.
- (3) The empirical results are thorough---e.g., the use of the two loss components for the UCC model is appropriately ablated, a range of baseline approaches is compared against, multiple evaluation points are provided (i.e., both UCC prediction and final clustering metrics), and a real-world use case is presented---and the results are impressive.
Some comments to improve the paper:
- The connection of the proposed UCC approach to the motivating histopath example should be explicitly stated upfront to help the reader understand how this method could be used and how it makes sense!
- It seems that the KDE element of the UCC model was chosen in part to enable the theoretical analysis? If so, this should be clearly stated to help the reader understand the design rationale.
- Either way, it seems that a different approach than the KDE layer could have been taken; this should be added to the ablation experiments. (For concreteness, my reading of the overall two-step pipeline is sketched below.)
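A rough sketch of my reading of the two-step approach, purely for illustration: the soft histogram below stands in for the paper's KDE layer, and all architectures and dimensions are arbitrary placeholders.

import torch
import torch.nn as nn

class UCCNet(nn.Module):
    # Stand-in for the paper's model: per-instance features, a soft histogram
    # in place of the KDE layer, and a bag-level UCC classification head.
    def __init__(self, in_dim=32, feat_dim=8, n_bins=11, max_ucc=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, feat_dim), nn.Sigmoid())
        self.bins = torch.linspace(0, 1, n_bins)
        self.head = nn.Sequential(nn.Linear(feat_dim * n_bins, 64), nn.ReLU(),
                                  nn.Linear(64, max_ucc))

    def features(self, bag):                 # bag: (n_instances, in_dim)
        return self.encoder(bag)             # (n_instances, feat_dim)

    def forward(self, bag):
        z = self.features(bag)
        d = (z.unsqueeze(-1) - self.bins) ** 2           # (n, feat, bins)
        hist = torch.softmax(-d / 0.01, dim=-1).mean(0)  # per-feature distribution over the bag
        return self.head(hist.reshape(1, -1))            # bag-level UCC logits

model = UCCNet()
bag = torch.randn(50, 32)               # one bag of 50 unlabeled instances
ucc_logits = model(bag)                 # trained with only the bag's UCC label
inst_feats = model.features(bag)        # step 2: cluster these, e.g. with k-means

The point of the sketch is that only the bag-level UCC label is ever needed for training, while the per-instance features are what get clustered afterwards; spelling this out early would help readers connect the method to the histopathology use case.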
iclr_2020_Hkl6i0EFPH | We present a novel approach G-PATE for training a scalable differentially private data generator, which can be used to produce synthetic datasets with strong privacy guarantee while preserving high data utility. Our approach leverages generative adversarial nets to generate data and exploits the PATE (Private Aggregation of Teacher Ensembles) framework to protect data privacy. Compared to existing methods, our approach significantly improves the use of privacy budget. This is possible since we only need to ensure differential privacy for the generator, which is the part of the model that actually needs to be published for private data generation. In particular, we connect a student generator with an ensemble of teacher discriminators and propose a private gradient aggregation mechanism to ensure differential privacy on all the information that flows from the teacher discriminators to the student generator. Theoretically, we prove that our algorithm ensures differential privacy for the generator. Empirically, we provide thorough experiments to demonstrate the superiority of our method over prior work on both image and non-image datasets. | This paper studies the problem of differentially private data generator. Inspired by the general GAN framework and the PATE mechanism, the authors propose a new differentially private training algorithm for data generator. The problem of training data generator with privacy guarantee considered in this paper is very interesting, and the proposed algorithm looks novel. However, there are lots of unclear statements in the current paper, and I cannot tell whether the proposed algorithm is indeed better than previous methods. Following are my major concerns:
1. It is unclear what loss functions are used in equations (1) and (2). Please define k when introducing equation (2).
2. The training framework introduced in Section 3.1 is different from the traditional GAN framework, so my concern is whether this framework will give us good generated samples, since the performance of GANs has been demonstrated in both theory and practice. The authors should at least empirically show the performance of the proposed framework in the non-private setting.
3. There is no introduction of $(\epsilon,\delta)$-differential privacy before Definition 1.
4. There is no definition of Renyi differential privacy, so the statement of Theorem 2 is unclear. In addition, what is data-dependent Renyi differential privacy?
5. The privacy guarantee of Algorithm 2 is not very clear, because there are many parameters in Algorithms 1 and 2 that may affect the privacy guarantee, and Theorem 3 does not state such requirements. For example, how should $\sigma_1,\sigma_2$ be chosen? In Theorem 7, there are some constraints on different parameters; will they be satisfied by your algorithm?
6. How will the number of teacher models affect the privacy guarantee?
7. Why do you choose a random projection matrix with variance $1/k$, and what is the projection dimension for the different algorithms?
8. In Table 1, the results of the non-private GAN are different from those reported in the PATE-GAN paper. Since the baseline results in the current paper are much better than those reported in the PATE-GAN paper, it seems to me that the improvement of the proposed method comes from the stronger baseline.
Other comments:
1. $\lambda>1$ in Theorem 3.
2. Algorithm 2 should be moved to the main text.
3. The last sentence in Section 3.2 is not convincing.
4. Typo “differnet” in the caption of Table 1.
iclr_2020_SJxNzgSKvH | We present a selective sampling method designed to accelerate the training of deep neural networks. To this end, we introduce a novel measurement, the minimal margin score (MMS), which measures the minimal amount of displacement an input should take until its predicted classification is switched. For multi-class linear classification, the MMS measure is a natural generalization of the marginbased selection criterion, which was thoroughly studied in the binary classification setting. In addition, the MMS measure provides an interesting insight into the progress of the training process and can be useful for designing and monitoring new training regimes. Empirically we demonstrate a substantial acceleration when training commonly used deep neural network architectures for popular image classification tasks. The efficiency of our method is compared against the standard training procedures, and against commonly used selective sampling alternatives: Hard negative mining selection, and Entropy-based selection. Finally, we demonstrate an additional speedup when we adopt a more aggressive learningdrop regime while using the MMS selective sampling method. | ### Summary of contributions
This paper aims to accelerate the training of deep networks using selective sampling.
They adapt ideas from active learning (which uses some form of uncertainty estimation about the class label) to selectively choose samples on which to perform the backward pass. Specifically, they use the minimal margin score (MMS).
Their algorithm works by computing the forward pass over a batch of size B (which is much larger than the regular batch of size b), computing the uncertainty measure for each sample, and performing the backward pass only over the b samples with the highest uncertainty. The motivation is that the backward pass is more expensive than the forward pass, and that by only performing this pass on a subset of samples, computations are saved.
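In rough PyTorch pseudocode, my understanding of one training step is the following (the margin computation is a stand-in for the MMS criterion, and the model/optimizer are placeholders):

import torch
import torch.nn.functional as F

def selective_step(model, optimizer, data, labels, b):
    # data/labels: a large batch of size B; the backward pass is done only on
    # the b samples judged most uncertain (smallest top-1 vs top-2 logit margin,
    # standing in for the MMS criterion).
    with torch.no_grad():
        logits = model(data)                      # forward pass over all B samples
        top2 = logits.topk(2, dim=1).values
        margin = top2[:, 0] - top2[:, 1]          # small margin = close to the decision boundary
        idx = margin.argsort()[:b]                # keep the b hardest samples
    loss = F.cross_entropy(model(data[idx]), labels[idx])
    optimizer.zero_grad()
    loss.backward()                               # backward pass only over the b selected samples
    optimizer.step()
    return loss.item()

Note that in this formulation the selected b samples are forwarded a second time with gradients enabled, so the claimed savings hinge entirely on the no-grad forward over B being cheap relative to the full forward+backward over b -- which is exactly the premise I question below.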
### Recommendation
Reject. The central premise of the paper is unclear, the writing/presentation needs improvement, and the experiments are not convincing.
### Detailed comments/improvements:
There is a central premise of the paper that I don't understand: that the forward pass is much cheaper than the backward pass.
This is claimed in the intro by referring to charts that hardware manufacturers publish (but there are no references included), but I don't see why this should be the case.
For a linear network with weights W, the forward pass is given by the matrix-matrix product (rows of X are minibatch samples):
Y = XW^T
and the backward pass is given by the two matrix-matrix products:
dL/dX = dL/dY*dY/dX = dL/dY*W
dL/dW = dL/dY*dY/dW = (dL/dY)^T*X
Similarly, the two operations in the backward pass for convolutional layers are given by a convolution of the output gradients with the transposed weight kernels and with the input image, respectively.
Point being, I don't see why the backward pass should be more than 3x more expensive than the forward pass. A simple experiment in PyTorch confirms this: the code snippet pasted at the bottom shows that the backward pass takes only around 2.6x longer than the forward pass.
fprop: 0.009286s
bprop: 0.0240s
bprop/fprop: 2.5893x
In Algorithm 1, it is assumed that b << B. For this method to yield an improvement in computation, the forward pass would have to be *much* faster than the backward pass. Can the authors comment on where this justification comes from?
I am unclear on what the purpose of Section 4.1 is. This shows that the MMS of the proposed method is lower than the other two, but this should be completely expected since that is exactly the quantity being minimized.
There are also several unsubstantiated claims: "Lower MMS scores resemble a better...batch of samples", "the batches selected by our method provide a higher value for the training procedure vs. the HNM samples.", "Evidently, the mean MMS provides a clearer perspective...and usefulness of the selected samples". What does higher value, usefulness, clearer perspective mean?
More generally, it is unclear if there is really any improvement in the final performance from using the proposed method.
In Figure 2, all methods seem to have similar final performance.
In Figure 5, is there a reason why the curve for MMS is cut off? How does its final performance compare to that of the baseline method in red? It looks like the baseline might be better, but it's hard to tell from the figure.
Why are the experiments with the entropy measure in a separate section? Please include them along with the other methods in the same plot, i.e., merge Figure 2 and Figure 4.
My suggestions for improving the experimental section are as follows:
- include all methods together in all the plots/tables
- repeat experiments multiple times with different seeds to get error bars. Include these both in the learning curves and in the tables.
- It's hard to see small differences in the learning curves, so including tables as well is important. Include best performance for all the methods in the tables.
Finally, in 2019 CIFAR alone is no longer a sufficient dataset to report experiments on. Please report results on ImageNet as well.
One of the central premises of the paper is acceleration in terms of compute/time. To make this point, there should also be results in terms of walltime and floating-point operations. Please include these results in the paper.
### Code snippet timing forward/backward passes
import torch
import torch.nn as nn
import time

# Small MLP: 784 -> 1000 -> 1000 -> 10.
model = nn.Sequential(
    nn.Linear(784, 1000),
    nn.ReLU(),
    nn.Linear(1000, 1000),
    nn.ReLU(),
    nn.Linear(1000, 10),
    nn.LogSoftmax(dim=1),
)

data = torch.randn(128, 784)      # one fake minibatch
labels = torch.ones(128).long()   # arbitrary valid class labels

# Time the forward pass (CPU timing; on GPU, synchronization would be needed).
t = time.time()
pred = model(data)
loss = nn.functional.nll_loss(pred, labels)
fprop_time = time.time() - t

# Time the backward pass.
t = time.time()
loss.backward()
bprop_time = time.time() - t

print('fprop: {:.4}s'.format(fprop_time))
print('bprop: {:.4f}s'.format(bprop_time))
print('bprop/fprop: {:.4f}x'.format(bprop_time / fprop_time))
iclr_2020_B1e3OlStPB | Designing a convolution for a spherical neural network requires a delicate tradeoff between efficiency and rotation equivariance. DeepSphere, a method based on a graph representation of the sampled sphere, strikes a controllable balance between these two desiderata. This contribution is twofold. First, we study both theoretically and empirically how equivariance is affected by the underlying graph with respect to the number of vertices and neighbors. Second, we evaluate DeepSphere on relevant problems. Experiments show state-of-the-art performance and demonstrates the efficiency and flexibility of this formulation. Perhaps surprisingly, comparison with previous work suggests that anisotropic filters might be an unnecessary price to pay. | The paper presents DeepSphere, a method for learning over spherical data via a graphical representation and graph-convolutions. The primary goal is to develop a method that encodes equivariance to rotations, cheaply. The graph is formed by sampling the surface of the sphere and connecting neighbors according to a distance-based similarity measure. The equivariance of the representation is demonstrated empirically and theoretical background on its convergence properties are shown. DeepSphere is then demonstrated on several problems as well as shown how it applies to non-uniform data.
The paper is interesting and clear. The projection of structured data to graphical representations is both efficient in utilizing existing algorithmic techniques for graph convolutions and useful for approaching the spherical structure of the data. The theoretical analysis and discussion of sampling is interesting, though it should be more clearly stated throughout and potentially visualized in figures.
The experiments performed are thorough and interesting. The approach outperforms baselines in both inference time and accuracy. However, one wonders about the performance on well-researched tasks such as 3D imagery, e.g., Su & Grauman, 2017; Coors et al., 2018.
The unevenly sampled data is a nice extension showing the generality of the approach. How does the approach work when data points are connected within a fixed radius rather than via a k-nearest-neighbor rule?
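For concreteness, the two graph constructions I have in mind differ only in how the distance matrix is thresholded (purely illustrative, not the authors' code; the sampling and parameters are arbitrary):

import numpy as np

# Random points on the unit sphere, as a stand-in for an uneven sampling.
pts = np.random.randn(500, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

# k-nearest-neighbor graph: fixed vertex degree, varying neighborhood size.
k = 8
knn_neighbors = np.argsort(dist, axis=1)[:, 1:k + 1]

# Radius graph: fixed neighborhood size, varying vertex degree where the
# sampling density varies.
r = 0.3
radius_adj = (dist < r) & (dist > 0)
print(radius_adj.sum(axis=1).min(), radius_adj.sum(axis=1).max())

With uneven sampling, the radius construction gives highly variable vertex degrees, and I am curious how that affects the graph Laplacian and the equivariance analysis.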
Minor:
- A figure detailing the parameters and setup for Theorem 3.1 and Figure 2 would be useful.
- The statement on the dispersion of the sampling sequence states “the smallest ball in \R^3 containing \sigma_i”, but I believe it should be “containing only \sigma_i”. |
iclr_2020_Skl8EkSFDr | Deep learning's success has led to larger and larger models to handle more and more complex tasks; trained models can contain millions of parameters. These large models are compute-and memory-intensive, which makes it a challenge to deploy them with minimized latency, throughput, and storage requirements. Some model compression methods have been successfully applied on image classification and detection or language models, but there has been very little work compressing generative adversarial networks (GANs) performing complex tasks. In this paper, we show that a standard model compression technique, weight pruning, cannot be applied to GANs using existing methods. We then develop a selfsupervised compression technique which uses the trained discriminator to supervise the training of a compressed generator. We show that this framework has a compelling performance to high degrees of sparsity, can be easily applied to new tasks and models, and enables meaningful comparisons between different pruning granularities. | In this paper, the authors tackle the task of compressing a network. While there
are many effective solutions so far for regular computer vision tasks, as they demonstrate,
they fail catastrophically when applied to generative adversarial networks(GANs).
They propose a modification to the classic distillation method, where a
"student" network tries to imitate the uncompressed one under the supervision of
a fully converged discriminator network. They perform evaluation on multiple
tasks from image synthesis to super-resolution. They also study the influence
of the compression factor on the quality of the generated images.
The task is well motivated and situated in the related literature. The first
section is very thorough and extremely efficient at describing the failure modes
of existing methods. On one hand, the results demonstrated in the evaluation are
compelling; on the other hand, the compression factor is only 50%, which is much
lower than seen in related work. However, as shown in Section 3, the task
may be much harder for GANs than regular models, so I still consider it a
sizeable contribution.
There are a couple of points that require clarification. I personally found the
description of the method (Section 4) rather confusing. It is clear
what "discriminative loss" is as it is the one used in every GAN.
Unfortunately, I could not understand what "generative loss" means in the general
case. An example is given for StarGAN in equation (7) and I have a rough idea of
what to choose for Style Transfer, Domain Translation, Super Resolution and
Image translation. However, it is unclear to me what to use in the case of
image synthesis. The experiments clearly show that it is possible so I think
it is necessary to show how this framework is concretely applied to each task at
hand.
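For what it is worth, my current reading of the objective is roughly the following; the L1 reconstruction term and the weighting are my own guesses, and for conditional tasks the "generative loss" would presumably be the task-specific term such as equation (7):

import torch
import torch.nn.functional as F

def compression_step_loss(g_small, g_dense, d_frozen, z, lam=1.0):
    # g_dense and d_frozen are the converged generator/discriminator, kept fixed;
    # only the compressed generator g_small is trained with this loss.
    with torch.no_grad():
        target = g_dense(z)                 # output of the dense generator
    fake = g_small(z)
    gen_loss = F.l1_loss(fake, target)      # "generative" term: imitate the dense generator
    disc_loss = -d_frozen(fake).mean()      # "discriminative" term: fool the frozen discriminator
    return gen_loss + lam * disc_loss

If that is roughly right, the open question remains which reconstruction/"generative" term, if any, is used for pure image synthesis, which is why I would like each task's concrete instantiation spelled out.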
During training, the discriminator only ever saw pictures from the true distribution
and the distribution generated by the generator (at each of its training steps).
If I understood the framework properly, here, the compressed generator is trained
from a random initialization. The distribution it outputs is therefore
completely unknown and potentially non-overlapping with either the true
distribution or the dense generator's. In that case it is hard to predict what
the discriminator would do on completely out-of-distribution samples. It seems
reasonable to conjecture that it might consider them "true" because it was never
trained on them. Could you provide an explanation of why this is not a problem in
practice? Do you have to try multiple initializations? Is the generative loss
enough to force the compressed generator to match the support of the
distribution of the dense generator?
I think this paper is novel, tackles a hard task and presents compelling results
(albeit using very mild compression ratios). It should be accepted if some
clarifications are made in section 3.
Minor Remarks:
- Figures 2, 9, 42 and 43 are unreadable when printed with a regular color office
printer.
- It is unclear why it would take an extra 10% of the original number of epochs to train the compressed network. Why couldn't it be faster, or much longer?
iclr_2020_BJl8ZlHFwr | Generalized zero-shot learning (GZSL) is the task of predicting a test image from seen or unseen classes using pre-defined class-attributes and images from the seen classes. Typical ZSL models assign the class corresponding to the most relevant attribute as the predicted label of the test image based on the learned relation between the attribute and the image. However, this relation-based approach presents a difficulty: many of the test images are predicted as biased to the seen domain, i.e., the domain bias problem. Recently, many methods have addressed this difficulty using a synthesis-based approach that, however, requires generation of large amounts of high-quality unseen images after training and the additional training of classifier given them. Therefore, for this study, we aim at alleviating this difficulty in the manner of the relation-based approach. First, we consider the requirements for good performance in a ZSL setting and introduce a new model based on a variational autoencoder that learns to embed attributes and images into the shared representation space which satisfies those requirements. Next, we assume that the domain bias problem in GZSL derives from a situation in which embedding of the unseen domain overlaps that of the seen one. We introduce a discriminator that distinguishes domains in a shared space and learns jointly with the above embedding model to prevent this situation. After training, we can obtain prior knowledge from the discriminator of which domain is more likely to be embedded anywhere in the shared space. We propose combination of this knowledge and the relationbased classification on the embedded shared space as a mixture model to compensate class prediction. Experimentally obtained results confirm that the proposed method significantly improves the domain bias problem in relation-based settings and achieves almost equal accuracy to that of high-cost synthesis-based methods. | Summary:
This paper proposes a relation-based ZSL model which can effectively alleviate the domain bias problem. To this end, the paper first claims that a good relation-based ZSL model should consider two requirements -- modality invariance and class separability -- and designs a Modality-invariant and Class-separable Multimodal VAE (MCMVAE) based on VAEs to meet these two requirements. Next, the paper hypothesizes that the domain bias problem is due to the overlap between seen and unseen classes in the shared space, and explicitly introduces a discriminator to separate the two domains. The paper performs experiments on ZSL benchmark datasets and shows that the proposed method outperforms other relation-based methods. In addition, the experimental results demonstrate that the domain discriminator, which can be applied to other models, is effective in reducing domain bias.
+Strengths:
1. Clear writing logic. The author clearly describes how the final loss of the method is obtained step by step and its relationship with existing methods.
2. The version without the domain discriminator (i.e. MCMVAE) is similar to PSE and CADA-VAE, as the author acknowledges. However, the domain discriminator has certain novelty and can be applied to other methods. The overlap between seen and unseen classes is an important problem (named the domain bias problem by the author), and the addition of the domain discriminator to distinguish whether a sample is from seen or unseen classes is reasonable, as it can provide better class separability (between seen and unseen classes).
-Weaknesses:
1. Although the author claims that the proposed method is a relation-based method, it is strange that the proposed method is called xxVAE but in Table 2 it doesn't fall into the synthesis-based methods (as CVAE-ZSL and CADA-VAE do). Although it is derived from a VAE, the current method doesn't really seem to be a VAE any more (some of the regularizations of the VAE are relaxed). Also, are the two terms -- relation-based and synthesis-based -- first proposed by the author? Is there a clear boundary between these two groups of methods?
2. It is recommended that an additional figure that depicts the framework is added (similar to Figure 2 in CADA-VAE) to promote better understanding. Currently, the method part only contains formulas with many parameters, making it difficult to grasp the idea of the whole framework at first glance.
3. The novelty of this paper is somewhat limited, and it misses some relevant works, e.g., [r1, r2]. [r1] learns a latent space where the compactness within a class and the separateness between classes are considered. [r2] uses a two-stage prediction for GZSL.
[r1] Jiang et al. Learning Discriminative Latent Attributes for Zero-Shot Classification. In IEEE ICCV 2017.
[r2] Zhang et al. Model Selection for Generalized Zero-shot Learning. In arXiv 2018.
4. It is questionable whether the seen and unseen classes can be separated (i.e., whether a two-stage process is correct). The key to ZSL is knowledge transfer, and its basis is that seen and unseen classes are related [r3]. If they are separated, can one use the model trained on seen classes to recognize the unseen classes? This is quite problematic. Besides, Table 2 lacks necessary comparisons with recent relation-based approaches, e.g., [r3][r4], which makes the evaluation less convincing.
[r3] Jiang et al. Transferable Contrastive Network for Generalized Zero-Shot Learning. In IEEE ICCV 2019.
[r4] Li et al. Discriminative Learning of Latent Features For Zero-Shot Recognition. In IEEE CVPR 2018.
5. Some unclear/incorrect descriptions of the method:
5.1) The formulation of GZSL is incorrect: Y = union(y_s, y_u), not intersection(y_s, y_u).
5.2) How is the class separation formulated in the framework?
5.3) In Sec. 3.2, why can the log-likelihood of the generative models be obtained from the L1 loss?
Minor issues:
1. Better to use vector graphics for a clearer view (especially for Figures 3 and 4).
2. Incomplete reference: for Probabilistic semantic embedding (PSE), the reference should include the conference information.
3. Grammar and spelling mistakes:
[1] Content in Figure 2 (not caption): unseen class -> unseen classes
[2] Last line in 4.1: MCVAE-D -> MCMVAE-D
[3] Last paragraph in 4.2: close -> stay close
[4] Last model name in Table 1: MCMVAE -> MCMVAE-D
4. The color bar for the contours at the rightmost of Figure 3 is not clear (it is not the standard way to draw a color bar; better to refer to how a color bar is usually drawn).
5. If possible, better to reduce the main text to 8 pages as recommended by the submission instructions (e.g., some content of the method part could be moved to the appendix).
iclr_2020_HkldyTNYwH | Current generative models like generative adversarial networks (GANs) and variational autoencoders (VAEs) have attracted huge attention due to its capability to generate visual realistic images. However, most of the existing models suffer from the mode collapse or mode mixture problems. In this work, we give a theoretic explanation of the both problems by Figalli's regularity theory of optimal transportation maps. Basically, the generator compute the transportation maps between the white noise distributions and the data distributions, which are in general discontinuous. However, deep neural networks (DNNs) can only represent continuous maps. This intrinsic conflict induces mode collapse and mode mixture. In order to tackle the both problems, we explicitly separate the manifold embedding and the optimal transportation; the first part is carried out using an autoencoder (AE) to map the images onto the latent space; the second part is accomplished using a GPU-based convex optimization to find the discontinuous transportation maps. Composing the extended optimal transport (OT) map and the decoder, we can finally generate new images from the white noise. This AE-OT model avoids representing discontinuous maps by DNNs, therefore effectively prevents mode collapse and mode mixture. | General Comments: The generator in Generative Adversarial Networks (GANS) computes an optimal transportation from the noise distribution to the data distribution. However, such maps are in general discontinuous. Since deep neural networks can only represent continuous maps, this brings two problems: mode collapse and mode mixture. This paper approaches both problems using Figalli's regularity theory. They separate the manifold embedding (here an autoencoder maps input data to a latent space) from the optimal transportation (this map is found by convex optimization). Composing these two steps yields the proposed method. Their method basically avoids representing discontinuous maps by the generator. Empirically, the proposed method performs similar or better than state-of-the-art.
I think the idea of the paper is nice, and an interesting perspective on GANs is presented. A new method is proposed. The numerical contributions are certainly significant. Therefore, I believe the paper deserves publication.
Nevertheless, I have some comments below.
1) Although this paper brings a new perspective, based on optimal transport theory, as far as I can understand this paper does not establish formal new results. Thus I think some strong claims about providing deep theoretical explanation should be more moderate. In essence, it seems that the paper verifies *numerically* (in section B.3) that Figalli's theorem (stated in Appendix B) holds in this context.
2) This is just a suggestion. I think in some parts a lighter notation and a more intuitive explanation could help.
3) After Eq. (5) in the Appendix the authors mention Newton's method, and Thm 3 is also specific to Newton's method. Then they mention that *Gradient Descent* is used (and in the main part of the paper they mentioned Adam). This is confusing. All these algorithms are different, and Newton's method does not imply convergence results for gradient descent. I don't see how Thm 3 is relevant.
4) This is a simple doubt. To avoid non-differentiability of the gradient, the OT step computes the Brenier potential and is able to locate the singularities. I wonder if using a simpler approach through optimization for nonsmooth problems (such as Moreau envelopes or proximal methods) could resolve this issue? If not, why not?
5) Some Minor comments:
1. Define OT in the abstract (Optimal Transportation?)
2. What is AE? (also not defined; Autoencoder?)
3. There are lots of typos throughout the text, such as missing "the", "a", etc.,
and a couple of misspelled words. I suggest the authors proofread the draft
more carefully.
4. p. 4 ... what is a "PL convex function"? PL is not defined.
iclr_2020_SyxTZ1HYwB | Optimal sensor placement achieves the minimal cost of sensors while obtaining the prespecified objectives. In this work, we propose a framework for sensor placement to maximize the information gain called Two-step Uncertainty Network(TUN). TUN encodes an arbitrary number of measurements, models the conditional distribution of high dimensional data, and estimates the task-specific information gain at un-observed locations. Experiments on the synthetic data show that TUN outperforms the random sampling strategy and Gaussian Process-based strategy consistently. | This paper describes a sensor placement strategy based on information gain on an unknown quantity of interest, which already exists in the active learning literature. As is well-known in the literature, this is equivalent to minimizing the expected remaining entropy. What the authors have done differently is to consider the use of neural nets (as opposed to the widely-used Gaussian process) as the learning models in this sensor placement problem, specifically to (a) approximate the expectation using a set of samples generated from a generator neural net and to (b) estimate the probability term in the entropy by a deterministic/inspector neural net. The authors have performed some simple synthetic experiments to elucidate the behavior and performance of their proposed strategy.
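For concreteness, the selection rule as I understand it amounts to the following Monte Carlo scheme; the generator, inspector, and candidate locations below are placeholders for the paper's networks and sensing locations, and the entropy-based score is my paraphrase of the task-specific information gain:

import torch

def next_location(generator, inspector, observed, candidates, n_samples=32):
    # Choose the candidate location whose imagined measurements are expected
    # to reduce the entropy of the inspector's class belief the most.
    def entropy(p):
        return -(p * p.clamp_min(1e-8).log()).sum(-1)

    current = entropy(inspector(observed).softmax(-1)).mean()
    scores = []
    for loc in candidates:
        gains = []
        for _ in range(n_samples):
            imagined = generator(observed, loc)      # hallucinated measurement at loc
            post = inspector(imagined).softmax(-1)   # class belief given the imagined data
            gains.append(current - entropy(post).mean())
        scores.append(torch.stack(gains).mean())
    return candidates[int(torch.stack(scores).argmax())]

My concern below is less about this selection rule itself and more about the large amount of prior training data that the two networks inside it require.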
Conventionally, the sensor placement strategy is tasked with gathering the most informative observations (given a limited sensing budget) for maximally improving the model(s) of choice (in the context of this paper, the neural networks) so as to maximize the information gain. The authors seem to have adopted a different paradigm in this paper: large training datasets are needed for the prior training of both neural nets (in the order of thousands, as reported in the experiments). This seems to defeat the original aim/objective of sensor placement, as described above. Consequently, it is not clear to me whether their proposed strategy would be general enough for use in sensor placement for a wide variety of environmental monitoring applications. Random sampling and GP-based sensor placement strategies do not face such a severe practical limitation.
The paper is also missing several important technical details, and the clarity of presentation is poor. For example,
(a) The configurations and training procedures of the generator NN G and the deterministic NN D are not sufficiently described for each experiment.
(b) What do the authors do with the new observations obtained from placing the sensors in the last experiment? Do they adopt an open-loop sensor placement strategy?
(c) The setup for the last experiment is not clear. Is it still the same object classification task? Is the GP receiving an exclusive set of 4D features that are different from the other two methods? I get the impression that the classifiers are trained a priori. For the GP classifier, isn't it the case that one should gather the most informative observations to maximally improve its classification accuracy?
Though I like the authors' motivation of the setup of the x-ray baggage scanning system in security screening, what has really been done in their experiments appears to be still quite far from this real-world setup. Furthermore, their proposed strategy has been used to gather only 1 to 4 observations. More extensive empirical evaluation with real-world datasets (inspired by realistic problem motivation) is necessary.
Fig. 2: I find it surprising that with a single observation, it is possible to generate the instance/imagined spectrum in orange that resembles the true spectrum. Similarly, with 3 observations, all 10 instances/imagined spectra can exhibit the first spike (without observations on it). Can the authors explain this phenomenon?
Minor issues
The authors need to put a space in front of all opening round brackets. Other formatting issues exist.
Equation 3: v_k is missing from the conditioned part on the left-hand side of the equation.
Page 3: to estimates?
Page 3: evidence lower bond?
Figure 2 appears on page 3 and is only referenced on page 4.
Algorithm 1: The use of subscript j in x^m_j to represent an unobserved location is easily confused with the use of subscript k in x^m_k to denote the time step.
Figure 3 captions: adapted on the measurements?
Figure 4 captions: corner feature occurred? |
iclr_2020_rkeO-lrYwr | We introduce instability analysis, a framework for assessing whether the outcome of optimizing a neural network is robust to SGD noise. It entails training two copies of a network on different random data orders. If error does not increase along the linear path between the trained parameters, we say the network is stable. Instability analysis reveals new properties of neural networks. For example, standard vision models are initially unstable but become stable early in training; from then on, the outcome of optimization is determined up to linear interpolation. We leverage instability analysis to examine iterative magnitude pruning (IMP), the procedure underlying the lottery ticket hypothesis. On small vision tasks, IMP finds sparse matching subnetworks that can train in isolation from initialization to full accuracy, but it fails to do so in more challenging settings. We find that IMP subnetworks are matching only when they are stable. In cases where IMP subnetworks are unstable at initialization, they become stable and matching early in training. We augment IMP to rewind subnetworks to their weights early in training, producing sparse subnetworks of large-scale networks, including Resnet-50 for ImageNet, that train to full accuracy. | This paper empirically examines an interesting relationship between mode connectivity and matching sparse subnetworks (lottery ticket hypothesis).
By mode connectivity, the paper refers to a specific instance where the final trained SGD solutions are connected by a linear interpolation path without loss in test accuracy. When networks trained with SGD reliably find solutions which can be linearly interpolated without loss in test accuracy despite different data ordering, the paper refers to these networks as ‘stable.’
Matching sparse subnetworks refer to subnetworks within a full dense network that matches the test accuracy of the full network when trained in isolation.
The paper introduces a novel improvement on the existing iterative magnitude pruning (IMP) technique that finds matching subnetworks by rewinding the weights to an early point in training rather than to initialization. This allowed the authors to find matching subnetworks for deeper networks and in cases where it could not be done without some intervention in the learning schedule.
The paper then finds a relationship that only when the subnetworks become stable, the subnetworks become matching subnetworks.
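For reference, the interpolation test at the centre of the paper can be sketched as follows; the error-evaluation function is a placeholder, and integer buffers (e.g., BatchNorm counters) are simply copied from the first model:

import copy
import torch

def interpolation_errors(model_a, model_b, eval_error, n_points=11):
    # Evaluate test error along the linear path between two trained copies.
    # "Stable" roughly means the error along the path does not rise above
    # the error at the two endpoints.
    errors = []
    probe = copy.deepcopy(model_a)
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    for alpha in torch.linspace(0, 1, n_points):
        mixed = {k: ((1 - alpha) * v + alpha * sd_b[k]) if v.is_floating_point() else v
                 for k, v in sd_a.items()}
        probe.load_state_dict(mixed)
        errors.append(eval_error(probe))
    return errors

Keeping this procedure in mind helps when reading my comments below about exactly which pairs of trained weights, and at which sparsity levels, the test is applied to.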
———
Although finding a connection between two seemingly distinct phenomena is novel and interesting, I would recommend a weak reject for the following two reasons:
1) The scope of the experiment is limited to a quite specific setting,
2) there are unsupported strong claims which need to be clarified.
———
1)
In the abstract the paper claims that sparse subnetworks are matching subnetworks only when they are stable, but the results are shown in a limited setting only at a very high sparsity.
They tested stability on the highest sparsity level at which there was evidence that matching subnetworks existed, but how would the result generalize to other sparsity levels?
With lower sparsity level (if weights are pruned less), is stability easier to achieve?
The paper also focused on cases where matching subnetworks were found by IMP, but matching subnetworks can also be found by other pruning methods.
As acknowledged in the limitations section, other relationships may exist between stability and matching subnetworks found by other pruning methods, or in different sparsity levels,
which could be quite different from this paper’s claim.
In order to address this concern, I think the paper needs to show how the same relationship might generalize to different sparsity levels,
or alternatively modify the claim (to what it actually shows) and highlight the significance of the connection between matching subnetworks and stability in this highly sparse subnetwork regime.
2)
As addressed above, in the Abstract and Introduction, the paper’s claims are very general about mode connectivity and sparsity, claiming in the sparse regime, “a subnetwork is matching if and only if it is stable.” However, the experiments only show it is true in a limited setting, focusing on specific pruning method and at a specific sparsity level.
Furthermore, the statement is contradicted in Footnote 7: “for the sparsity levels we studied on VGG (low), the IMP subnetwork is stable but does not quite qualify as matching“
There are also a few other areas where there are unsupported claims.
“Namely, whenever IMP finds a matching subnetwork, test error does not increase when linearly interpolating between duplicates, meaning the subnetwork is stable.”
-> Stability was tested only at one specific sparsity level, and it is not obvious it would be stable at all lower sparsity levels where IMP found matching subnetworks.
“This result extends Nagarajan & Kolter’s observation about linear interpolation beyond MNIST to matching subnetworks found by IMP at initialization on our CIFAR10 networks”
-> Nagarajan & Kolter's observation about linear interpolation was on a completely different setup: using the same duplicate network but training on disjoint subsets of data, whereas this paper uses subnetworks and trains them on the full dataset with different data orders.
Related to the first issue, I think some of these stronger claims can be modified to describe what the experiments actually show.
The relationship found between stability and matching subnetworks in the high sparsity regime is a valuable insight that I believe should be conveyed correctly in this paper.
———
I also have some minor clarification question and suggestions for improvement.
How was the sparsity level (30%) of Resnet-50 and Inception-v3 chosen in Table 1? (which was later used in Figure 5)
— In Figures 3 and 5, the y-axis "Stability(%)" is unclear, and it is not explained how this is computed. I first thought a higher stability (%) was good, but that doesn't seem to be true.
— The ordering of methods in the plots could be more consistent. In some figures VGG-19 comes first and then Resnet-20, while in others it is the other way around, which was confusing to read. (The same holds for Resnet-50 and Inception-v3.)
— There are same lines in multiple graphs, but the labeling is inconsistent, potentially confusing readers:
Figure 1: (Original Init, Standard) is the same as Figure 4: (Reset),
and Figure 1: (Random Reinit, Standard) is the same as Figure 4: (Reset, Random Reinit) |
iclr_2020_HJeTo2VFwH | Network pruning is a promising avenue for compressing deep neural networks. A typical approach to pruning starts by training a model and removing redundant parameters while minimizing the impact on what is learned. Alternatively, a recent approach shows that pruning can be done at initialization prior to training, based on a pruning criterion called connection sensitivity. However, it remains unclear exactly why pruning an untrained, randomly initialized neural network is effective. In this work, by noting connection sensitivity as a form of gradients, we formally characterize initialization conditions to ensure reliable connection sensitivity measurements, which in turn yields effective pruning results. Moreover, we analyze the signal propagation properties of the resulting pruned networks and introduce a simple, data-free method to improve their trainability. Our modifications to the existing pruning at initialization method lead to improved results on all tested network models for image classification tasks. Furthermore, we empirically study the effect of supervision for pruning and demonstrate that our signal propagation perspective, combined with unsupervised pruning, can be useful in various scenarios where pruning is applied to non-standard arbitrarily-designed architectures. | This paper analyzes how signals propagate through randomly initialized neural networks that have undergone a kind of pruning/sparsification. The pruning method utilizes a metric called 'connection sensitivity', which has been used in prior work and which measures the infinitessimal impact of turning off specific parameters. The distribution of singular values in the layer-to-layer Jacobian matrices for pruned networks becomes increasingly pathological as the depth increases. This observation motivates the concept of 'layerwise dynamical isometry' (LDI), a slight generalization of the concept of 'dynamical isometry' that has been studied in prior work. Several methods for approximately obtaining differing amounts of LDI are investigated in a series of in-depth experiments that show a strong correlation between increased signal propagation and improved trainability of sparse networks.
Although there have been numerous works studying dynamical isometry as a principle for initializing very deep networks, and many other works studying pruning methods after (and recently before) training, as far as I'm aware there has been no prior work that examines the intersection of these two directions. As such, I found the contributions of this paper to be novel and believe the results will be of interest to practitioners and theorists alike.
An important contribution of this paper is in identifying that a main difficulty in pruning networks at initialization comes from degradation of signal propagation, leading to poor or impossible training. The numerous well-thought-out experiments provide compelling evidence that the trainability of pruned networks is highly correlated with spectral measures of the networks' Jacobians.
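To make the kind of spectral measurement concrete: for a fully-connected layer, the layer-to-layer Jacobian is (up to a diagonal activation mask for ReLU) just the weight matrix, so the effect of pruning can be seen directly in its singular values. A minimal illustration (my own, not the paper's code):

import torch

torch.manual_seed(0)
w = torch.nn.init.orthogonal_(torch.empty(256, 256))   # isometric layer: all singular values equal 1
mask = (torch.rand_like(w) > 0.9).float()              # keep roughly 10% of the connections
sv_dense = torch.linalg.svdvals(w)
sv_pruned = torch.linalg.svdvals(w * mask)
print(sv_dense.min().item(), sv_dense.max().item())    # both ~1.0
print(sv_pruned.min().item(), sv_pruned.max().item())  # spread out: the isometry is destroyed

The paper's Approximate Isometry step is, as I read it, about repairing exactly this kind of degradation while respecting the pruned connectivity pattern.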
The authors take this observation a step further by introducing a method for correcting the poor conditioning that can result from pruning. They show that enforcing Approximate Isometry on weights of the pruned connectivity pattern enables the pruned models to train much faster and often achieve better performance.
Finally, the authors look at two natural extensions of their analysis to designing new high-performing architectures to situations where labels are not present. Overall, I found this paper to have a detailed and thorough experimental analysis and to present nice new perspectives on pruning and signal propagation. |
iclr_2020_S1eRbANtDB | Clustering is an important part of many modern data analysis pipelines, including network analysis and data retrieval. There are many different clustering algorithms developed by various communities, and it is often not clear which algorithm will give the best performance on a specific clustering task. Similarly, we often have multiple ways to measure distances between data points, and the best clustering performance might require a non-trivial combination of those metrics. In this work, we study data-driven algorithm selection and metric learning for clustering problems, where the goal is to simultaneously learn the best algorithm and metric for a specific application. The family of clustering algorithms we consider is parameterized linkage based procedures that includes single and complete linkage. The family of distance functions we learn over are convex combinations of base distance functions. We design efficient learning algorithms which receive samples from an application-specific distribution over clustering instances and learn a near-optimal distance and clustering algorithm from these classes. We also carry out a comprehensive empirical evaluation of our techniques showing that they can lead to significantly improved clustering performance on real-world datasets. | In this paper, the authors propose an approach to learning combinations of (instance-wise) distance metrics and (cluster-wise) merge functions to optimally cluster instances from a particular data distribution. In particular, given a set of clustering instances (each of which is a set of instances from the domain and their cluster assignment), a set of distance metrics, and a set of merge functions, the proposed approach aims to learn a convex combination of the distance metrics and merge functions to reconstruct the given clusterings.
The paper has two main contributions. First, a PAC learning type of guarantee is given on the quality of the learned clustering approach. Second, an efficient data structure for identifying the convex combinations is given. A small set of experiments suggests that, in practice, the learned combinations can outperform using single distance metrics and merge functions.
Comments
I am not an expert in this area; I had trouble following the details of the theoretical developments. However, I appreciated that intuition was given on both what the theorems and lemmas were showing as well as the main steps of the proofs.
Concerning Theorem 1, it is not exactly clear to me what the contribution is on top of [Balcan et al., 2019]. The text mentions that they already give sample complexity guarantees in what seems like the same setting (piecewise-structured cost function).
The authors point out that depth-first traversal is a good choice here due to its memory efficiency. However, in cases where the search space is a graph rather than a tree (i.e., there are multiple paths to some nodes), DFS can exponentially increase the work compared to breadth-first or other search strategies (e.g., [Edelkamp and Schroedl, 2012]). While the name suggests that the “execution tree” is, indeed, a tree, is this guaranteed to be the case, or could multiple paths lead to the same partition?
For the experimental evaluation, it seems as though there is no “test” set of clustering instances. It would be helpful to also include the performance of the learned combinations on some test clustering instances to give an idea of how generalizable the approach is to other instances within the data distribution. (Of course, the main contributions of this work are the theoretical developments, so just one or two examples would be sufficient.)
For motivation, it would be helpful to give some examples where the prerequisites of this work are actually met; that is, cases where a sufficiently large number of labeled clustering instances is available, but the generative mechanism of the clusters is not.
For context, it could be helpful to briefly mention how, if at all, the current results apply to widely-used clustering algorithms such as k-means or Gaussian mixture models.
Typos, etc.
The references are somewhat inconsistently formatted. Also, some proper nouns in titles are not capitalized (e.g., “lloyd’s families”).
“leaves correspond to” -> “leaves corresponding to”
What does the “big-Oh tilde” notation in Theorem 1 mean? |
iclr_2020_H1lXCaVKvS | We propose the technique of quasi-multitask learning (Q-MTL), a simple and easy to implement modification of standard multitask learning, in which the tasks to be modeled are identical. We illustrate it through a series of sequence labeling experiments over a diverse set of languages, that applying Q-MTL consistently increases the generalization ability of the applied models. The proposed architecture can be regarded as a new regularization technique encouraging the model to develop an internal representation of the problem at hand that is beneficial to multiple output units of the classifier at the same time. This property hampers the convergence to such internal representations which are highly specific and tailored for a classifier with a particular set of parameters. Our experiments corroborate that by relying on the proposed algorithm, we can approximate the quality of an ensemble of classifiers at a fraction of computational resources required. Additionally, our results suggest that Q-MTL handles the presence of noisy training labels better than ensembles. | This paper considers a regularization technique, derived from multi-task learning, where multiple models with some shared parameters are jointly trained to solve copies of the same task. The technique is well-motivated as an efficient alternative to ensemble learning. The method is validated for a BiLSTM NLP model, which is applied to several POS tagging and named entity recognition tasks. The power of the technique as a regularizer is also demonstrated in the case of highly noisy labels, surprisingly, even outperforming ensemble learning in this setting.
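To fix ideas, the construction as I understand it is just a shared trunk with k heads that are all trained on the same labels; a generic stand-in (not the paper's BiLSTM sequence labeller, and with arbitrary dimensions) looks roughly like this:

import torch
import torch.nn as nn
import torch.nn.functional as F

class QMTL(nn.Module):
    def __init__(self, in_dim=30, n_classes=4, k=5, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # k classification heads, each with its own hidden layer, all solving
        # the *same* task.
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                           nn.Linear(hidden, n_classes)) for _ in range(k)])

    def forward(self, x):
        h = self.shared(x)
        return [head(h) for head in self.heads]

model = QMTL()
x, y = torch.randn(16, 30), torch.randint(0, 4, (16,))
logits = model(x)
loss = sum(F.cross_entropy(l, y) for l in logits)          # every head sees the same labels
pred = torch.stack(logits).softmax(-1).mean(0).argmax(-1)  # averaged prediction (Eq. 3, as I read it)

Prediction cost therefore scales only in the heads rather than in k full models, which is where the claimed efficiency advantage over a k-member ensemble comes from.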
Despite this, my inclination is to reject the paper because of the substantial overlap with previous work and the limited scope of experiments.
My primary concern is that the proposed technique was previously introduced in [1]. This prior work is not acknowledged in the current paper. Perhaps it was overlooked because it is situated tightly in Multi-task Learning, whereas the present work is motivated mainly with respect to Ensembling.
Although the general method was introduced previously, the paper does have some key experimental differences that would be interesting to see explored further.
(1) The paper uses a hidden layer in the separate classification heads, whereas previous work only used a linear classifier. The intuition that more complex heads will yield more diverse models is clear, but it would be great to see experimental evidence that this complexity helps. The conclusion states that the computational overhead is “infinitesimal”; does increasing the complexity of the classifier trade cost for performance?
(2) This paper uses Eq. 3 to make predictions, whereas previous work found that this did not improve over simply using the best single prediction model, which makes prediction somewhat more efficient. Is there some experimental evidence that Eq. 3 leads to improvements?
(3) This paper considers the comparison to ensembling, whereas previous work only considered comparisons to single task and standard multitask learning. Additional experiments showing the advantages over ensembling could make this extension a significant contribution.
(4) This paper presents a novel investigation of the regularization effects of the method, i.e., the resilience to noisy labels and the analysis of learned weight matrices. Is there a real problem where this resilience to noise will improve over ensembles, i.e., without randomly replacing labels? Such an experiment would make this point more compelling. Also, is there some underlying reason why the method outperforms ensembles in this case? Is it simply because the method is less expressive and so cannot overfit?
In effect, if the paper could clearly show that (1) or other practical extensions lead to improvements over ensembling in settings where ensembling is commonly used, or enable ensembling in settings where vanilla ensembling fails (i.e., the case of noisy labels), then it could be a substantial contribution. The current scope of the experiments is too limited to conclusively show these points. For example, the technique can be applied to any architecture, but the experiments in the paper are limited to a single architecture; and additional experiments with architectures and tasks that commonly use ensembling would make the experiments more compelling, ideally with comparisons to external results.
Other minor comments:
- It would be good to see the number of model parameters for easy comparison, especially in Table 2 with different values of k.
- It looks like the x and y axis labels are swapped in Figure 3; from the Figure it looks like STL gets higher accuracies.
- Figure 2 should say epochs instead of iterations.
[1] Meyerson, E. & Miikkulainen R. “Pseudo-task Augmentation: From Deep Multitask Learning to Intratask Sharing---and Back”, ICML 2018. |
iclr_2020_rylMgCNYvS | While counter machines have received little attention in theoretical computer science since the 1960s, they have recently achieved a newfound relevance to the field of natural language processing (NLP). Recent work has suggested that some strong-performing recurrent neural networks utilize their memory as counters. Thus, one potential way to understand the sucess of these networks is to revisit the theory of counter computation. Therefore, we choose to study the abilities of realtime counter machines as formal grammars. We first show that several variants of the counter machine converge to express the same class of formal languages. We also prove that counter languages are closed under complement, union, intersection, and many other common set operations. Next, we show that counter machines cannot evaluate boolean expressions, even though they can weakly validate their syntax. This has implications for the interpretability and evaluation of neural network systems: successfully matching syntactic patterns does not guarantee that a counter-like model accurately represents underlying semantic structures. Finally, we consider the question of whether counter languages are semilinear. This work makes general contributions to the theory of formal languages that are of particular interest for the interpretability of recurrent neural networks. | Summary
-------
The authors investigate (subclasses of) generalized counter machines with respect to their weak generative capacity, their ability to represent structure, and several closure properties. This is motivated by recent indications that LSTMs have comparable expressivity to counter machines, so that the formal properties of these machines might provide indirect insights into the linguistic suitability of LSTMs.
Evaluation
----------
I also reviewed this paper for SCiL a few months ago.
While I had major reservations back then, I am happy to provide a more positive evaluation this time as the authors have done some revisions that clear up many points of confusion.
I have to add two caveats, though.
First, I am a bit disheartened that the authors chose not to adopt many of the excellent changes suggested by another SCiL reviewer (who went way beyond the call of duty with their multi-page review).
Second, I did not have sufficient time to check all proofs for their correctness.
In many cases the strategies strike me as intuitively sound, but my intuition tends to miss edge cases.
Nonetheless, I think that this paper, albeit a bit of a gamble, would make for an interesting addition to the program.
1) Weakness: Link to neural networks still unclear
The central weakness of the paper is still the link between neural networks and counter automata.
Based on what is said in the paper, this is merely a conjecture at this point, not a well-established fact.
Without this link, the value of the paper is unclear.
If, however, this conjecture should turn out to be true, the paper would mark a very strong starting point for further exploration.
This makes it a gamble worth taking.
2) Strong results, but lack of examples
The results are not trivial and provide deep insights into the inner workings of counter machines.
In particular the fact that counter machines cannot correctly represent Boolean expressions reveals key limitations on their representational power.
The semilinearity result is less impressive because of how limited the machines are that it applies to, and I'm not sure that the proof provides a good basis for generalization to more complex machines.
The authors might consider removing this part to clear some space for examples, which are sorely needed.
The formalism is abstract and unfamiliar to most readers, and a few concrete examples would greatly strengthen the readers' intuition.
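To give a sense of the kind of example I have in mind: even a minimal worked construction, say a two-counter real-time machine for a^n b^n c^n (which comes up again below), would help readers who have never seen the formalism. A rough sketch, glossing over the exact acceptance condition and written in ordinary code rather than the paper's notation:

    def accepts_anbncn(s):
        # c1 tracks #a - #b, c2 tracks #a - #c; the finite-state control
        # enforces the a* b* c* shape; accept iff both counters end at zero
        state, c1, c2 = "a", 0, 0
        for ch in s:
            if state == "a" and ch == "a":
                c1 += 1; c2 += 1
            elif state in ("a", "b") and ch == "b":
                state = "b"; c1 -= 1
            elif state in ("a", "b", "c") and ch == "c":
                state = "c"; c2 -= 1
            else:
                return False
        return c1 == 0 and c2 == 0

Even two or three examples at this level of concreteness would make the paper far more accessible.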
3) No investigation of linguistically important string languages
As the authors make claims about linguistic adequacy, it is surprising that there is no discussion of TALs, MCFLs or PMCFLs.
The grammar formalism of GPSG was abandoned because it was limited to context-free languages and could not handle those more complex language classes.
So if counter machines fail here, the issue of their linguistic adequacy is already decided without further probing semilinearity or representational power.
As far as I can tell, real-time counter machines cannot generate the PMCFL a^{2^n}, which is an abstract model of unbounded copying constructions in natural language (see Radzinski on Chinese number names, Michaelis & Kracht on Old Georgian case stacking, and Kobele on Yoruba).
Nor is it obvious to me that counter machines can handle the copy language {ww | w \in \Sigma^*}, a model of crossing dependencies, although they can handle a^n b^n c^n (a TAL).
It should also be possible to generate the linguistically undesirable MIX language, which is a 2-MCFL but not a TAL.
Minor comments
--------------
- As noted in my SCiL review, your definitions still differ from those of Fischer et al. 1968. What is the reason for this?
- Theorem 3.1: \subsetneq would be clearer than \subset
- p4, typo: the the
- Proof of Theorem 3.2: Unless I misunderstand your modulo construction, your ICL only has resolution up to mod n. For instance, with mod 2 it can distinguish 2 from 3, but not 2 from 4. The CL can do that. Don't you need a second counter c_i' for each c_i, then, to keep track of how often you have wrapped around modulo n in c_i? That would still be incremental as you can never wrap around by more than 1 in any given update.
- Sec 6.1: in all those definitions, if should be iff
References
----------
@ARTICLE{Radzinski91,
author = {Radzinski, Daniel},
title = {Chinese Number Names, Tree Adjoining Languages, and Mild Context
Sensitivity},
year = {1991},
journal = {Computational Linguistics},
volume = {17},
pages = {277--300},
url = {http://ucrel.lancs.ac.uk/acl/J/J91/J91-3002.pdf}
}
@INPROCEEDINGS{MichaelisKracht97,
author = {Michaelis, Jens and Kracht, Marcus},
title = {Semilinearity as a Syntactic Invariant},
year = {1997},
booktitle = {Logical Aspects of Computational Linguistics},
pages = {329--345},
editor = {Retor{\'e}, Christian},
volume = {1328},
series = {Lecture Notes in Artificial Intelligence},
publisher = {Springer},
doi = {10.1007/BFb0052165},
url = {http://dx.doi.org/10.1007/BFb0052165}
}
@PHDTHESIS{Kobele06,
author = {Kobele, Gregory M.},
title = {Generating Copies: {A}n Investigation into Structural Identity in
Language and Grammar},
year = {2006},
school = {UCLA},
url = {http://home.uchicago.edu/~gkobele/files/Kobele06GeneratingCopies.pdf}
} |
iclr_2020_Skx6WaEYPH | In this paper, we study the adversarial attack and defence problem in deep learning from the perspective of Fourier analysis. We first explicitly compute the Fourier transform of deep ReLU neural networks and show that there exist decaying but non-zero high frequency components in the Fourier spectrum of neural networks. We then demonstrate that the vulnerability of neural networks towards adversarial samples can be attributed to these insignificant but non-zero high frequency components. Based on this analysis, we propose to use a simple post-averaging technique to smooth out these high frequency components to improve the robustness of neural networks against adversarial attacks. Experimental results on the ImageNet and the CIFAR-10 datasets have shown that our proposed method is universally effective to defend many existing adversarial attacking methods proposed in the literature, including FGSM, PGD, DeepFool and C&W attacks. Our post-averaging method is simple since it does not require any re-training, and meanwhile it can successfully defend over 80-96% of the adversarial samples generated by these methods without introducing significant performance degradation (less than 2%) on the original clean images. | The paper proposes an approach for improving robustness of already trained artificial neural networks with relu activation functions. The main motivation comes from signal processing where robustness is typically obtained via averaging moduli of Fourier coefficients over some frequency band (e.g., mel-frequency coefficients and deep scattering spectrum are based on this principle). The strategy amounts to sampling several random direction vectors in a ball of constant radius centered at a training example and averaging their predictions. The empirical estimate of the expected predictor value over the ball centered at a training example is used as its hypothesis value.
The approach is introduced by first expressing an artificial neural network with relu activations as a piecewise linear function (Definition 2.1, Lemma 2.2, Theorem 2.3). The paper then makes an observation that the instance space can be further sub-divided such that the network can be written as a sum of piecewise functions defined over regions given by positive linear combinations of linearly independent vectors (Definition 2.4, Theorem 2.5). A linear transformation of the instance space then allows for writing that function in canonical basis and computing its Fourier transform (Lemma 2.6, Theorem 2.7). The paper then makes an observation that the inverse of the linear transform used for making a change of basis (mapping to the canonical basis) can introduce instabilities to piecewise linear components defined over small regions (the inverse matrix appears in the Fourier transform). To address this instability, the paper relies on prior work by Jiang et al. (1999, 2003) and assigns the expected value over some ball centered at a training example as its prediction (which should be equivalent to performing averaging over bands in the Fourier domain). The integral is analytically intractable and, thus, the authors do an empirical estimate by averaging values in different random directions around the example. The experiments show that the method does not exhibit any serious instability with respect to the number points selected in that way.
The strategy does not require any additional training of the network and can be easily applied to already trained models with relu activations.
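For concreteness, the post-averaging defence as I understand it amounts to something like the following at test time (a minimal sketch; the sampling scheme, radius and number of samples here are my placeholders, not the paper's exact choices):

    import torch

    def post_average_predict(model, x, radius=0.1, n_samples=16):
        # average the softmax outputs over random points around x;
        # for simplicity, points are drawn on a sphere of radius `radius`
        preds = []
        for _ in range(n_samples):
            d = torch.randn_like(x)
            d = d / d.norm() * radius
            preds.append(torch.softmax(model(x + d), dim=-1))
        return torch.stack(preds).mean(dim=0)

The appeal is that this wraps around any pre-trained relu network, which matches the point above about not requiring additional training.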
In the experiments, the approach is evaluated against standard adversarial attacking mechanisms: the fast gradient sign method (Goodfellow et al., 2014), the projected gradient descent method (Kurakin et al., 2016; Madry et al., 2017), the DeepFool attack method (Moosavi-Dezfooli et al., 2016) and the L_2 method (Carlini & Wagner, 2017). I am not very familiar with the related work, but this seems to be a sufficient number of baselines to assess the effectiveness of the approach. The experiments are performed on the ImageNet and Cifar10 datasets and show that the approach can make standard ResNet models more robust to the listed attacking strategies. It might be useful to test the approach with several different architectures (e.g., multi-layer perceptrons with different numbers of hidden layers, a mix of convolutional and fully connected blocks, etc.).
In summary, the paper is well written and easy to follow. The idea itself is simple, but the intuition behind it is rather interesting and (to the best of my knowledge) provides novel insights into the workings of artificial neural networks with relu activations.
iclr_2020_B1xmOgrFPS | Despite significant advances in object detection in recent years, training effective detectors in a small data regime remains an open challenge. Labelling training data for object detection is extremely expensive, and there is a need to develop techniques that can generalize well from small amounts of labelled data. We investigate this problem of few-shot object detection, where a detector has access to only limited amounts of annotated data. Based on the recently evolving meta-learning principle, we propose a novel meta-learning framework for object detection named "Meta-RCNN", which learns the ability to perform few-shot detection via meta-learning. Specifically, Meta-RCNN learns an object detector in an episodic learning paradigm on the (meta) training data. This learning scheme helps acquire a prior which enables Meta-RCNN to do few-shot detection on novel tasks. Built on top of the Faster RCNN model, in Meta-RCNN, both the Region Proposal Network (RPN) and the object classification branch are meta-learned. The meta-trained RPN learns to provide class-specific proposals, while the object classifier learns to do few-shot classification. The novel loss objectives and learning strategy of Meta-RCNN can be trained in an end-to-end manner. We demonstrate the effectiveness of Meta-RCNN in addressing few-shot detection on Pascal VOC dataset and achieve promising results. | The paper proposes a method for few-shot object detection (FSOD), a variant of few-shot learning (FSL) where using a support set of few training images for novel categories (usually 1 or 5) not only the correct category labels are predicted on the query images, but also the object instances from the novel categories are localized and their bounding boxes are predicted. The method proposes a network architecture where the sliding window features that enter the RPN are first attenuated using support classes prototypes discovered using (a different?) RPN and found as matching to the few provided box annotations on the support images. The attenuation is by channel wise multiplication of the feature map and concatenation of the resulting feature maps (one per support class). After the RPN, ROI-pooling is applied on the concatenated feature map that is reduced using 1x1 convolution and original feature map (before attenuation) being added to the result. Following this a two FC layer classifier is fine-tuned on the support data to form the final
RCNN head of the few-shot detector. The whole network is claimed to be meta-trained end to end following COCO or ImageNet (LOC? DET?) pre-training. The method is tested on a split of PASCAL VOC07 into two sets of 10 categories, one for meta-training and the other for meta-testing. In addition, experiments are carried out on an ImageNet-LOC animals subset. In both cases, the results are compared to some baselines and some prior work.
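To check my understanding of the attenuation step, the computation I believe is being described is roughly the following (eliding the ROI pooling and the exact ordering; module and variable names are mine, not the paper's):

    import torch
    import torch.nn as nn

    class PrototypeAttenuation(nn.Module):
        def __init__(self, channels, n_support_classes):
            super().__init__()
            self.reduce = nn.Conv2d(n_support_classes * channels, channels, kernel_size=1)

        def forward(self, feat, prototypes):
            # feat: (B, C, H, W) backbone feature map; prototypes: (K, C) per-class channel vectors
            attenuated = [feat * p.view(1, -1, 1, 1) for p in prototypes]   # channel-wise gating
            stacked = torch.cat(attenuated, dim=1)                          # (B, K*C, H, W)
            return self.reduce(stacked) + feat                              # 1x1 reduction + residual add

If this is not what is meant, that only reinforces point 1 below about missing details.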
Although FSOD is an important emerging problem, and advances on it are very important, I believe there are still certain gaps in the current paper that need to be fixed before it is accepted. Specifically:
1. Some important details are missing from the description. For example, detectors are usually trained on high-resolution images (e.g. 1000 x 1000) and hence are problematic to train with large batches, yet it is claimed that the proposed model is meta-trained with batch size 5 on 5-way tasks with 10 queries each. So even in the 1-shot case, does this mean that 5 x 15 = 75 high-resolution images enter the GPU in each batch? I doubt that even in parallel mode, with 5 GPUs and 15 high-res images per GPU, this is possible for the claimed backbone architectures (ResNet-50 and VGG16).
As another example, the details of fine-tuning during meta-training seem to be left out: is the model optimized with an inner loop? Details of the RPN that is used to select the support-category prototypes are not specified: where does it come from and how is it trained (clearly, as the "main" RPN relies on attenuated features, it cannot be that one)? Some additional technical details are not very clear and hinder the reproducibility of the paper (no code seems to be promised?). In general, I suggest the authors improve the writing and clarity of the paper.
2. In the VOC07 experiment, FRCN-PN is described very vaguely, and it is claimed that it stands for RepMet (Karlinsky et al., CVPR 2019). It is not clear what it is, and its training procedure on VOC07 is not clearly described.
It is also claimed in the ImageNet experiment that the real RepMet is "more carefully designed then FRCN-PN" and has a better backbone, hence it is not clear why FRCN-PN should stand in for it.
I suggest the authors either do a direct comparison or remove the claim of comparison.
3. The RepMet paper proposed an additional benchmark on ImageNet-LOC with 5-way 1/5/10-shot episodes, and afaik it is reproducible as its code is released, so I am wondering why it was not used for evaluation, given that the authors made the effort of reproducing another ImageNet-LOC test on the same categories. It should be evaluated for a fair comparison.
4. Although they don't strictly have to compare to it, I am wondering if the authors would be willing to relate to a similar approach that was proposed for the upcoming ICCV 19:
"Meta R-CNN : Towards General Solver for Instance-level Low-shot Learning", by Yan et al. Their approach is more similar to RepMet in a sense that the meta-learning is done in the classifier head,
and better results are reported on VOC07 benchmark (and except for 1-shot, higher results are reported for the 3 and 5 shot FRCNN fine-tuning). |
iclr_2020_BkgM7xHYwH | Orthogonal recurrent neural networks address the vanishing gradient problem by parameterizing the recurrent connections using an orthogonal matrix. This class of models is particularly effective to solve tasks that require the memorization of long sequences. We propose an alternative solution based on explicit memorization using linear autoencoders for sequences. We show how a recently proposed recurrent architecture, the Linear Memory Network, composed of a nonlinear feedforward layer and a separate linear recurrence, can be used to solve hard memorization tasks. We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution. The initialization schema can be easily adapted to any recurrent architecture. We argue that this approach is superior to a random orthogonal initialization due to the autoencoder, which allows the memorization of long sequences even before training. The empirical analysis shows that our approach achieves competitive results against alternative orthogonal models, and the LSTM, on sequential MNIST, permuted MNIST and TIMIT. | Summary:
The paper proposes an autoencoder-based initialization for RNNs with linear memory. The proposed initialization is aimed at helping to maintain longer-term memory and at mitigating training instabilities such as exploding gradients (due to the linearity).
Pros:
1. The paper is well written, the motivation and methods are clearly described.
Cons:
1. The authors claimed the proposed method could help with exploding gradients when training the linear memories. It would be helpful to include some experiments indicating that this was the case (for the baseline) and that this method does indeed help with this problem.
2. The experiments on the copy task only showed results for lengths up to 500, which almost all baseline models are able to solve. I am not too sure how the proposed initialization helps in this case.
3. TIMIT is a relatively small speech recognition dataset. The task/dataset does not require long-term memorization. It is nice to see that the initialization helps in this case. However, it is still a little unclear how this experiment corresponds to the message that the authors are attempting to deliver at the end of the introduction.
4. In general, it seems that the experiments could be more carefully designed to reflect the contributions of the proposed method. Some suggestions for future edits: more analysis of the gradients, and perhaps more experiments on the stability of training (such as tracking gradients) could help.
Minor:
1. There is some confusion: on p. 2, "we can construct a simple linear recurrent model which uses the autoencoder to encode the input sequences within a single vector"; I think the authors meant encoding the input sequences into a sequence of vectors? Equations 1 and 2 suggest that there is a vector m^t per timestep (as opposed to having one for the entire sequence).
2. Although the copy task was used in (Arjovsky et al., 2015), I believe the original task was proposed in the following paper, and hence this is probably the correct one to cite here:
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 9(8):
1735–1780, 1997. |
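For reference, the copy task discussed here and in point 2 above is, up to the exact placement of the blanks and recall marker, the following synthetic problem (a quick generator sketch, with my own defaults):

    import numpy as np

    def copy_task_batch(batch, T, n_sym=8, n_copy=10, rng=np.random):
        # symbols 1..n_sym, 0 = blank, n_sym + 1 = recall marker
        seq = rng.randint(1, n_sym + 1, size=(batch, n_copy))
        x = np.concatenate([seq,
                            np.zeros((batch, T), dtype=int),
                            np.full((batch, 1), n_sym + 1),
                            np.zeros((batch, n_copy - 1), dtype=int)], axis=1)
        y = np.concatenate([np.zeros((batch, T + n_copy), dtype=int), seq], axis=1)
        return x, y

The difficulty is controlled entirely by the blank length T, which is why results only up to 500 feel limited.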
iclr_2020_rJg76kStwH | Markov Logic Networks (MLNs), which elegantly combine logic rules and probabilistic graphical models, can be used to address many knowledge graph problems. However, inference in MLN is computationally intensive, making the industrialscale application of MLN very difficult. In recent years, graph neural networks (GNNs) have emerged as efficient and effective tools for large-scale graph problems. Nevertheless, GNNs do not explicitly incorporate prior logic rules into the models, and may require many labeled examples for a target task. In this paper, we explore the combination of MLNs and GNNs, and use graph neural networks for variational inference in MLN. We propose a GNN variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model. Our extensive experiments on several benchmark datasets demonstrate that ExpressGNN leads to effective and efficient probabilistic logic reasoning. | The paper proposes to use graph neural networks (GNN) for inference in MLN. The main motivation seems to be that inference in traditional MLN is computationally inefficient. The paper is cryptic about precisely why this is the case. There is some allusion in the introduction as to grounding being exponential in the number of entities and the exponent being related to the number of variables in the clauses of the MLN but this should be more clearly stated (e.g., does inference being exponential in the number of entities hold for lifted BP?). In an effort to speed up inference, the authors propose to use GNN instead. Since GNN expressivity is limited, the authors propose to use entity specific embeddings to increase expressivity. The final ingredient is a mean-field approximation that helps break up the likelihood expression. Experiments are conducted on standard MLN benchmarks (UW-CSE, Kinship, Cora) and link prediction tasks. ExpressGNN achieves a 5-10X speedup compared to HL-MRF. On Cora HL-MRF seems to have run out of memory. On link prediction tasks, ExpressGNN seems to achieve better accuracy but this result is a bit difficult to appreciate since the ExpressGNN can't learn rules and the authors used NeuralLP to learn the rules followed by using ExpressGNN to learn parameters and inference.
Here are the various reasons that prevent me from rating the paper favorably:
- MLNs were proposed in 2006. Statistical relational learning is even older. This is not a paper where the related work section should be delegated to the appendix. The reader will want to know the state of inference and its computational complexity right at the very beginning. Otherwise, it's very difficult to read the paper and appreciate the results.
- Recently, a number of papers have tried to quantify the expressive power of GNNs. MLN is fairly general, being able to incorporate any clause in first-order logic. Does the combination with GNN result in any loss of expressivity? This question deserves an answer. If so, then the speedup isn't free and ExpressGNN would be a special case of MLN, albeit with the advantage of fast inference.
- Why doesn't the paper provide clear inference time complexities to help the reader appreciate the results? At the very least, the paper should provide clear time complexities for each of the baselines.
- There are cheaper incarnations of MLN that the authors should compare against (or provide clear reasons as to why this is not needed). Please see BoostSRL (Khot, T.; Natarajan, S.; Kersting, K.; and Shavlik, J. 2011. Learning Markov logic networks via functional gradient boosting. In ICDM) |
iclr_2020_SkxHRySFvr | Recent semi-supervised learning methods have shown to achieve comparable results to their supervised counterparts while using only a small portion of labels in image classification tasks thanks to their regularization strategies. In this paper, we take a more direct approach for semi-supervised learning and propose learning to impute the labels of unlabeled samples such that a network achieves better generalization when it is trained on these labels. We pose the problem in a learning-to-learn formulation which can easily be incorporated to the state-of-the-art semi-supervised techniques and boost their performance especially when the labels are limited. We demonstrate that our method is applicable to both classification and regression problems including image classification and facial landmark detection tasks. | This paper uses a meta-learning approach to solve semi-supervised learning. The main idea is to simulate an SGD step on the loss of the meta-validation data and see how the model will perform if the pseudo-labels of unlabelled data are perturbed. Experiments on classification and regression problems show that the proposed method can improve over existing methods. The idea itself is intriguing but the derivation and some design choice are not very well-explained.
(1) The derivation from Eq.(3) to (4) is confusing. Note that in Eq.(3), the prediction \Phi_\theta also depends on \theta in addition to the pseudo-label z. When taking a step of SGD, the second term of Eq.(3) (with unlabelled data) will always be zero if both arguments of the loss (\Phi_\theta(x) and z_\theta(x)) change simultaneously. Eq.(4) somehow only considers the gradient of unsupervised loss, then the gradient would be zero because there is no incentive to deviate from the pseudo-label z. The pseudo-code does not help much. The update from \hat{\theta}^{t} to \hat{\theta}^{t+1} has the same issue: there is no incentive for \hat{\theta}^{t} to deviate because z is exactly produced by it.
(2) For classification problems, it is natural to use cross-entropy loss for the probability vector z. Are there any specific reasons for using Gumbel-softmax? In addition, using L2 loss for probability vectors (as mentioned in Appendix A) is known to be problematic as it may create exponentially many local minima (Auer et al, 1996).
(3) The recent work of Li et al. (2019) also considers iteratively improving pseudo-labels with meta-updates so it should be discussed and compared.
(4) Experiments
- What are the sizes of the meta-validation sets in the experiments?
- Error bars in the tables and Fig.2?
- The MM results in Table 2 are noticeably worse than the original results. For example, with 250 labeled examples, MM achieved 11.08% on CIFAR-10 as reported in the original paper (and 4000 labeled examples can achieve 4.95%).
- It is said that option 2 is consistently better than option 1, which is not true for the MM baseline.
- 22500 training steps for Experiment 4 seems arbitrary. What are the candidates for the hyper-parameters?
Typos:
- In the first paragraph of Sec.2, one of the x and one of the y should be bold.
- Above Eq.(4), x^{U\in U} should be x^i \in U
- The transpose in Eq.(7) is not necessary
- It is said on page 6 that Fig.2 reports classification loss but the task is a regression problem.
Ref
- Auer, P., Herbster, M. and Warmuth, M.K., 1996. Exponentially many local minima for single neurons. In Advances in neural information processing systems (pp. 316-322).
- Li, X., Sun, Q., Liu, Y., Zheng, S., Chua, T.S. and Schiele, B., 2019. Learning to Self-Train for Semi-Supervised Few-Shot Classification. In Advances in neural information processing systems. |
iclr_2020_rkl_f6EFPS | The loss of a few neurons in a brain rarely results in any visible loss of function. However, the insight into what "few" means in this context is unclear. How many random neuron failures will it take to lead to a visible loss of function? In this paper, we address the fundamental question of the impact of the crash of a random subset of neurons on the overall computation of a neural network and the error in the output it produces. We study fault tolerance of neural networks subject to small random neuron/weight crash failures in a probabilistic setting. We give provable guarantees on the robustness of the network to these crashes. Our main contribution is a bound on the error in the output of a network under small random Bernoulli crashes proved by using a Taylor expansion in the continuous limit, where close-by neurons at a layer are similar. The failure mode we adopt in our model is characteristic of neuromorphic hardware, a promising technology to speed up artificial neural networks, as well as of biological networks. We show that our theoretical bounds can be used to compare the fault tolerance of different architectures and to design a regularizer improving the fault tolerance of a given architecture. We design an algorithm achieving fault tolerance using a reasonable number of neurons. In addition to the theoretical proof, we also provide experimental validation of our results and suggest a connection to the generalization capacity problem. | This contribution studies the impact of deletions of random neurons on prediction accuracy of trained architecture, with the application to failure analysis and the specific context of neuromorphic hardware. The manuscript shows that worst-case analysis of failure modes is NP hard and contributes a theoretical analysis of the average case impact of random perturbations with Bernouilli noise on prediction accuracy, as well as a training algorithm based on aggregation. The difficulty of tight bounds comes from the fact that with many layers a neural network can have a very large Lipschitz constant. The average case analysis is based on wide neural networks and an assumption of a form of smoothness in the values of hidden units as the width increases. The improve fitting procedure is done by adding a set of regularizing terms, including regularizing the spectral norm of the layers.
The robustness properties can be interesting to a wider community than that of neuromorphic hardware. In this sense, the manuscript provides interesting content, although I do fear that it is not framed properly. Indeed, the introduction and conclusion mention robustness as a central concern, which it is indeed, but the neuron failures are quite minor in this respect. More relevant questions would be: is the approach introduced here useful to limit the impact of adversarial examples? Does it provide good regularizations that improve generalization? Looking at their expressions, I would believe that the regularization terms provided are interesting; in particular, the regularization of the operator norm of the layers makes a lot of sense. That said, there is some empirical evidence that batch norm achieves similar effects, but with a significantly reduced cost.
Another limitation of the work is that it pushes towards very wide networks and ensemble predictions. These significantly increase the prediction cost and are often frowned upon in applications.
It seems to me that the manuscript has readability issues: the procedure introduced is quite unclear and could not be reimplemented from reading the manuscript (including the supplementary materials). Also, the results in the main part of the manuscript are presented too tersely: I do not understand where in Table 2 dropout is varied.
The contributed algorithm has many clauses to tune dynamically the behavior of the regularizations and the architecture. These are very hard to control in theory. They would need strong empirical validation on many different datasets.
It is also very costly, as it involves repeatedly training a neural network from scratch.
The manuscript discusses in several places a median aggregation, which gives robustness properties to the predictor. I must admit that I have not been able to find it in the algorithm. This worries me, because it reveals that I do not understand the approach. The beginning of Section 6.1, in the appendix, suggests details that are not understandable from the algorithm description.
Finally, a discussion of the links to dropout would be interesting: both in practice, as dropout can be seen as simulating neuron failure during training, as well as from the point of view of theory, as there has been many attempts to analyze theoretically dropout (starting with Wager NIPS 2013, but more advanced work is found in Gal ICML 2015, Helmbold JMLR 2015, Mianjy ICML 2018). |
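To make that connection concrete, the failure model under discussion is, as far as I can tell, just test-time dropout without the usual rescaling, e.g.:

    import torch

    def forward_with_crashes(layers, x, p_fail=0.01):
        # each hidden unit independently "crashes" (outputs 0) with prob p_fail
        # at inference time; unlike dropout there is no 1/(1 - p) rescaling
        h = x
        for layer in layers:
            h = torch.relu(layer(h))
            h = h * (torch.rand_like(h) > p_fail).float()
        return h

so both the practical link (dropout during training as a cheap way to prepare for crashes) and the theoretical analyses of dropout cited above seem directly relevant.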
iclr_2020_r1xNJ0NYDH | The goal of this paper is to study why typical neural networks train so fast, and how neural network architecture affects the speed of training. We introduce a simple concept called gradient confusion to help formally analyze this. When confusion is high, stochastic gradients produced by different data samples may be negatively correlated, slowing down convergence. But when gradient confusion is low, data samples interact harmoniously, and training proceeds quickly. Through novel theoretical and experimental results, we show how the neural net architecture affects gradient confusion, and thus the efficiency of training. We show that increasing the width of neural networks leads to lower gradient confusion, and thus easier model training. On the other hand, increasing the depth of neural networks has the opposite effect. Finally, we observe empirically that techniques like batch normalization and skip connections reduce gradient confusion, which helps reduce the training burden of very deep networks. | This paper introduces the concept of "gradient confusion" to explain why neural networks train fast with SGD. They also study the effects of width, depth on gradient confusion.
- The theoretical results assume that the data is sampled from a sphere and do not really give much insight into the effect of width and depth.
- There are some confounding factors in the experiments and there needs to be a better comparison to some related work.
Detailed review below:
Section 1:
- Please clarify how "gradient confusion" relates to the interpolation condition of Ma et al., 2017 and the strong growth condition of Vaswani et al.?
- If we run SGD with a constant step-size, it will bounce around the optimal point in a ball with radius that depends on the step-size. If I keep decreasing the step-size, this radius shrinks. How does gradient confusion relate to the step-size? Is it upper-bounded by a quantity that depends on the step-size and the batch-size?
Section 2:
- Definition 2.1: Why should this condition hold for "all" points w? Isn't it necessary only at w^* or in a small neighborhood around it?
- The gradient confusion parameter \eta should depend on the batch-size. Please clarify this.
- Figure 1: Previous works (Ma et al., Vaswani et al., Gunasekar, 2017) have all shown that fast convergence can be obtained using SGD with a constant step-size on over-parametrized models and explained it using interpolation. What is the additional insight from gradient confusion?
- "Suppose that there is a Lipschitz constant for the Hessian" - This is a strong assumption and a vague argument, that is confusing rather than insightful. Please justify why this is a valid assumption for neural network models.
Section 3:
- If E_i || \nabla f_i(w) ||^2 = O(\epsilon) => gradient confusion = O(\epsilon). Isn't E_i || \nabla f_i(w) ||^2 exactly the strong growth condition in Vaswani et al.? Can the gradient confusion results be directly derived from the results in that paper? Please compare. Also compare and cite "Stochastic Approximation of Smooth and Strongly Convex Functions: Beyond the O(1/T) Convergence Rate", COLT 2019.
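To spell out the step I have in mind (per-sample version, via Cauchy-Schwarz): \langle \nabla f_i(w), \nabla f_j(w) \rangle \ge - \| \nabla f_i(w) \| \, \| \nabla f_j(w) \| \ge -O(\epsilon) whenever each \| \nabla f_k(w) \|^2 = O(\epsilon), so small per-sample gradients immediately give a small gradient confusion parameter.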
Section 4:
- Please compare against the previous results that assumed the data to be sampled from a sphere.
- Thm 4.1: The theorem bounds the probability that gradient confusion holds for a given \eta. But the bounds of Section 3 are vacuous even if the theorem holds with probability one, if it only does so for a large value of \eta. There needs to be an upper bound on \eta. Please clarify this.
- Please compare against the results of this paper by Arora et al: "On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization"
- For the effect of layer width, the analysis is only for the weights at initialization and does not consider the optimization, which is what the paper claimed in the introduction. Am I missing something? Please justify this. Can the gradient confusion decrease as the optimization progresses?
Section 5:
- "We reduce the learning rate by a factor of 10" But all the theory is for a constant step-size. Please explain this discrepancy.
- In the second panel of Figure 2, why is there a sharp fall in the pairwise cosine similarities?
- In all these experiments, please explain why the batch-size and the step-size are not confounding factors.
iclr_2020_BkxGAREYwB | We propose a novel method to estimate the parameters of a collection of Hidden Markov Models (HMM), each of which corresponds to a set of known features. The observation sequence of an individual HMM is noisy and/or insufficient, making parameter estimation solely based on its corresponding observation sequence a challenging problem. The key idea is to combine the classical ExpectationMaximization (EM) algorithm with a neural network, while these two are jointly trained in an end-to-end fashion, mapping the HMM features to its parameters and effectively fusing the information across different HMMs. In order to address the numerical difficulty in computing the gradient of the EM iteration, simultaneous perturbation stochastic approximation (SPSA) is employed to approximate the gradient. We also provide a rigorous proof that the approximated gradient due to SPSA converges to the true gradient almost surely. The efficacy of the proposed method is demonstrated on synthetic data as well as a real-world e-Commerce dataset. | The authors propose to use numerical differentiation (using random perturbation) to approximate the Jacobian of a particular update (essentially equations 5~7) which plays an important role in the estimation of HMMs. To do so, the authors provide first a concise intro to HMM models (well known stuff in S2), presenting the iteration in detail, jump into their model (cryptically presented in my opinion in S3) and then propose a numerical approximation scheme using SPSA (building upon literature from the 90's, with Theo 1 being the main contribution), before moving onto experiments.
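For readers less familiar with SPSA, the estimator at the heart of the approach is, as far as I can tell, the classic two-sided Spall construction, which for a scalar-valued f looks like:

    import numpy as np

    def spsa_gradient(f, theta, c=1e-2, rng=np.random):
        # simultaneous perturbation with a Rademacher direction
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        g = (f(theta + c * delta) - f(theta - c * delta)) / (2.0 * c)
        return g / delta   # elementwise; 1/delta equals delta for +-1 entries

The paper applies the same idea to the Jacobian of the (vector-valued) EM update rather than to a scalar loss, so take this only as the flavour of the construction, not as the authors' exact estimator.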
I have found the paper poorly presented. Its general motivation stands on shaky ground (as illustrated by the authors' choice of words, see below). In terms of presentation, reminders on HMMs are welcome, but unfortunately the authors have not kept the same standard for clarity of notation in Section 3, which makes reading and understanding what the authors are doing quite difficult. Not being a specialist in this field, I have struggled a bit to understand the model itself, and the practical motivation for adding a DNN in the middle of what is otherwise an unrolled back-and-forth between k steps of EM estimation of the transition parameters and a DNN layer. Despite the complexity of the story, what the authors propose is essentially to apply a numerical approximation scheme for the Jacobians of these EM updates instead of backprop. Since this is the crux of the contribution, I feel some more numerical evidence that their approach works compared to baselines (e.g. Hinton et al 2018) is needed. For these reasons my assessment is a bit on the lower side.
- parenthesis bug in b_j(... in Eq.4
- in Eq. 5, index i appears both in numerator (as regular index) and denominator (as sum index)
- what is \Psi in Eq.8 ?
- "While HMM is arguably less prevalent in the era of deep learning": odd way to start an intro. All papers cited date back to more than 2011, 2 in 2006, all the rest in 20th century. This is particularly strange given the few citations to papers >2015 in Section 5.
- the observation sequence o_{t,1:T(u)} is "weakly" indexed by u (since T(u) is just a length)
- What is the \forall u notation below Eq. 9?
- "the number of nodes required to build the forward and backward
probability in the computation graph of an automatic differentiation engine is on the order of O(T^2). Empirically we found this leads to intractable computation cost." Since this is critical, where is this empirical evidence? This seems to be a storage problem and cannot be a complexity issue. There are ways to mitigate this problem by only storing partial information; I feel this comparison would add a lot of value to the authors' claim.
- Where is J^{(k)} (as opposed to \hat{J}^{(k)}) defined in Eq. 14?
iclr_2020_rkg-TJBFPB | Exploration in sparse reward environments remains one of the key challenges of model-free reinforcement learning (RL). Instead of solely relying on extrinsic rewards provided by the environment, many state-of-the-art methods use intrinsic rewards to encourage the agent to explore the environment. However, we show that existing methods fall short in procedurally-generated environments where an agent is unlikely to ever visit the same state more than once. We propose a novel type of intrinsic exploration bonus which rewards the agent for actions that change the agent's learned state representation. We evaluate our method on multiple challenging procedurally-generated tasks in MiniGrid, as well as on tasks used in prior curiosity-driven exploration work. Our experiments demonstrate that our approach is more sample efficient than existing exploration methods, particularly for procedurally-generated MiniGrid environments. Furthermore, we analyze the learned behavior as well as the intrinsic reward received by our agent. In contrast to previous approaches, our intrinsic reward does not diminish during the course of training and it rewards the agent substantially more for interacting with objects that it can control. | This paper proposes a new intrinsic reward method for model-free reinforcement learning agents in environments with sparse reward. The method, Impact-Driven Exploration, learns a state representation of the environment separate from the agent to be trained, based on a combied forward and inverse dynamics loss. The agent is then separately trained with a reward encouraging sequences of actions that maximally change the learned state.
Like other latent state transition models (Pathak et al. 2017), RIDE learns a state representation based on a combined forward and inverse dynamics loss. However, Pathak et al. reward the agent for taking actions that lead to a large difference between the actual next state and the predicted next state. RIDE instead rewards the agent for taking actions that lead to a large difference between the actual next state and the current state. Because rewarding one-step state differences may cause an agent to loop between two maximally-different states, the RIDE reward is augmented with a state visitation count term, which decreases the intrinsic reward for a state based on the number of times that state has been visited in the current episode.
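Concretely, my reading of the intrinsic reward is something like the following (the exact form of the count-based discount, e.g. division by the square root of the episodic count, is my assumption rather than a quote from the paper):

    import torch

    def ride_intrinsic_reward(phi, s, s_next, episode_counts):
        # phi: embedding network trained with the forward + inverse dynamics losses
        impact = torch.norm(phi(s_next) - phi(s), p=2)   # change in the learned state
        n = episode_counts[s_next]                       # visits to s_next in this episode
        return impact / (n ** 0.5)                       # count term suppresses two-state loops

where episode_counts is keyed by some hashable (e.g. discretized) version of the state.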
The experiments compare RIDE to a selection of other intrinsic reward methods in the MiniGrid, Mario, and VizDoom environments. RIDE provides improved performance on a number of tasks, and solves challenging versions of the MiniGrid tasks that are not solved by other algorithms.
Decision: Weak Accept.
The main weakness of the paper seems to be a limitation in novelty.
Previous papers such as (Pathak et al. 2017) have trained RL policies using an intrinsic reward based on learned latent states. Previous papers such as (Marino et al. 2019) have used the difference between subsequent states as an intrinsic reward for training an RL policy. It is not a large leap to combine these two ideas by training with the difference between subsequent learned states. However, this paper seems to be the first to do so.
Strengths:
The experiments section is very thorough, and the visualizations of state counts and intrinsic reward returns are insightful.
The results appear to be state of the art for RL agents on the larger MiniGrid tasks.
The paper is clearly written and easy to follow.
The Mario environment result discussed in section 6.2 is interesting in its own right, and provides some insight into previous work.
Despite the limited novelty of the IDE reward term, the experiments and analysis provide insight into the behavior of trained agents and the results seem to improve on existing methods.
Overall, the paper seems like a worthwhile contribution.
Notes:
In section 2 paragraph 4, "sintrinsic" should be "intrinsic".
In section 3, the phrase "minimizes its discounted expected return" seems like it should say "maximizes".
The explanation of IMPALA (Espeholt et al., 2018) should occur before the references to IMPALA on page 5.
Labels for the axes in figures 4 and 6 would be helpful for readability.
The motivation for augmenting the RIDE reward with an episodic count term is that the IDE loss alone would cause an agent to loop between two maximally different states.
It would be interesting to know whether this suspected behavior actually occurs in practice, and how much the episodic count term changes this behavior.
It is surprising that in the ablation in section A.5, removing the state count term does not lead to the expected behavior of looping between two states, but instead the agent converges to the same behavior as with the state count term.
Also, in Figure 9, was the OnlyEpisodicCounts ablation model subjected to the same grid search described in A.2, or was it trained with the same intrinsic reward coefficient as the other models?
Based on the values in Table 4, it seems like replacing the L2 term with 1 without changing the reward coefficient would multiply the intrinsic reward by a large value. |
iclr_2020_BygPq6VFvS | The attention mechanism is an indispensable component of any state-of-the-art neural machine translation system. However, existing attention methods are often token-based and ignore the importance of phrasal alignments, which are the backbone of phrase-based statistical machine translation. We propose a novel phrasebased attention method to model n-grams of tokens as the basic attention entities, and design multi-headed phrasal attentions within the Transformer architecture to perform token-to-token and token-to-phrase mappings. Our approach yields improvements in English-German, English-Russian and English-French translation tasks on the standard WMT'14 test set. Furthermore, our phrasal attention method shows improvements on the one-billion-word language modeling benchmark. | This paper proposes an extension of the attention module that explicitly incorporates phrase information. Using convolution, attention scores are obtained independently for each n-gram type, and then combined. Transformer models with the proposed phrase attention are evaluated on multiple translation tasks, as well as on language modelling, generally obtaining better results than by simply increasing model size.
I lean towards the acceptance of the paper. The approach is fairly well motivated, likely easy to implement, and the results are mostly convincing. However, some claims may be too strong and I had difficulty understanding some parts of the approach.
I find the idea interesting. Standard attention is unbiased to distance (but sensitive to it because of positional embeddings). Phrasal attention may be a useful learning bias, giving particular importance to nearby words.
On 3 WMT'14 translation tasks, the proposed approach leads to improvements between 0.7 and 1.8 BLEU with respect to Transformer Base. Running each model with different random seeds and presenting statistical significance results would be ideal, but such runs can be expensive given the size of the datasets. Using phrasal attention appears to be more efficient than simply increasing model size. In addition to the number of parameters, the number of FLOPs per update might also be useful to know. Phrasal attention also leads to lower perplexity on a large-scale modeling task, although I can't confidently evaluate the importance of this result.
While results in the model interpretation section are cherry-picked, they illustrate that the model can use the additional capacity provided by phrasal attention. There are also clear qualitative differences between layers.
Some equations are confusing. For example, in Eq. 5, the right-most argument of Conv_n() appears to be of dimension nxdx1, but convolutions are defined for dimension nxdxd. I would suggest going over the presentation of phrasal attention carefully (or correct me if I interpreted the notation wrongly).
Some claims made in the paper may be too strong. While there are similarities between alignment and attention, they are not necessarily interchangeable in neural models. For example, (Koehn and Knowles. Six Challenges for Neural Machine Translation) show that they can be mismatched (Fig. 9).
Moreover, while input embeddings (and arguably the last decoder hidden layer) mostly contain token-level information, intermediate representations merge information from multiple positions. As such, at a given layer, it is not guaranteed that the i^{th} vector is a representation of the i^{th} token. For example, (Voita et al. The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives) show that the mutual information between an input token and its corresponding encoder representation diminishes as depth increases. As such, neighbouring representations may not represent n-grams.
It would be appropriate to compare the proposed approach to (Hao et al. Multi-Granularity Self-Attention for Neural Machine Translation). However, this is very recent work (September 5 on ArXiv), so it would be understandable for the authors not to know about it.
Questions:
While searching for related work, I found an earlier submitted version of this paper ("Phrase-Based Attentions", submitted to ICLR 2019). The reported numbers differ from the current version. Why? |
iclr_2020_S1gEFkrtvH | The variational autoencoder, one of the generative models, defines the latent space for the data representation, and uses variational inference to infer the posterior probability. Several methods have been devised to disentangle the latent space for controlling the generative model easily. However, due to the excessive constraints, the more disentangled the latent space is, the lower quality the generative model has. A disentangled generative model would allocate a single feature of the generated data to the only single latent variable. In this paper, we propose a method to decompose the latent space into basis, and reconstruct it by linear combination of the latent bases. The proposed model called BasisVAE consists of the encoder that extracts the features of data and estimates the coefficients for linear combination of the latent bases, and the decoder that reconstructs the data with the combined latent bases. In this method, a single latent basis is subject to change in a single generative factor, and relatively invariant to the changes in other factors. It maintains the performance while relaxing the constraint for disentanglement on a basis, as we no longer need to decompose latent space on a standard basis. Experiments on the well-known benchmark datasets of MNIST, 3DFaces and CelebA demonstrate the efficacy of the proposed method, compared to other state-of-the-art methods. The proposed model not only defines the latent space to be separated by the generative factors, but also shows the better quality of the generated and reconstructed images. The disentangled representation is verified with the generated images and the simple classifier trained on the output of the encoder. | [updated rating due to supervision of $c_i$, which was not made clear enough and would require other baseline models]
This paper proposes a modification of the usual parameterization of the encoder in VAEs, to allow representing an embedding $z$ through an explicit basis $M_B$, which will be pushed to be orthogonal (and hence could correspond to a fully factorised disentangled representation). It is however possible for different samples $x$ to use different dimensions in the basis if that is beneficial (i.e. x is mapped to $z = f(x) \cdot M_B$, where f(x) = (c_1, ..., c_n) sums to 1). This stretches the usual definition of what a “disentangled representation” means, as this disentanglement is usually assumed to be globally consistent, but this is a fair extension.
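My mental model of the encoder, for what it is worth (the normalization of the coefficients via a softmax and all layer sizes are my guesses; question 2b below asks whether this is right):

    import torch
    import torch.nn as nn

    class BasisEncoder(nn.Module):
        def __init__(self, x_dim, n_basis, z_dim):
            super().__init__()
            self.f = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, n_basis))
            self.M_B = nn.Parameter(torch.randn(n_basis, z_dim))   # basis, pushed towards orthogonality

        def forward(self, x):
            c = torch.softmax(self.f(x), dim=-1)   # coefficients c_1..c_n, summing to 1
            return c @ self.M_B                    # z = f(x) . M_B

(The variance head \Sigma_f(x) used in the KL is omitted here, and the orthogonality pressure on M_B would come from a separate penalty such as L_B.)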
They show that this formulation can be expressed as a different ELBO which can be maximized as for usual VAEs.
I found this paper interesting, but I have one clarification that may modify my assessment quite strongly (hence I am tentatively putting it on the accept side). Some implementation details seem missing as well. Otherwise the presentation is fair, there are several results on different datasets which demonstrate the model's behaviour appropriately.
1. The main question I have, which may be rather trivial, is “are the c_i supervised in any way?”.
When I first read the paper, and looking at the losses in equations 9-11, I thought that this wasn’t the case (also considering this paper is about unsupervised representation learning), but some sentences and figures make this quite unclear:
a. In Section 3.2, you say “We train the encoder so that c_i = 1 and c_j = 0 if the input data has i-feature and no j-feature”. Do you?
b. How are the features in Figure 6 attached to each b_i?
I.e. how was “5_o_clock_shadow” attached to that particular image at the top-left?
If the c_i are supervised, this paper is about a completely different type of generative modeling than what it compares against (it would be more comparable to VQ-VAE or other nearest-neighbor conditional density models).
2. There are not enough details about the architecture, hyperparameters and baselines in the current version of the paper.
a. What n_x (i.e. dimensionality of the basis) do you use? How does this affect the results?
b. How exactly are f(x), \Sigma_f(x) parametrized? They mention the architecture of the “encoder” in Section 4.1, but this could be much clearer.
c. How do you train M_B? I assume they are just a fixed set of embeddings that are back-propagated through?
d. What are the details about the architecture of the baselines, and their hyperparameters? E.g. what is the beta you used for Beta-VAE?
3. The reconstructions seem only partially related to their target inputs (e.g. see Figure 4). This seems to indicate that instead of really reconstructing x, the model chooses to reconstruct “a close-by related \tilde{x}”, or even perhaps a b_i. This would make it behave closer to VQ-VAE, which explicitly does that. How related are reconstructions/samples to the b_i?
4. Could you show the distribution of c_i that the model learns, and how much they vary for several example images?
How “peaky” is this distribution for a given image (this feeds into the previous question as well)?
The promise of the proposed model is that different images pick and choose different combinations of b_i, which hopefully one should see reflected in the distributions of c_i per sample, across clusters, or across the whole dataset.
5. What happens when L_B is removed? I.e. what is the effect of removing the constraint on M_B being a basis, and instead allow it to be anything? This seems to make it closer to a continuous approximation to VQ-VAE?
6. Is Equation 10 correct? Should the KL use N(f(x) \cdot M_B, \Sigma_f(x)), as in equation 9 above?
7. Similarly, in Section 4.2.3, did you mean “c_i = 1 and c_j = 0 for i != j”?
If the model happens to be fully unsupervised, I think that these results are quite interesting and provide a good modification to the usual VAE framework; I find that having access to the M_B basis explicitly could be very valuable.
There is still an interesting philosophical discussion to be had about when one would like to obtain a “global basis” for the latent space (i.e. Figure 3 (b)), or when one would prefer more local ones. I can see clear advantages for a non-local basis, in terms of generalisation and compositionality, which your choice (i.e. Figure 3 (c) ) would prohibit.
References:
[1] VQ-VAE: Aaron van den Oord, Oriol Vinyals, Koray Kavukcuoglu, “Neural Discrete Representation Learning”, https://arxiv.org/abs/1711.00937 |
iclr_2020_BkxoglrtvH | To understand how the brain and mind develop in infancy, it is necessary to develop testable computational models. In recent years, DNNs have proven valuable as models of the adult brain. In the domain of object recognition, for example, convolutional DNNs currently provide the best way to predict the response to a novel stimulus in the ventral visual stream of the adult monkey and adult human. Given this success in modelling the mature brain, we propose them as candidate models for the learning process. We tested if learning in different DNNs yields distinct trajectories of development at the large scale that could be testable in future brain imaging experiments. First, we quantified the development of explicit object representation at each layer in the network at various points in the learning process, by freezing the convolutional weights and training an additional linear decoding layer. In two DNNs trained in a supervised way (CORnet-S and Alexnet), object representation was strongest in the top layer throughout learning. Infants, however, have only extremely impoverished access to labels, and so unsupervised learning is likely a better model for their learning process. Infants are known to identify clusters of similar-looking objects, and so we evaluated an unsupervised DNN with a clustering objective (DeepCluster). Through learning, there was a bottom-up sweep in the development of object representations, with the penultimate layer strongest at the end. These results show that different training strategies yield different layerwise trajectories of development, which could be empirically measured in infants with neuroimaging. In all three DNNs, we also tested for a relationship in the order in which infants and machines acquire visual classes. We found different patterns, which could be driven by the different visual features that DNNs and humans exploit, or by differences in the objective function used for learning. Our results show that DNNs can make distinct and testable predictions for infant development. The parallel that we are creating highlights ways in which knowledge of the infant brain and mind could guide the design of new DNNs, such as by providing insight into the selection of auxiliary tasks for unsupervised training, from the myriad available. | Summary
The paper aims to understand how object vision develops in infancy and childhood by using deep learning models. In particular, it chooses two deep nets, CORnet and DeepCluster, to measure learning. CORnet is supervised and is designed to mimic the architecture and representational geometry of the visual system. The paper quantifies the development of explicit object representations at each level of this network over the course of training by freezing the convolutional layers and training an additional linear decoding layer, and evaluates the decoding accuracy on the whole ImageNet validation set for individual visual classes. DeepCluster differs both in supervision and in the convolutional network. To isolate the effect of supervision, the paper runs a control experiment in which the convolutional network from DeepCluster (an AlexNet variant) is trained in a supervised manner. The paper then tries to draw conclusions on how learning should develop across brain regions in infants. In all the networks, it also tests for a relationship in the order in which infants and machines acquire visual classes, and finds evidence only for a counter-intuitive relationship.
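For readers less familiar with this kind of layer-wise analysis, the decoding procedure described above is essentially linear probing of frozen features. A minimal sketch of the idea (the backbone, the chosen layer, and the 1000-way output are illustrative placeholders, not the authors' code):

```python
# Minimal linear-probe sketch (PyTorch). The backbone, hooked layer and output size
# are illustrative placeholders, not the networks used in the paper.
import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.alexnet(pretrained=True)   # stand-in for CORnet-S / DeepCluster
for p in backbone.parameters():
    p.requires_grad_(False)                  # freeze the convolutional weights

feats = {}
def hook(module, inputs, output):            # grab the activations of one layer
    feats["h"] = output.flatten(1)

backbone.features[8].register_forward_hook(hook)   # an intermediate conv layer

probe, opt = None, None                      # linear decoder, built lazily
criterion = nn.CrossEntropyLoss()

def probe_step(images, labels):
    """One SGD step of the linear read-out on frozen features."""
    global probe, opt
    with torch.no_grad():
        backbone(images)
    h = feats["h"]
    if probe is None:
        probe = nn.Linear(h.shape[1], 1000)   # e.g. 1000 ImageNet classes
        opt = torch.optim.SGD(probe.parameters(), lr=0.01)
    loss = criterion(probe(h), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

A separate probe of this kind would be trained at each layer and at each point of the learning process to obtain the developmental trajectories described above.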
Limitations
The topic is extremely interesting and worth intense study. However, the approach is not convincing. CORnet may have some relevance, but it is not clear how well it models the representational geometry of the visual system. It is even less clear whether DeepCluster is relevant: why would it be related to infant learning?
The whole idea of using DNNs to infer biological learning is built on shaky ground, given how little we know about the learning mechanisms of the brain. In particular, back-propagation is not widely considered possible in biology, so the learning mechanism may be very different. What, then, is the basis for using DNNs to study infant learning?
The findings are also not very surprising and do not offer much for the community.
Given that the paper lacks rigor and substantial findings, it does not meet the bar for ICLR.
iclr_2020_BJgza6VtPB | Traditional natural language generation (NLG) models are trained using maximum likelihood estimation (MLE) which differs from the sample generation inference procedure. During training the ground truth tokens are passed to the model, however, during inference, the model instead reads its previously generated samples -a phenomenon coined exposure bias. Exposure bias was hypothesized to be a root cause of poor sample quality and thus many generative adversarial networks (GANs) were proposed as a remedy since they have identical training and inference. However, many of the ensuing GAN variants validated sample quality improvements but ignored loss of sample diversity. This work reiterates the fallacy of quality-only metrics and clearly demonstrate that the well-established technique of reducing softmax temperature can outperform GANs on a quality-only metric. Further, we establish a definitive quality-diversity evaluation procedure using temperature tuning over local and global sample metrics. Under this, we find that MLE models consistently outperform the proposed GAN variants over the whole quality-diversity space. Specifically, we find that 1) exposure bias appears to be less of an issue than the complications arising from non-differentiable, sequential GAN training; 2) MLE trained models provide a better quality/diversity tradeoff compared to their GAN counterparts, all while being easier to train, easier to cross-validate, and less computationally expensive. | This paper concerns the limitations of quality-only evaluation metrics for text generation models. Instead, a desirable evaluation metric should measure not only sample quality but also sample diversity, to expose the mode collapse problem in GAN-based generation. The authors present an interesting, but not too surprising, finding: simply tuning the temperature during beam search or sampling consistently outperforms all the other GAN/RL-based training methods for text generation models. The idea of sweeping the temperature during beam search decoding is not new in the NLP community, which limits the novelty of this paper. What's more, some of the experimental results are also not new, in the sense that the SBLEU vs. negative BLEU tradeoff curve is also shown in [1,2,3,4].
[1] Jointly measuring diversity and quality in text generation models, 2019
[2] Training language gans from scratch, 2019
[3] On accurate evaluation of gans for language generation, 2018
[4] Towards Text Generation with Adversarially Learned Neural Outlines, 2018
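For readers unfamiliar with the temperature knob at the center of the paper, here is a minimal sketch of temperature-scaled sampling from next-token logits (my own illustration, not the paper's code):

```python
# Temperature-scaled sampling from next-token logits (illustrative sketch, NumPy).
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=np.random.default_rng()):
    """Lower temperature -> sharper distribution (higher quality, lower diversity);
    higher temperature -> flatter distribution (more diversity, lower quality)."""
    z = np.asarray(logits, dtype=np.float64) / temperature
    z -= z.max()                          # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(p), p=p)

# Sweeping the temperature traces out the quality/diversity curve discussed above.
for t in [0.5, 0.8, 1.0, 1.3]:
    print(t, sample_with_temperature([2.0, 1.0, 0.1, -1.0], temperature=t))
```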
I would love to increase my score if the author could address the following comments:
(1) Do the compared methods, say the MLE models and the other GAN-based models, have a similar number of model parameters? It is not clear from the paper. Otherwise, one could use a 12/24-layer Transformer-XL to get dominant performance.
(2) Since this is an empirical study paper, it would be great if it could also include stronger SoTA models trained by MLE, such as Transformer-XL, on more challenging datasets such as Wikitext-2 or Wikitext-103. On this kind of large-vocabulary dataset, I think the RL/GAN-based training methods would easily break down and be far worse than MLE-based training.
(3) To make the empirical study more comprehensive, the authors could perhaps also evaluate with the n-gram and FED metrics.
iclr_2020_B1x9ITVYDr | We provide recovery guarantees for compressible signals that have been corrupted with noise and extend the framework introduced in Bafna et al. (2018) to defend neural networks against ℓ_0-, ℓ_2-, and ℓ_∞-norm attacks. In the case of ℓ_0-norm noise, we provide recovery guarantees for Iterative Hard Thresholding (IHT) and Basis Pursuit (BP). For ℓ_2-norm bounded noise, we provide recovery guarantees for BP, and for the case of ℓ_∞-norm bounded noise, we provide recovery guarantees for a modified version of Dantzig Selector (DS). These guarantees theoretically bolster the defense framework introduced in Bafna et al. (2018) for defending neural networks against adversarial inputs. Finally, we experimentally demonstrate the effectiveness of this defense framework against an array of ℓ_0-, ℓ_2- and ℓ_∞-norm attacks. | This paper extends the compressive recovery defense framework introduced by Bafna et al. (2018), which mainly targets ℓ_0 attacks, to ℓ_2 and ℓ_∞ attacks. They provide guarantees for several recovery algorithms under different kinds of norm-bounded noise. The difference between their work and the previous work is clearly stated.
Overall, this paper is a follow-up to Bafna et al. (2018), but with better theoretical guarantees and ample experimental results to support robustness against various popular attacks. Given its contribution and the inspiration it offers for future work, I think this paper could be accepted to ICLR 2020.
In Section 3.2 (Recovery Algorithms), the authors clearly state the three algorithms, IHT, BP, and DS, and their modifications from the standard versions, but fail to compare the differences between these algorithms. The authors' motivation for proposing these different recovery algorithms is not clear, and whether their performance varies from one another also remains unknown. Some analysis of their advantages and disadvantages under different attack conditions would be helpful.
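For concreteness, the textbook form of IHT that such guarantees usually refer to is only a few lines; below is a sketch for the standard compressed-sensing setting y = Ax + e (my own illustration with a fixed step size added for stability, not the authors' exact variant):

```python
# Iterative Hard Thresholding (IHT) sketch for y = A x + e with a k-sparse x (NumPy).
# A fixed step size is used for stability; this is an illustration, not the paper's variant.
import numpy as np

def iht(A, y, k, n_iter=200):
    mu = 1.0 / np.linalg.norm(A, 2) ** 2      # step size <= 1 / ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + mu * A.T @ (y - A @ x)        # gradient step on 0.5 * ||y - Ax||^2
        x[np.argsort(np.abs(x))[:-k]] = 0.0   # keep only the k largest-magnitude entries
    return x

rng = np.random.default_rng(0)
m, n, k = 100, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)
print("recovery error:", np.linalg.norm(iht(A, y, k) - x_true))
```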
In Section 3.4 (Comparison to Related Work), the authors mention many works aiming at defending against adversarial inputs. However, Bafna et al. (2018) is the only work here that has anything to do with compressive sensing. The paper should probably also cover related work on the theory of compressive sensing/coding beyond Bafna et al. (2018), and how it has been combined with defenses against adversarial inputs. This would help readers better understand the novelty and the advances made in this respect.
For the experiments, it would be better to have comparisons between the proposed algorithms and related methods. Also, the proposed IHT and DS are modified versions. What are the differences in the experiments?
Minor comments:
- Page 2: in the line above Equation 1, 'meaning that x_t(k)...': it could be $\hat{x}_t(k)$.
- Page 6: in the explanation of Figure 2, the adversarial inputs are in the second and fifth columns, not the fourth column.
iclr_2020_HygtiTEYvS | We consider the problem of adapting an existing policy when the environment representation changes. Upon a change of the encoding of the observations the agent can no longer make use of its policy as it cannot correctly interpret the new observations. This paper proposes Greedy State Representation Learning (GSRL) to transfer the original policy by translating the environment representation back into its original encoding. To achieve this GSRL samples observations from both the environment and a dynamics model trained from prior experience. This generates pairs of state encodings, i.e., a new representation from the environment and a (biased) old representation from the forward model, that allow us to bootstrap a neural network model for state translation. Although early translations are unsatisfactory (as expected), the agent eventually learns a valid translation as it minimizes the error between expected and observed environment dynamics. Our experiments show the efficiency of our approach and that it translates the policy in considerably less steps than it would take to retrain the policy. | The authors propose a means to adapt to new state representations during reinforcement learning. The method works by learning a translation model that translates new state representations to old state representations. The authors evaluate the method on the MountainCar environment and show that the adaptation model is more efficient than training a new policy from scratch.
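To make the mechanism concrete, the core of the method can be read as fitting a regression model g that maps new observations back to the old encoding, using targets produced by a forward dynamics model trained on prior experience. A schematic sketch (the dynamics model, dimensions, and environment interface are placeholders, not the paper's code):

```python
# Schematic sketch of learning a state-translation model g: new obs -> old encoding.
# f_dyn is a stand-in for the forward dynamics model trained on prior experience;
# all dimensions and interfaces here are placeholders, not the paper's code.
import torch
import torch.nn as nn

g = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(g.parameters(), lr=1e-3)

def translation_step(f_dyn, old_state_prev, action, new_obs):
    """old_state_prev: previous state already expressed in the old encoding;
    new_obs: the current observation in the new encoding."""
    with torch.no_grad():
        target_old = f_dyn(old_state_prev, action)   # (biased) old-encoding prediction
    pred_old = g(new_obs)
    loss = ((pred_old - target_old) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return pred_old.detach(), loss.item()            # pred_old feeds the frozen policy
```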
My concerns with this work are as follows:
- I don't find the type of changes to state representations very useful (e.g. Section 6, velocity *= 2, position /= 2) nor practical. Moreover, learning a translation model to recover the old representation seems easy given such simple perturbations. This kind of perturbation is fundamentally different from those used to motivate the problem (e.g. "a robot whose sensors break down" or "whose sensor outputs degrade", in the introduction).
- The authors only evaluate with one type of simple perturbation on one environment, hence I am skeptical regarding the generalizability of this work.
- The premise of this work is to not store old transitions (Section 1, the paragraph "the most simplistic idea is to..."), however this model does store old transitions because it uses prioritized experience replay. In this case there is no argument against using this data to train the translation model.
- A comparison that is missing from the paper is to fine-tune the existing model. I believe this is a more fair comparison in terms of sample-efficiency than training from scratch.
Other comments:
- For the title, to call the adaptation to slightly different state representations "policy adaptation" is a stretch.
- There is a lot of tangential information in the introduction on things like sample efficiency and model-free vs. model-based RL. This is distracting.
- The paper is excessively long. |
iclr_2020_HylznxrYDr | While many sentiment classification solutions report high accuracy scores in product or movie review datasets, the performance of the methods in niche domains such as finance still largely falls behind. The reason of this gap is the domain-specific language, which decreases the applicability of existing models, and lack of quality labeled data to learn the new context of positive and negative in the specific domain. Transfer learning has been shown to be successful in adapting to new domains without large training data sets. In this paper, we explore the effectiveness of NLP transfer learning in financial sentiment classification. We introduce FinBERT, a language model based on BERT, which improved the state-of-the-art performance by 14 percentage points for a financial sentiment classification task in FinancialPhrasebank dataset. | This paper describes the application of BERT to financial sentiment analysis. The authors find that when fine-tuned with in-domain data, BERT outperforms the state of the art, demonstrating that language model pre-training can transfer knowledge learned from a large unsupervised corpus to a new domain with minimal effort. Experiments are conducted to explore 1) the utility of different in-domain datasets for further pre-training; 2) strategies to avoid catastrophic forgetting; and 3) the effectiveness of fine-tuning only a subset of the full model.
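The basic recipe evaluated in the paper (pre-trained BERT, then supervised fine-tuning with only part of the encoder unfrozen) can be sketched roughly as follows with the HuggingFace transformers library; the number of unfrozen layers and the three-class label set are my illustrative assumptions, not necessarily FinBERT's exact configuration:

```python
# Rough sketch of BERT fine-tuning with partial layer freezing (transformers / PyTorch).
# The choice of unfreezing the top 2 layers and num_labels=3 (negative/neutral/positive)
# are illustrative assumptions, not necessarily FinBERT's configuration.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

# Freeze everything, then unfreeze only the top 2 encoder layers and the classifier head.
for p in model.parameters():
    p.requires_grad_(False)
for layer in model.bert.encoder.layer[-2:]:
    for p in layer.parameters():
        p.requires_grad_(True)
for p in model.classifier.parameters():
    p.requires_grad_(True)

opt = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=2e-5)

batch = tokenizer(["Profit warning sends shares tumbling."], return_tensors="pt")
labels = torch.tensor([0])
out = model(**batch, labels=labels)   # out.loss is the cross-entropy loss
out.loss.backward(); opt.step()
```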
I am in favor of rejecting this paper and my reasons are as follows:
First, this paper may lack deeper innovation, although it demonstrates a good application of BERT models in the financial domain. For example, the framework of general-domain LM pretraining, followed by in-domain LM pretraining and finally in-domain classifier fine-tuning, as well as the techniques for avoiding catastrophic forgetting, were already proposed in Howard & Ruder 2018. Therefore, I think this paper may be more suitable for other (finance) application-oriented venues.
Second, the dataset used in the evaluation is small (for example, the Financial PhraseBank test set has only around 1K examples). Thus, even though the paper is about transfer learning to domains without large data, I find that a larger test set would be needed to draw a solid conclusion.
This paper is well organized and easy to follow. It may be beneficial to clarify in a few places (if space permits):
1) Some description or statistics of the data may be helpful (e.g., average sentence length or some examples);
2) Citations to Elmo and ULMFit can be made more explicit. Authors did cite Peters 2018 and Howard 2018 at the beginning of the paper, but may want to explicitly associate them with ‘Elmo’ and ‘ULMFit’ when these two terms first occur respectively;
3) For table 2, does the ‘all data’ or ‘data with 100% agreement’ include training data (80%) or just the test data (20%)?
The difference between FinBERT(-domain) and ULMFit can be explicitly contrasted in the paper. Is the former initialized with BERT while latter with ULMFit? |
iclr_2020_S1e0ZlHYDB | Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices. We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs). PCRs deviate from previous formats by using progressive compression to convert a single dataset into multiple datasets of increasing fidelity-all without adding to the total dataset size. Empirically, we implement PCRs and evaluate them on a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ. Our results show that different tasks can tolerate different levels of compression. PCRs use an on-disk layout that enables applications to efficiently and dynamically access appropriate levels of compression at runtime. In turn, we demonstrate that PCRs can seamlessly enable a 2× speedup in training time on average over baseline formats. | Summary: This paper introduces a new storage format for image datasets for machine learning training. The core idea is to use progressive JPEG to create sequential scans of the input image, from lower resolution to higher resolution. The authors found that on some datasets, using half of the scans is already enough to reach similar accuracy but speeded up the convergence by a factor of 2.
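For intuition about the core idea: a progressive JPEG is an ordered sequence of scans delimited by Start-of-Scan (0xFFDA) markers, so a lower-fidelity version of an image can be obtained by cutting the byte stream after the first few scans. A rough illustration with Pillow follows; this only illustrates the scan structure, it is not the PCR on-disk format, and the truncation handling is an assumption on my part:

```python
# Rough illustration of progressive-JPEG scans (Pillow). This is not the PCR format;
# it only shows that a progressive JPEG is a sequence of scans that can be cut early.
import io
from PIL import Image, ImageFile

img = Image.new("RGB", (64, 64), "red")            # placeholder image
buf = io.BytesIO()
img.save(buf, format="JPEG", quality=90, progressive=True)
data = buf.getvalue()

# Offsets of Start-of-Scan markers (0xFFDA), one per progressive scan. This count is
# approximate but usually safe: byte stuffing keeps 0xFFDA out of entropy-coded data.
sos_offsets = [i for i in range(len(data) - 1) if data[i] == 0xFF and data[i + 1] == 0xDA]
print("number of scans:", len(sos_offsets))

# Keep only the first k scans and close the stream with an End-of-Image marker.
k = max(1, len(sos_offsets) // 2)
cut = sos_offsets[k] if k < len(sos_offsets) else len(data)
partial = data[:cut] + b"\xff\xd9"

ImageFile.LOAD_TRUNCATED_IMAGES = True             # tolerate the missing scans
low_fidelity = Image.open(io.BytesIO(partial)).convert("RGB")
```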
Detailed feedbacks:
- The paper presents a simple idea that directly uses the nature of JPEG compression. The paper shows that it can work well and can be potentially integrated into real machine learning dataset storage applications.
- Related work section is thorough.
- The experiments are limited to image classification, and some of the datasets are subsampled (e.g. ImageNet and CelebA). This may not represent real machine learning tasks well, and practitioners may be unsure about the reliability of the compression. The "Cars" dataset involves fine-grained classification, for which the proposed method's tolerance to compression is less clear.
- Figure 1 does not make very clear what the key advantage of the proposed method is, or what the different mechanisms are.
- Alternatively, one could subsample the pixels and store incremental subsets of those pixels. It would be good if the paper discussed this baseline.
- The data storage format is only loosely related to the main goal of the paper, which is to show that network can still train very well and even faster when receiving partial input data. Once they figured out the number of scans needed for this application, they don’t necessarily need to keep a full lossless version and can just go for a lossy version. In other words, the experiment section can be replaced by any other lossy compression by varying the compression ratio.
- In my opinion, there could be two reasons for faster convergence. 1) lowered image quality makes the data easier to learn and 2) the smaller data size allows faster reading of data from disk. The paper only shows wall-clock speed-up, but it is unclear which factor is bigger. 2) can be potentially addressed by faster disk reading such as SSD or in-memory datasets. One of the motivations is to help parallel training of dataset and it is also mentioned how non-random sampling of data can hurt training performance. It would be good to showcase how the proposed method can help in those parallel training settings.
Conclusion: This paper presents a simple and effective idea and can be potentially beneficial. However, my main concern is whether the experiments can be representative enough for large scale experiments (e.g. using non-subsampled ImageNet dataset with parallel training using SSD storage). Therefore, my overall rating is weak accept. |
iclr_2020_BJe4JJBYwS | In recent years we have witnessed tremendous progress in unpaired image-toimage translation methods, propelled by the emergence of DNNs and adversarial training strategies. However, most existing methods focus on transfer of style and appearance, rather than on shape translation. The latter task is challenging, due to its intricate non-local nature, which calls for additional supervision. We mitigate this by descending the deep layers of a pre-trained network, where the deep features contain more semantics, and applying the translation between these deep features. Specifically, we leverage VGG, which is a classification network, pre-trained on ImageNet with large-scale semantic supervision. Our translation is performed in a cascaded, deep-to-shallow, fashion, along the deep feature hierarchy: we first translate between the deepest layers that encode the higher-level semantic content of the image, proceeding to translate the shallower layers, conditioned on the deeper ones. We show that our method is able to translate between different domains, which exhibit significantly different shapes. We evaluate our method both qualitatively and quantitatively and compare it to state-of-the-art image-to-image translation methods. Our code and trained models will be made available. | * Summarize what the paper claims to do/contribute.
This paper claims to extend existing image translation works, like CycleGAN, to domain pairs that are not similar in shape. It proposes to do so by using a VGG network trained on classification (I assume on ImageNet), extracting features from the two domains, and learning 5 CycleGANs, one to translate each level of the feature hierarchy. During inference, the final image translation is done by "feature inversion" (a technique proposed in Dosovitskiy and Brox, 2016) from the final feature layer. The technique is shown on examples from a number of pairs of domains like Zebra-to-Elephant (and back), Giraffe-to-Zebra (and back), Dog-to-Cat (and back) and is compared with a number of baselines qualitatively and quantitatively with the FID score.
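Since the method hinges on a fixed, pre-trained VGG feature hierarchy, the following small sketch of extracting one feature map per level (here, after each pooling stage of torchvision's VGG19) may help make the setup concrete; the particular split into levels is my assumption, not necessarily the authors':

```python
# Sketch: extract a 5-level feature hierarchy from a pretrained VGG19 (torchvision).
# Splitting at the pooling layers is an illustrative choice, not the paper's exact split.
import torch
import torch.nn as nn
import torchvision.models as models

vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def feature_pyramid(x):
    """Return the activations after each of the five max-pooling stages."""
    feats = []
    for layer in vgg:
        x = layer(x)
        if isinstance(layer, nn.MaxPool2d):
            feats.append(x)
    return feats   # shallow-to-deep; the paper translates deep-to-shallow

levels = feature_pyramid(torch.randn(1, 3, 256, 256))
print([f.shape for f in levels])
```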
* Clearly state your decision (accept or reject) with one or two key reasons for this choice.
Weak Reject.
Major reasons:
- The problem itself, as stated in the introduction, seems ill-posed to me. One of the struggles I had while looking through the results was to understand what the images should look like, i.e. what should a zebra translated to a giraffe look like? The motivation for such a problem is not immediately clear either.
- Most of the resulting images do not seem "translated" to me. As stated in the paper (end of p.2), "one aims to transform a specific type of object without changing the background." As one can see in e.g. Fig. 1, the resulting translations are completely different images with the foreground object of the new domain in roughly similar poses. The background in most cases does not persist. What I suspect is actually happening here is that the high-level semantics from the first image are used as some sort of noise to generate new images from the new domain. One question I had, for example: could we be getting similar results if we used the VGG bottleneck as the noise vector in an InfoGAN? Since the VGG network is pretrained and used in the same way in both domains, I imagine we would be seeing something very similar (and it would definitely be preferable to tuning 10 GANs!).
* Provide supporting arguments for the reasons for the decision.
Some of the decisions made in the paper were unclear and not supported adequately. The questions (in rough order of importance) that made some of the contributions unclear to me:
- Why wasn't a final translator used for the final image, conditioned on the final \tilde{b}_1?
- Is the VGG network pretrained on ImageNet? Why wasn't another task used that could be retaining more of the relevant features? eg on semantic segmentation
- Could this be used for networks pretrained on other datasets? Presumably ImageNet has information about the animals translated in this paper. Even better, could we somehow learn these features for the domain pairs automatically somehow?
- How meaningful is the FID score really in this case?
- How were the 10 GANs tuned?
* Provide additional feedback with the aim to improve the paper. Make it clear that these points are here to help, and not necessarily part of your decision assessment.
- It is mentioned on p.4 that "clamping is potentially a harmful irreversible operation" but that harmful results were not observed. As I was reading that, I was wondering what these results would actually look like.
- On p. 6 it is mentioned that the number of images for 2 categories are reported in another paper. I think it'd take less space to actually report the number of images here.
- On p.7 it is mentioned that the number of instances is preserved, however it should be made clear that it's is perserved in some (or most if that is what was observed) of the examples. |
iclr_2020_Skld1aVtPB | This work views neural networks as data generating systems and applies anomalous pattern detection techniques on that data in order to detect when a network is processing a group of anomalous inputs. Detecting anomalies is a critical component for multiple machine learning problems including detecting the presence of adversarial noise added to inputs. More broadly, this work is a step towards giving neural networks the ability to detect groups of out-of-distribution samples. This work introduces "Subset Scanning" methods from the anomalous pattern detection domain to the task of detecting anomalous inputs to neural networks. Subset Scanning allows us to answer the question: "Which subset of inputs have largerthan-expected activations at which subset of nodes?" Framing the adversarial detection problem this way allows us to identify systematic patterns in the activation space that span multiple adversarially noised images. Such images are "weird together". Leveraging this common anomalous pattern we show increased detection power as the proportion of noised images increases in a test set. Detection power and accuracy results are provided for targeted adversarial noise added to CIFAR-10 images on a 20-layer ResNet using the Basic Iterative Method attack. | The paper is the first paper, in my knowledge, that introduces the problem of identifying anomalous (or corrupted) subset of data input to a neural network. The corrupted inputs are identified vis-a-vis a set of “clean” background set (e.g., the training/ validation data set). Experimental evaluation is performed on the problem of identifying the subset of noisy CIFAR-10 images created by adding targeted adversarial perturbations.
To achieve this, the problem is posed as that of subset detection and subset scanning approaches are adapted for the same. The activation of a node under consideration is converted to a p-value (how extreme the activation is, for that input data, w.r.t. the background set).
A goodness of fit is then defined for any subset using the Berk-Jones test statistic which is used to create an NPSS scoring function to be maximized over the set of all subsets of the product set – (set of nodes x set of input images). The authors suggest that this combinatorial problem can be addressed efficiently as NPSS has been shown before to have the linear-time subset scanning (LTSS) property (Neill 2012).
However, since the search is now over a product space and the LTSS property holds only for the individual subspaces (selection of activation nodes or of input images while the other is held fixed), the LTSS property does not trivially extend to the product space. The authors suggest an algorithm which alternately iterates between the two LTSS steps. Since this algorithm is not guaranteed to obtain the global optimum, multiple random restarts are suggested. This is the weakest part of the paper: neither a formal proof of convergence is provided, nor is the efficiency of the proposed algorithm demonstrated empirically. Also, the performance of the algorithm (accuracy) can only weakly be used to judge the goodness of the local optimum obtained, as it confounds the power of NPSS with the gap between the local and global optimum.
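To make the scoring pipeline concrete, here is a small sketch of the two ingredients described above: empirical p-values of activations against a background set, and a Berk-Jones-style score for a candidate subset of inputs. The exact NPSS form used in the paper may differ; this follows the standard N * KL(N_alpha/N || alpha) scan statistic, maximized over a grid of alpha values:

```python
# Sketch of activation p-values and a Berk-Jones-style subset score (NumPy).
# The precise NPSS/BJ form in the paper may differ from this standard formulation.
import numpy as np

def empirical_pvalues(background, observed):
    """p-value of each observed activation w.r.t. clean background activations at a node."""
    bg = np.sort(np.asarray(background))
    n_ge = len(bg) - np.searchsorted(bg, observed, side="left")   # background values >= obs
    return (n_ge + 1.0) / (len(bg) + 1.0)

def berk_jones_score(pvals, alphas=np.linspace(0.01, 0.5, 50)):
    """Reward subsets whose p-values are more extreme (smaller) than expected under H0."""
    n = len(pvals)
    best = 0.0
    for a in alphas:
        q = np.mean(pvals <= a)
        if q > a:                                  # only score an excess of small p-values
            kl = q * np.log(q / a)
            if q < 1.0:
                kl += (1.0 - q) * np.log((1.0 - q) / (1.0 - a))
            best = max(best, n * kl)
    return best

rng = np.random.default_rng(0)
background = rng.normal(size=1000)                 # activations of one node on clean data
noised = rng.normal(loc=1.5, size=20)              # suspiciously large activations
print(berk_jones_score(empirical_pvalues(background, noised)))
```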
Contributions
In summary, the main contribution of the paper is the introduction of the problem of the identification of anomalous subsets of adversarially corrupted examples on deep neural networks and the first approach to address the problem via a NPSS scoring function that (partially) satisfies the LTSS property suggesting the existence of efficient (and exact?) algorithms for solving the problem. The work has incremental novelty as it seems to be largely based on prior art (not on DNNs). Initial results look promising.
Negatives
However, I have the following concerns:
- (Clarity) Mathematical formulation for the NPSS score function and the optimization problem to be solved should be clearly stated (and not inside paragraphs of text).
- The proposed algorithm is ad hoc - an obvious extension of the standard algorithm that exploits the LTSS property to the product of two set spaces. The algorithm may not find a good optimum. No proof, analysis or study of its properties – optimality gap, convergence, and, efficiency is presented.
- At deployment, the attacks may be non-targeted, or the target may be different for different corrupted inputs. This will dilute the signature, as the set of nodes where the 'extreme values' appear may not be the same across the corrupted examples. (It is not even clear that the affected nodes coincide when the target is the same.) In this case, subset scanning may not work. This assumption should have been tested directly.
- 'Berk-Jones can be interpreted as the log-likelihood ratio for testing whether p-values are uniformly distributed on [0,1] vs. … alternate distribution'. Why should this be the test? P-values on clean, background data can be obtained directly; couldn't whatever their actual distribution is be tested against empirically, or uniformity be tested after a whitening transformation?
- Comparison with reasonable ‘baselines’ – for example, with Feinman et al. 2017? Comparison with current state of the art defenses on CIFAR10?
- Why is the BIM attack (Kurakin et al. 2016b) used? There are more powerful attacks available now?
- The NPSS scoring function is supposed to maximize over significance values \alpha. What range was used and how was it decided? In the discussion on Detection Power reported in Table 1, it is suggested that different \alpha_max will trade off precision vs recall. This has not been experimented with. |
iclr_2020_BJlbo6VtDH | Undirected neural sequence models such as BERT (Devlin et al., 2019) have received renewed interest due to their success on discriminative natural language understanding tasks such as question-answering and natural language inference. The problem of generating sequences directly from these models has received relatively little attention, in part because generating from such models departs significantly from the conventional approach of monotonic generation in directed sequence models. We investigate this problem by first proposing a generalized model of sequence generation that unifies decoding in directed and undirected models. The proposed framework models the process of generation rather than a resulting sequence, and under this framework, we derive various neural sequence models as special cases, such as autoregressive, semi-autoregressive, and refinement-based non-autoregressive models. This unification enables us to adapt decoding algorithms originally developed for directed sequence models to undirected models. We demonstrate this by evaluating various decoding strategies for a cross-lingual masked translation model (Lample and Conneau, 2019). Our experiments show that generation from undirected sequence models, under our framework, is competitive with the state of the art on WMT'14 English-German translation. We also demonstrate that the proposed approach enables constant-time translation with similar performance to linear-time translation from the same model by rescoring hypotheses with an autoregressive model. | This paper proposes a generalized framework for sequence generation that can be applied to both directed and undirected sequence models. The framework generates the final label sequence through generating a sequence of steps, where each step generates a coordinate sequence and an intermediate label sequence. This procedure is probabilistically modeled by length prediction, coordinate selection and symbol replacement. For inference, instead of the intractable naive approach based on Gibbs sampling to marginalize out all generation paths, the paper proposes a heuristic approach using length-conditioned beam search to generate the most likely final sequence. With the proposed framework, the paper shows that masked language models like BERT, even though they are undirected sequence models, can be used for sequence generation, which obtains close performance to the traditional left-to-right autoregressive models on the task of machine translation.
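As background for the decoding strategies discussed below (e.g. least2most), the core loop of mask-predict-style generation from an undirected model can be sketched as follows; predict_tokens is a random placeholder standing in for the masked language model, not the paper's code:

```python
# Sketch of mask-predict style decoding from an undirected (masked) LM.
# `predict_tokens` is a random placeholder standing in for the real model.
import numpy as np

MASK, VOCAB = -1, 100
rng = np.random.default_rng(0)

def predict_tokens(tokens):
    """Placeholder: return (token, probability) for every position given the context.
    A real implementation would run the masked LM once over the whole sequence."""
    probs = rng.random((len(tokens), VOCAB))
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1), probs.max(axis=1)

def mask_predict(length=10, n_steps=5):
    tokens = np.full(length, MASK)
    for step in range(n_steps):
        preds, conf = predict_tokens(tokens)
        tokens = preds.copy()                               # refresh every position
        n_mask = int(length * (n_steps - 1 - step) / n_steps)   # linearly decaying masks
        if n_mask > 0:
            worst = np.argsort(conf)[:n_mask]               # "least2most" confident order
            tokens[worst] = MASK                            # re-mask the least confident
    return tokens

print(mask_predict())
```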
Overall the paper has significant contributions in the following aspects:
1. It enables undirected sequence models, like BERT, to perform decoding or sequence generation directly, instead of just serving as model pre-training.
2. The proposed framework unifies directed and undirected sequence models decoding, and it can represent a few existing sequence model decoding as special cases.
3. The coordinate sequence selection function in the framework can be dependent on the intermediate label sequence. A few simple selection approaches proposed in the paper are shown to be effective. It could be further extended.
4. The analysis of the coordinate selection order is interesting and helpful for understanding the algorithm.
5. The experiment results for decoding masked language models on machine translation are promising. It also provides the comparison to recent related work on non-autoregressive approaches.
The presentation of the paper is also clear. I am leaning towards accepting the paper.
However, there are some weaknesses:
1. It should be analyzed more thoroughly why the different coordinate selection approaches perform differently in linear-time vs. constant-time decoding. Even in constant-time decoding, the conclusion varies across decoding settings: easy-first is the worst for the L->1 case but the best for the L/T case. Why is that?
2. What is the motivation for "hard-first"?
3. The setting of "least2most" with L->1 is similar to Ghazvininejad et al. 2019. But Table 4 in the appendix shows that the result in this paper is still worse (21.98 vs. 24.61, when both systems use 10 iterations without AR). Also, the gap from the AR baseline is larger than that in Ghazvininejad et al. 2019. Given that the two systems are considered similar, the paper should explain the possible reasons for these discrepancies in results.
Additional minor comments for improving the paper:
1. In the introduction, it mentions the baseline AR is (Vaswani et al. 2017), while in the experimental settings, it mentions the baseline AR is (Bahdanau et al. 2015). Please clarify which one is used.
2. In Table 1, how does T = 2L work for the "Uniform" case while the target sequence length is only T, since it is mentioned the positions are sampled without replacement. Similarly, how does T = 2L work for the "Left2Right" case? Is it just always choosing the last position when L < t <= 2L? In these two cases, it seems T > L is not needed.
3. In Table 3, the header for the 2nd column should be o_t, as defined in Section 4 - "Decoding scenarios". What is the actual value of K and K'' for the constant-time machine translation experiments in the paper?
4. "Rescoring adds minimal overhead as it is run in parallel" - it still needs to run left-to-right in sequence since it is auto-regressive. Please clarify what it means by "in parallel" here.
5. What is the range and average for the target sentence length? How is T = 20 for constant-time decoding compared to linear-time decoding in terms of speed? |
iclr_2020_H1ggKyrYwB | The knowledge that humans hold about a problem often extends far beyond a set of training data and output labels. While the success of deep learning mostly relies on supervised training, important properties cannot be inferred efficiently from end-to-end annotations alone, for example causal relations or domain-specific invariances. We present a general technique to supplement supervised training with prior knowledge expressed as relations between training instances. We illustrate the method on the task of visual question answering (VQA) to exploit various auxiliary annotations, including relations of equivalence and of logical entailment between questions. Existing methods to use these annotations, including auxiliary losses and data augmentation, cannot guarantee the strict inclusion of these relations into the model since they require a careful balancing against the end-to-end objective. Our method uses these relations to shape the embedding space of the model, and treats them as strict constraints on its learned representations. In the context of VQA, this approach brings significant improvements in accuracy and robustness, in particular over the common practice of incorporating the constraints as a soft regularizer. We also show that incorporating this type of prior knowledge with our method brings consistent improvements, independently from the amount of supervised data used. It demonstrates the value of an additional training signal that is otherwise difficult to extract from end-to-end annotations alone. | The paper argues for encoding external knowledge in the (linguistic) embedding layer of a multimodal neural network, as a set of hard constraints. The domain that the method is applied to is VQA, with various relations on the questions translated into hard constraints on the embedding space. A technique which involves distillation is used to satisfy those constraints during learning.
The question of how to encode external knowledge in neural networks is a crucial one, and the point about the limitations of end-to-end learning with supervised data is well made. Overall I feel that this is a potentially interesting paper, addressing an important question in a novel way, but I found the current version a highly frustrating read (and I read the paper carefully a number of times); in fact, so frustrating that it is hard for me to recommend acceptance in its current form. More detailed comments below.
Major comments
--
The main problem I have with the paper lies with the first part of Section 3, which is a key section describing the main method by which the constraints are satisfied during learning. This is very confusing. The need for the two-step procedure, in particular, and the importance of distillation need much more explanation, and should not be relegated to the Appendix (which reviewers are not required to read - see the call for papers). I'm not suggesting that the whole of the appendix needs moving to the body of the paper, but I would suggest perhaps 1/2 a page.
A related comment is the use of the distillation technique. This looks crucial, but I don't believe distillation is mentioned at all until the end of the related work section, and even there it comes as a bit of a surprise since there's no mention anywhere of this technique in the introduction.
I would say a little more about the distinction between the embedding space and parameter space, since you say that the external knowledge is encoded in the former and not the latter, and this is important to the overall method. Since embeddings are typically learned (or at least fine-tuned) it's not clear where the boundary is here. Another comment is that embedding space in this paper means the linguistic embedding space. Since this is ICLR and not, eg, ACL, I would make clear what you mean by embedding space.
I don't understand the diagram in Fig. 3 of the architecture, nor the explanation. What's an operation here? Is it *, or *6? I don't get why 3 is embedded by itself in the diagram, and then combined with the remainder using the MLP. Why not just run the RNN over the sequence?
Why are the training instances {3,+1...} and {4,*2,...} equivalent. I stared at this a while, and still have no idea. Also, how are these "known to be equivalent" - what's the procedure?
Minor comments including typos etc.
--
The paper has the potential to be really nicely written and well-presented. Currently it reads like it was thrown together just before the deadline (which only adds to the overall frustration as a reader).
In fig. 1 the second equivalent question example is interesting, since strictly speaking "box" and "rectangular container" are not synonyms (e.g. boxes can be round). Since strict synonymy is hard to find, does that matter? (I realise the dataset already exists and was presented elsewhere, but this might be worth a footnote).
missing (additional) right bracket after Herbert (2016)
Not sure footnote 1 needs to be a footnote. It's already been said, I think, but if it does need repeating it probably deserves to be in the body of the text.
between pairs questions
see Fig.3 -> figure 2?
see Fig.1 -> Tab. 1? (on p.5)
footnote 1 missing a right bracket
usually involve -> involves
+9]) - extraneous bracket
Fig. 4.1 -> Fig. 3? (p.6)
p.7 wastes a lot of space. In order to bring some of the appendix into the main body, I would do away with the very large bulleted list. (I don't mean lose the content - just present it more efficiently)
Remember than
Finally in Fig. 4.2 - some other figure
due of the long chains |
iclr_2020_SJeWHlSYDB | For distributions p and q with different supports, the divergence D(p||q) may not exist. We define a spread divergence D(p||q) on modified p and q and describe sufficient conditions for the existence of such a divergence. We demonstrate how to maximize the discriminatory power of a given divergence by parameterizing and learning the spread. We also give examples of using a spread divergence to train and improve implicit generative models, including linear models (Independent Components Analysis) and non-linear models (Deep Generative Networks). | The paper introduces a way to modify densities such that their supports agree and the Kullback-Leibler divergence can be computed without diverging. Proofs of concept using the spread KL divergence for ICA and deep generative models ($\delta$-VAE) are reported, based on the study of spread MLE.
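For readers new to the construction, a small worked example of the Gaussian-convolution special case (my own, not taken from the paper): spreading two point masses, whose KL divergence is undefined, yields a finite spread KL.

```latex
% Spread via convolution with Gaussian noise of variance \sigma^2:
\tilde{p}(y) = \int \mathcal{N}(y \mid x, \sigma^2)\, p(x)\, \mathrm{d}x, \qquad
\tilde{q}(y) = \int \mathcal{N}(y \mid x, \sigma^2)\, q(x)\, \mathrm{d}x .
% For p = \delta_a and q = \delta_b the spread densities are Gaussians, hence
\mathrm{KL}(\tilde{p} \,\|\, \tilde{q})
  = \mathrm{KL}\big(\mathcal{N}(a, \sigma^2) \,\|\, \mathcal{N}(b, \sigma^2)\big)
  = \frac{(a - b)^2}{2\sigma^2},
% which is finite for every \sigma > 0, whereas KL(\delta_a || \delta_b) is undefined for a \neq b.
```

This also makes explicit how the noise level \sigma trades off existence of the divergence against its discriminatory power.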
Comments:
In Sec 1, mention that f should be strictly convex at 1. Also mention the Jensen-Shannon divergence, a KL symmetrization, which is always finite and is used in GAN analysis.
In Sec 2, you can also choose to dilute the densities with a mixture: (1-\epsilon)p + \epsilon noise. Explain why spread is better than that. Does spreading introduce spurious modes? Does it change distribution sufficiency (Fisher-Neyman theorem)?
In Formula 4, there is an error: the \sigma^2 denominator is missing. See Appendix D too.
In footnote 4, page 8, a 1/2 factor is missing in front of TV (which is upper bounded by 1 and not 2).
KL is relative entropy = cross-entropy minus entropy. What about spread KL?
In general, what statistical properties are kept by using the spread (or its convolution subcase)?
Is spreading a trick that introduces a hyperparameter that can then be optimized for retaining discriminatory power, or is there some deeper statistical theory to motivate it? I think spread MLE should be further explored and detailed in other scenarios.
Spreading can be done with convolution and, in general, by Eq. 3: then what is the theoretical interpretation of doing non-convolutional spreading?
A drawback is that optimization over the spread noise hyperparameter is necessary (Fig 3b is indeed much better than Fig 3a). Are there any first principles that can guide this optimization rather than black-box optimization?
Overall, it is a nice work, but further statistical guiding principles and/or new ML applications of spread divergences/MLE would strengthen the work.
The connection, if any, with the Jensen-Shannon divergence should be stated and explored.
Minor comments:
In the abstract, state KL divergence instead of divergence, because the Jensen-Shannon divergence always exists.
Typos:
p. 6 boumd->bound
Bibliography: Cramir -> Cramer, and various upper cases missing (e.g. wasserstein -> Wasserstein)
iclr_2020_BJeKwTNFvB | We propose a model that is able to perform unsupervised physical parameter estimation of systems from video, where the differential equations governing the scene dynamics are known, but labeled states or objects are not available. Existing physical scene understanding methods require either object state supervision, or do not integrate with differentiable physics to learn interpretable system parameters and states. We address this problem through a physics-as-inverse-graphics approach that brings together vision-as-inverse-graphics and differentiable physics engines, enabling objects and explicit state and velocity representations to be discovered. This framework allows us to perform long term extrapolative video prediction, as well as vision-based model-predictive control. Our approach significantly outperforms related unsupervised methods in long-term future frame prediction of systems with interacting objects (such as ball-spring or 3-body gravitational systems), due to its ability to build dynamics into the model as an inductive bias. We further show the value of this tight vision-physics integration by demonstrating data-efficient learning of vision-actuated model-based control for a pendulum system. We also show that the controller's interpretability provides unique capabilities in goal-driven control and physical reasoning for zero-data adaptation. | This paper presents an approach for unsupervised estimation of physical parameters from video, using the physics as inverse graphics approach. The approach uses a feedforward encoder for localising object positions from which a velocity is estimated. These are fed as inputs to an (action-conditioned) physics simulator which generates future predictions of object positions. This simulator has knowledge of the system dynamics apriori, needing estimation of a few physical parameters such as gravity and spring constants. The outputs of this simulator is fed to a co-ordinate consistent decoder, a neural network that uses a Spatial Transformer to render the corresponding output image. The whole system is trained end-to-end on videos of dynamical systems, in an unsupervised manner. Results on two and three body interaction settings and an MNIST digit motion dataset show promising performance. The system is able to recover the underlying physical parameters accurately while also making consistent long-term ex predictions. Additionally, the model is used for visual MPC on a simulated cartpole task where it outperforms state of the art model-based and model-free RL baselines.
This paper is well written and clearly motivated, albeit a bit incremental in its approach. Many of the building blocks have been explored in prior work, with the major component being the co-ordinate consistent decoder. The experiments are visually simplistic and it is not obvious if the system will scale to more complex settings. A few comments:
1. A major limitation of the approach is the assumption that the equations governing the system are known. This makes it harder to generalise the system to novel tasks and more complicated settings with contact and object interactions. A potential way to overcome this could be to use ideas from prior work such as Interaction Networks where the dynamics are modelled as unary and binary interactions. While such dynamics models are black-box and not as interpretable compared to the current approach, they can easily generalise to novel tasks. Additionally, using positions and velocities as the latent state representation together with IN style transition models can be a sensible middle ground.
2. It would be great if there is an additional ablation experiment where the known equations of motion are replaced with a black-box neural network (while still retaining the position and velocity representation). This can quantify the effect of known dynamics and make the contributions of the paper (with regards to the decoder) more clear.
3. An alternative way of generating consistent object positions from the encoder is to compute a mask-weighted average of the image co-ordinates. This can be a nice way of adding additional structure to the networks that can regularise training.
4. The approach uses a 3-layer MLP for generating velocity estimates — could this not be done via finite differencing (e.g. higher-order backward differencing; see the short sketch after this list)?
5. Both the content and mask vectors are learnable but fixed for the entire task — i.e. it is not a function of the input image. This makes the approach not applicable to novel objects or even objects with minor color changes.
6. It is not clear how the translations, rotations and scale parameters for the Spatial Transformer are estimated. I presume that the positions and orientations predicted by either the encoder or physics simulator are directly used. This needs to be clarified in the main paper.
7. The paper mentions that the background masks are known when localising the objects via the encoder. If this is the case, the localisation problem becomes somewhat trivial. This should be clarified.
8. There needs to be a clear discussion on the limitations of the current approach — it does not scale to novel objects, needs to know the number of objects apriori and has not been shown to estimate object properties such as mass, friction etc.
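Regarding point 4 above, the kind of backward finite differencing I have in mind is standard (these formulas are my own illustration, not from the paper):

```latex
% Backward finite differences for velocity from positions at a fixed frame interval \Delta t:
v_t \approx \frac{x_t - x_{t-1}}{\Delta t}
\qquad \text{(first order)}, \qquad
v_t \approx \frac{3 x_t - 4 x_{t-1} + x_{t-2}}{2 \Delta t}
\qquad \text{(second order)}.
```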
Overall, the approach presents promising initial results towards an unsupervised method for modelling dynamical systems from video. There are several limitations that need to be made explicit and some additional experiments on more complicated systems and a few ablation studies can significantly improve the strength of the paper. I would suggest a borderline accept.
Typos:
1. Section 4.1, Setup: 5 values of (K, t_pred, t_ext) are given, need only 4
2. “K” is not introduced till the results section
Additional citation:
The following paper on learning physically consistent position and velocity representations for dynamical systems (and for use in visual MPC settings) should be cited:
Jonschkowski, Rico, et al. "Pves: Position-velocity encoders for unsupervised learning of structured state representations." arXiv preprint arXiv:1705.09805 (2017). |
iclr_2020_HkgsWxrtPB | We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph which describes a set of subtasks and their dependencies that are unknown to the agent. The agent needs to quickly adapt to the task over few episodes during adaptation phase to maximize the return in the test phase. Instead of directly learning a meta-policy, we develop a Meta-learner with Subtask Graph Inference (MSGI), which infers the latent parameter of the task by interacting with the environment and maximizes the return given the latent parameter. To facilitate learning, we adopt an intrinsic reward inspired by upper confidence bound (UCB) that encourages efficient exploration. Our experiment results on two grid-world domains and StarCraft II environments show that the proposed method is able to accurately infer the latent task parameter, and to adapt more efficiently than existing meta RL and hierarchical RL methods. | This paper proposes a new meta-reinforcement learning algorithm, MSGI, which focuses on the problem of adapting to unseen hierarchical tasks through interaction with the environment where the external reward is sparse. The authors make use of subtask graph inference to infer the latent subtask representation of a task through interacting with the environment using an adaptation policy and then optimize the adaptation policy based on the inferred latent subtask structure. Each task in the paper is represented as a tuple of subtask precondition and subtask reward, which are inferred via logic induction and MLE of Gaussians respectively. At meta-test time, MSGI rollouts a subtask graph execution (SGE) policy based on the graph inferred from the interactions between the environment and the adaptation policy. The authors also propose a UCB-inspired intrinsic reward to encourage exploration when optimizing the adaptation policy. Experiments are conducted on two grid-world domains as well as StarCraft II.
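For context, a count-based UCB-style exploration bonus of the kind referred to in the abstract typically looks like the following; the exact form used by MSGI may differ (this is my own illustration):

```python
# Sketch of a count-based, UCB-style intrinsic reward (the paper's exact form may differ).
import numpy as np
from collections import defaultdict

counts = defaultdict(int)   # visit counts per (state, subtask/option) pair
t = 0                       # total number of decisions so far

def ucb_bonus(state, option, c=1.0):
    global t
    t += 1
    counts[(state, option)] += 1
    n = counts[(state, option)]
    return c * np.sqrt(np.log(t + 1) / n)   # large for rarely tried options, decays with n

# During adaptation, the agent would add this bonus to the (sparse) task reward.
print(ucb_bonus("s0", "mine_wood"), ucb_bonus("s0", "mine_wood"))
```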
Overall, this paper is mainly an extension of the prior work [1], which uses a subtask graph for tackling hierarchical RL problems. This work builds upon [1] by extending to meta-learning domains and studying generalization to new hierarchical tasks. While the contribution seems a bit incremental and the experimental setting is a bit unclear and limited to low-dimensional state space, the inference of task-specific subtask graphs based on past experiences and the proposal of a UCB-inspired reward shed some interesting insights on how to approach meta-hierarchical RL where long-horizon tasks and sparse rewards have been major challenges. Given some clarification on the experimental setup and additional results on more challenging domains in the author's response, I would be willing to improve my score.
Regarding the experimental setup, the set of subtasks is a Cartesian product of the set of primitive actions and a set of all types of interactive objects in the domain, while the state is represented as a binary 3-dimensional tensor indicating the position of each type of objects. Such a setup seems a bit contrived and is limited to low-dimensional state space and discrete action space, which makes me doubt its scalability to high-dimensional continuous control tasks. It would be interesting to see how/if MSGI can perform in widely used meta-RL benchmarks in Mujoco. I also wonder how MSGI can be compared to newly proposed context-based meta-RL methods such as PEARL.
As for the results, the authors don't provide an ablation study on the UCB exploration bonus though they claim they would show it in the paper. Moreover, the result of GRProp+Oracle is also missing in the comparison. I also don't understand why MSGI-Meta and RL2 would overfit in the SC2LE case and are unable to adapt to new tasks. Is that a limitation of the method? The authors also introduce MSGI-GRProp in this setting, which is never discussed before, and claim that MSGI-GRProp can successfully generalize to new tasks. It seems that the authors don't use a meta-RL agent in order to get this domain to work. I believe more discussion on this part is needed.
[1] Sungryull Sohn, Junhyuk Oh, and Honglak Lee. Hierarchical reinforcement learning for zero-shot generalization with subtask dependencies. In NeurIPS, pp. 7156–7166, 2018.
iclr_2020_HJg_ECEKDr | This paper investigates the intriguing question of whether we can create learning algorithms that automatically generate training data, learning environments, and curricula in order to help AI agents rapidly learn. We show that such algorithms are possible via Generative Teaching Networks (GTNs), a general approach that is, in theory, applicable to supervised, unsupervised, and reinforcement learning, although our experiments only focus on the supervised case. GTNs are deep neural networks that generate data and/or training environments that a learner (e.g. a freshly initialized neural network) trains on for a few SGD steps before being tested on a target task. We then differentiate through the entire learning process via meta-gradients to update the GTN parameters to improve performance on the target task. GTNs have the beneficial property that they can theoretically generate any type of data or training environment, making their potential impact large. This paper introduces GTNs, discusses their potential, and showcases that they can substantially accelerate learning. We also demonstrate a practical and exciting application of GTNs: accelerating the evaluation of candidate architectures for neural architecture search (NAS), which is rate-limited by such evaluations, enabling massive speed-ups in NAS. GTN-NAS improves the NAS state of the art, finding higher performing architectures when controlling for the search proposal mechanism. GTN-NAS also is competitive with the overall state of the art approaches, which achieve top performance while using orders of magnitude less computation than typical NAS methods. Speculating forward, GTNs may represent a first step toward the ambitious goal of algorithms that generate their own training data and, in doing so, open a variety of interesting new research questions and directions. | This paper proposes a meta-learning algorithm Generative Teaching Networks (GTN) to generate fake training data for models to learn more accurate models. In the inner loop, a generator produces training data and the learner takes gradient steps on this data. In the outer loop, the parameters of the generator are updated by evaluating the learner on real data and differentiating through the gradient steps of the inner loop. The main claim is this method is shown to give improvements in performance on supervised learning for MNIST and CIFAR10. They also suggest weight normalization patches up instability issues with meta-learning and evaluate this in the supervised learning setting, and curriculum learning for GTNs.
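To fix ideas about what differentiating through the learner's training means, here is a stripped-down sketch of the meta-gradient in PyTorch, with a linear learner and a tiny generator; it is a caricature of the GTN setup for illustration, not the authors' implementation:

```python
# Stripped-down sketch of a GTN-style meta-gradient: the generator's parameters are
# updated by backpropagating through the learner's inner-loop SGD steps.
# A linear learner and a tiny generator are used purely for illustration.
import torch

torch.manual_seed(0)
x_real = torch.randn(64, 4)
y_real = (x_real.sum(dim=1, keepdim=True) > 0).float()   # toy "target task"

gen = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 5))
meta_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
inner_lr, inner_steps = 0.1, 5

for outer_step in range(200):
    # fresh learner each outer iteration: a linear model with explicit parameters
    w = torch.zeros(4, 1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    params = [w, b]

    for _ in range(inner_steps):                         # inner loop on generated data
        z = torch.randn(32, 8)
        fake = gen(z)                                    # columns: 4 features + 1 label
        x_fake, y_fake = fake[:, :4], torch.sigmoid(fake[:, 4:])
        logits = x_fake @ params[0] + params[1]
        inner_loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y_fake)
        grads = torch.autograd.grad(inner_loss, params, create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]   # keep the graph

    # outer loss on real data; gradients flow back into the generator
    outer_logits = x_real @ params[0] + params[1]
    outer_loss = torch.nn.functional.binary_cross_entropy_with_logits(outer_logits, y_real)
    meta_opt.zero_grad()
    outer_loss.backward()
    meta_opt.step()
```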
To me, the main claim is very surprising and counter-intuitive - it is not clear where the extra juice is coming from, as the algorithm does not assume any extra information. I believe the actual results do not bear out this claim, because the results on MNIST and CIFAR10 are significantly below the state of the art. On MNIST, GTN achieves about 98% accuracy and the baseline "Real Data" achieves <97% accuracy, while the state of the art is about 99.7% and well-tuned convnets without any pre-processing or fancy extras achieve about 99% according to Yann LeCun's website. The disparity on CIFAR seems to be less egregious, but the state of the art stands at 99% while the best GTN model (without cutout) achieves about 96.2%, which matches good convnets and is slightly worse than neural architecture search according to https://paperswithcode.com/sota/image-classification-on-cifar-10.
This does not negate the potential of GTNs which I feel are an interesting approach, but I believe the paper should be more straightforward with the presentation of these results. The current results basically show that GTNs improve the performance of learners with bad hyper-parameters. On problems that are not as well-studied as MNIST or CIFAR10 this could still be very valuable (as we do not know what performance is good or bad in advance). Based on the results, GTN does seem to be a significant step forward in synthetic data generation for learning compared to prior work (Zhang 2018, Luo 2018).
The paper proposes two other contributions: using weight normalization for meta-learning and curriculum learning for GTNs. Weight normalization is shown to stabilize GTNs on MNIST. I think the paper oversteps in the relevant method section, hypothesizing it may stabilize meta-learning more broadly. The paper should present a wider set of experiments to make this claim convincing. But the point for GTNs on MNIST nevertheless stands. For curriculum learning: the description of the method is spread across section 2 and section 3.2 and is not really complete. How exactly are the samples chosen in GTN - All Shuffled? How do GTN - Full Curriculum and Shuffled Batch parametrize the order of the samples so that it can be learned? I suggest that this information is all included as a subsection in the method (section 2). The results seem to show the learned curriculum is superior to no curriculum.
At a high level it would be very surprising to me if the way forward for better discriminative models was to learn good generative models and use them again for training discriminative models, simply because discriminative models have proved thus far significantly easier to train. If this work does eventually show this, it would be a very interesting result. At the moment, I believe it does not, but I would be happy to change my mind if the authors provide convincing evidence. Alternatively, I feel that the paper could be a valuable contribution to the community if the writing is toned down to focus on the contributions, the results are presented in comparison with well-tuned hyperparameters, and the claims are not overstated.
More comments:
What is the outer loop loss function? Is it assumed to be the same as the inner one (but using real data instead of training data)? I think this should be made explicit in the method section.
There are some additional experiments in other settings such as RL and unsupervised learning. Both seem like quite interesting directions but are preliminary experiments that don't work convincingly yet. The RL experiment shows that using GTN does not change performance much. There is a claim about optimizing randomly initialized networks at each step, but the baseline which uses randomly initialized networks at each step with A2C is missing. The GAN experiment shows that the GAN loss makes the GTN-generated data realistic (as expected), but there are no quantitative results on mode collapse. (Another interesting experiment would be to show how adding a GAN loss for generating data affects the test performance of the method.) Perhaps it would benefit the paper to narrow in on supervised learning? Given that these final experiments are not polished, the claim in the abstract that the method is “a general approach that is applicable to supervised, unsupervised, and reinforcement learning” seems to be over-claiming. I understand it can be applicable but the paper has not really done the work to show this outside the supervised learning setting.
Minor comments:
Pg. 4: comperable -> comparable |
iclr_2020_rygtPhVtDS | Modelling statistical relationships beyond the conditional mean is crucial in many settings. Conditional density estimation (CDE) aims to learn the full conditional probability density from data. Though highly expressive, neural network based CDE models can suffer from severe over-fitting when trained with the maximum likelihood objective. Due to the inherent structure of such models, classical regularization approaches in the parameter space are rendered ineffective. To address this issue, we develop a model-agnostic noise regularization method for CDE that adds random perturbations to the data during training. We demonstrate that the proposed approach corresponds to a smoothness regularization and prove its asymptotic consistency. In our experiments, noise regularization significantly and consistently outperforms other regularization methods across seven data sets and three CDE models. The effectiveness of noise regularization makes neural network based CDE the preferable method over previous non- and semi-parametric approaches, even when training data is scarce. | The paper considers the problem of parametric conditional density estimation, i.e. given a set of points {(x_n, y_n)} drawn from a distribution $p(x,y)$, the task is to estimate the conditional distribution p(x|y). The paper considers parametric estimation wherein, given a parametrized family of distributions f_{theta}, we wish to maximize the likelihood of the given data over theta. The parametric family in a lot of applications consists of highly expressive families like neural networks, which leads to the issue of overfitting in small data regimes. This has been tackled via regularization over the parameter space which might be hard to interpret as the associated inductive bias is not well understood and depends on the parametric family under consideration. On the other hand the paper proposes to add explicit noise in the examples used during training, i.e. irrespective of the optimization procedure (which could be mini-batch SGD) the paper proposes to draw examples from the data set, explicitly add noise onto the examples and create a proxy objective over the augmented data set.
The paper establishes two theoretical results. First is a simple Taylor-approximation-based analysis to highlight the effect of the noise variance. The conclusion is that higher variance penalizes high-curvature areas of the resulting density, and hence this kind of noise addition can be seen as making the resulting density smoother. The second contribution is to show that this procedure is asymptotically consistent, i.e. as n and the number of augmented data points go to infinity, the resulting density converges to the target density. This of course requires the noise variance to follow a schedule decreasing to 0.
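To make the first result concrete, my understanding is that it is essentially the standard second-order expansion of the perturbed objective (my notation, not necessarily the paper's; $h^2$ is the noise variance and $f_\theta$ is the modelled density evaluated at a data point):

$$ \mathbb{E}_{\xi \sim \mathcal{N}(0,\,h^2 I)}\big[\log f_\theta(x+\xi_x,\; y+\xi_y)\big] \;\approx\; \log f_\theta(x, y) \;+\; \frac{h^2}{2}\,\operatorname{tr}\,\nabla^2_{(x,y)} \log f_\theta(x, y), $$

so maximizing the noise-perturbed likelihood adds a curvature penalty on the log-density whose strength scales with $h^2$; this is also why the variance has to be annealed to zero for the consistency result to hold.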
The main merit of the idea is its agnostic nature, as it can be applied to any parametric family, and the experiments show that it seems to yield uniform improvements across models. The basic idea proposed by the paper has existed in the space of deep-learning-based methods forever. This is the same idea behind image augmentation, which forms a crucial part of training supervised models in vision. The authors claim that this idea is novel in the space of parametric density estimation; however, I do not know enough about the area to verify the claim. It would surprise me if this very natural idea has not been tried before.
I have gone through the theoretical derivations in the paper and they look sound to me. However, the results are all asymptotic in nature without establishing explicit rates, which is a bit of a disappointment. I am not completely familiar with this literature, but I guess such asymptotic consistency might be achievable using other forms of regularization under suitable assumptions. In that light, the theoretical contributions, while sound, did not lend much intuition about why such a method might outperform others. Intuition does arise from the derivation of the effect of noise on the objective, which helps understand the nature of the noise, but one wonders if similar intuitions could be derived for other forms of regularization as well. It would be great to see this derivation extended to some concrete scenarios under well-understood parametric families, with the effect shown explicitly.
Regarding the experiments - the results definitely look promising, as the improvements seem uniformly good across the cases considered. However, I am not an expert in this setting, so it is hard for me to judge the quality and significance of the benchmarks. The experimental methodology nevertheless looks sound.
iclr_2020_ryl0cAVtPH | We design a new algorithm for batch active learning with deep neural network models. Our algorithm, Batch Active learning by Diverse Gradient Embeddings (BADGE), samples groups of points that are disparate and high-magnitude when represented in a hallucinated gradient space, a strategy designed to incorporate both predictive uncertainty and sample diversity into every selected batch. Crucially, BADGE trades off between diversity and uncertainty without requiring any hand-tuned hyperparameters. While other approaches sometimes succeed for particular batch sizes or architectures, BADGE consistently performs as well or better, making it a useful option for real world active learning problems. | This paper examines the problem of warm-starting the training of neural networks. In particular, a generalization gap arises when the network is trained on the full training set from the start versus being warm-started, where the network is initially (partially) trained on a subset of the training set, then switched to the full training set. This problem is practical, as it is often preferable to train online while data is collected to make up-to-date predictions for tasks (such as in online advertising or recommendation systems), but it has been found that retraining is necessary in order to obtain optimal performance. The paper also mentions active learning, domain shift, and transfer learning as other relevant and important problems.
The paper attempts to investigate this phenomenon through a few different avenues: (1) simple experiments segmenting the training set into two different subsets, then training to completion or partially training on the first subset before switching to training on the full set; (2) looking at the gradients of warm-started models; (3) adding regularization; (4) warm-starting all layers, then training only the last layer; (5) perturbing the warm-started parameters.
Strengths:
I believe very strongly in the practical impact of the problems presented in this paper. These indeed are challenging problems that are relevant to industry that have not been given sufficient attention in the academic literature. I appreciate the initial experimentation on this subject, and the clear demonstration of the problem through simple experiments. The paper is also well-written.
Weaknesses:
Some questions I had include:
- Why is the Pearson correlation between parameters of the neural network a good way of measuring the correlation to their initialization?
- Why is it surprising that the magnitude of the gradients of the "new" data is higher than at a random initialization?
- Why does this phenomenon occur even though the data is sampled from the same distribution?
- Does this work have any relationship with work on generalization such as:
[1] Recht, Benjamin, et al. "Do CIFAR-10 classifiers generalize to CIFAR-10?." arXiv preprint arXiv:1806.00451 (2018).
[2] Recht, Benjamin, et al. "Do ImageNet Classifiers Generalize to ImageNet?." arXiv preprint arXiv:1902.10811 (2019).
etc.
Although I like the topic of this paper, the investigation seems too preliminary at this point. There is no clear hypothesis towards answering the problems proposed in the paper. There is also no analysis, which places the burden on the numerical experiments to demonstrate something interesting, and the experiments seem sparse and small-scale. For these reasons, I am inclined to reject this paper at this time, but I strongly encourage further exploration into the topic.
Some potential questions or directions could include:
1. What if only a single epoch of training is used on 50% of the data? Does the gap appear in that setting? I ask because one would expect that a single epoch of training on 50% of the data, then training on new data would be equivalent to training on the full dataset from the start.
2. How does this gap change with respect to the (relative) amount of new data introduced into the problem? For example, if one were to only add a single datapoint to the training set, would one still observe this behavior? Could one potentially add data more incrementally (rather than half of the training set) and potentially mitigate this drastic change in the problem?
3. There are optimization algorithms specifically designed for stochastic optimization (with a fixed distribution) versus for online optimization (online gradient, Adagrad). Is the online optimization framework perhaps more "realistic" than the stochastic optimization framework in these streaming/warm-starting settings? |
iclr_2020_rygf-kSYwH | This paper introduces the Behaviour Suite for Reinforcement Learning, or bsuite for short. bsuite is a collection of carefully-designed experiments that investigate core capabilities of reinforcement learning (RL) agents with two objectives. First, to collect clear, informative and scalable problems that capture key issues in the design of general and efficient learning algorithms. Second, to study agent behaviour through their performance on these shared benchmarks. To complement this effort, we open source github.com/anon/bsuite, which automates evaluation and analysis of any agent on bsuite. This library facilitates reproducible and accessible research on the core issues in RL, and ultimately the design of superior learning algorithms. Our code is Python, and easy to use within existing projects. We include examples with OpenAI Baselines, Dopamine as well as new reference implementations. Going forward, we hope to incorporate more excellent experiments from the research community, and commit to a periodic review of bsuite from a committee of prominent researchers. | This paper presents the « Behavior Suite for Reinforcement Learning » (bsuite), which is a set of RL tasks (called « experiments ») meant to evaluate an algorithm’s ability to solve various key challenges in RL. Importantly, these experiments are designed to run fast enough that one can benchmark a new algorithm within a reasonable amount of time (and money). They can thus be seen as a « test suite » for RL, limited to small toy problems but very useful to efficiently debug RL algorithms and get an overview of some of their key properties. The paper describes the motivation behind bsuite, shows detailed results from some classical RL algorithms on a couple of experiments, and gives a high-level overview of how the code is structured.
I really believe such a suite of RL tasks can indeed be extremely useful to RL researchers developing new algorithms, and as a result I would like to encourage this initiative and see it published at ICLR to help it gain additional traction within the RL community.
The paper is easy to read, motivates well the reasons behind bsuite, and shows some convincing examples. However, in my opinion there remain a few important issues with this submission:
1. There is no « related work » section to position bsuite within the landscape of RL benchmarks (ex: DMLab, ALE / MinAtar, MuJoCo tasks, etc.). I believe it is important to add one.
2. The current collection of experiments appears to be quite limited. The authors acknowledge the lack of hierarchical RL, but what about other aspects like continuous control, parameterized actions, multi-agent, state representation learning, continual learning, transfer learning, imitation learning / inverse RL, self-play, etc? It is unclear to me whether the goal is to grow bsuite in all these directions (and more) over time, or if there is some kind of « boundary » the authors have in mind regarding the scope of bsuite. Regardless, the fact is that in its current form, bsuite appears to be suited only to a limited subset of current RL research.
3. I wish an anonymized version of the code had been provided, so that reviewers could test it. In particular I wonder (a) if it is easy to set up and run under Windows, and (b) if it is straightforward to plug a bsuite experiment into an algorithm based on the popular OpenAI gym API (I think the latter is true from what is said at the end of Section 4, but I would have appreciated being able to try it out myself).
Additional minor remarks:
• I noticed two anonymity-related issues with the provided links: (1) the Google Colab notebook revealed to me the name of its author when clicking the « Open in Playground » link to be able to run it, and (2) the bsuite-tutorial link asks for permission, which might let the authors access reviewer info. I would not hold it against the authors though as I believe these are genuine mistakes and they did their best to preserve anonymity.
• t > 2 in Section 2.1 should probably be t >= 2
• In Fig. 2b the label for the y-axis seems incorrect since good results are near 0
• Please explain what is the dashed grey line in Fig. 4b
• I was unable to understand the last 2 sentences of Section 4
• Sections C.2, D.2 and E.2 all have the same plots
• A few typos: incomplete sentence near bottom of p.3 (« the internal workings… »), « These assessment », « expeirments », « recurrant », « length ? 1 », « together with an analysis parses this data », « anonimize », « bsuite environments by implementing », « even if require »
Review update: the authors have addressed my concerns, and I look forward to using bsuite in my research => review score increased to "Accept" |
iclr_2020_rkxEKp4Fwr | Deep Neural Networks (DNNs) often rely on very large datasets for training. Given the large size of such datasets, it is conceivable that they contain certain samples that either do not contribute or negatively impact the DNN's optimization. Modifying the training distribution in a way that excludes such samples could provide an effective solution to both improve performance and reduce training time. In this paper, we propose to scale up ensemble Active Learning methods to perform acquisition at a large scale (10k to 500k samples at a time). We do this with ensembles of hundreds of models, obtained at a minimal computational cost by reusing intermediate training checkpoints. This allows us to automatically and efficiently perform a training data subset search for large labeled datasets. We observe that our approach obtains favorable subsets of training data, which can be used to train more accurate DNNs than training with the entire dataset. We perform an extensive experimental study of this phenomenon on three image classification benchmarks (CIFAR-10, CIFAR-100 and ImageNet), analyzing the impact of initialization schemes, acquisition functions and ensemble configurations. We demonstrate that data subsets identified with a lightweight ResNet-18 ensemble remain effective when used to train deep models like ResNet-101 and DenseNet-121. Our results provide strong empirical evidence that optimizing the training data distribution can provide significant benefits on large scale vision tasks. | Review Summary
--------------
Overall, I'm not quite convinced this method would be worth the trouble to implement. On the more realistic benchmarks, they need to keep ~80% of the total dataset size and the claimed "improvement" is rather small (less than 0.6% absolute gain in accuracy, e.g. from 81.86% to 82.37% on CIFAR100 and from 72.33% to 72.78% on ImageNet). There is no runtime comparison, there are missing baselines, and most of the method development seems guided by trying out many options instead of taking a principled approach. Without these, the paper is just not ready for a top conference like ICLR.
Paper Summary
-------------
The paper considers a new take on active learning for image classification: given a large fully labeled dataset, identify a subset of the data that, when training on that subset alone, yields similar performance as training on the (much larger) full dataset. The paper focuses on "ensembles" of deep neural network classifiers as the prediction model, following Lakshminarayanan et al. (2017).
The presented method is summarized in Algorithm 1. Given a suitably initialized "acquisition model", the model makes predictions on each example in the full dataset, then ranks examples using an acquisition function to find the subset of size N_s (top N_s examples by rank) where there is most "disagreement" among the model ensemble. This subset is then used to train a "subset" model (again, an ensemble of DNNs).
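For reference, the selection step as I understand it boils down to something like the sketch below; the ensemble members and the particular acquisition function (here predictive entropy of the ensemble-averaged probabilities) are stand-ins of my own, not necessarily the paper's exact choices.

```python
# Rough sketch of the Algorithm-1-style subset selection described above (my own stand-ins).
import jax
import jax.numpy as jnp

def acquisition_scores(probs):
    # probs: (n_members, n_examples, n_classes) softmax outputs of the cheap acquisition ensemble
    p_mean = probs.mean(axis=0)                                  # ensemble-averaged prediction
    return -jnp.sum(p_mean * jnp.log(p_mean + 1e-12), axis=-1)   # high entropy = high disagreement/uncertainty

def select_subset(probs, subset_size):
    scores = acquisition_scores(probs)
    return jnp.argsort(-scores)[:subset_size]                    # indices of the top-N_s examples

# toy usage: 5 checkpoint "members", 1000 examples, 10 classes
key = jax.random.PRNGKey(0)
logits = jax.random.normal(key, (5, 1000, 10))
probs = jax.nn.softmax(logits, axis=-1)
subset_idx = select_subset(probs, subset_size=800)
# the "subset model" (the second ensemble of DNNs) is then trained only on the examples in subset_idx
```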
Experiments consider several possible initializations, acquisition functions, and ensemble sizes. Evaluation is done using the validation sets of three prominent image classification benchmarks: CIFAR10, CIFAR100, and ImageNet (1000 classes).
Significance
------------
I don't think a successful case has been made that the proposed solution would generate significant widespread interest, because the gains demonstrated here are too minimal. Looking at the primary results in Table 2, it's really only when using 80% of the total images of imagenet or cifar100 (the most realistic benchmarks) that there is a small (<1%) absolute gain in accuracy over the simpler approach of just using the full dataset. Thus, the approach is not going to significantly reduce computational burden but adds a lot of complexity.
Novelty
-----------
The method seems new to me.
Experimental Concerns
---------------------
## E1: Need to consider runtime in evaluation
None of the figures/tables that I can see report elapsed runtimes for the different methods. To me this is the fundamental tradeoff: not how many fewer examples can I learn from, but how much faster is the method than the "standard" of using the full dataset? Showing curves of validation progress over wallclock time would be a better way to present results.
The important thing here is that even the *full dataset* baseline makes progress after each minibatch. You'd need to show progress at checkpoints at each of epochs 0.2, 0.4, 0.6, 0.8, 1, 1.2, 1.4, ....
## E2: Potential missing baseline: Random subset with balanced classes
When you select a random subset, are you ensuring class balance? If not, that seems like a more refined baseline. Perhaps won't make too much difference for cifar10, but could be important for ensuring rarer classes in ImageNet are represented well.
Presentation Concerns
---------------------
## P1: Initialization description confusing
I didn't understand the "build up" method as described in Sec. 3. How large is the subset used at each phase? How do you know when to stop? This could use a rewrite to improve clarity.
## P2: Missing some details for reproducibility
How was convergence assessed for all models? How were learning rates set? Many of these are crucial to understanding the runtime required for different models. (Sorry if these are in the appendix, but some short summary is needed in the main paper)
## P3: Title Change Recommended
I don't think the presented method is really doing a "Distribution Search"... I would suggest "Training Data Subset Selection with Ensemble Active Learning"
Minor Method Concerns
---------------------
## M1: What about regression?
Acquisition functions seem specialized to classification. What to do for regression or structure learning? Any general principles to recommend? |
iclr_2020_rygEokBKPS | We introduce a new black-box attack achieving state of the art performances. Our approach is based on a new objective function, borrowing ideas from ℓ∞ white-box attacks, and particularly designed to fit derivative-free optimization requirements. It only requires to have access to the logits of the classifier without any other information which is a more realistic scenario. Not only we introduce a new objective function, we extend previous works on black box adversarial attacks to a larger spectrum of evolution strategies and other derivative-free optimization methods. We also highlight a new intriguing property that deep neural networks are not robust to single shot tiled attacks. Our models achieve, with a budget limited to 10,000 queries, results up to 99.2% of success rate against InceptionV3 classifier with 630 queries to the network on average in the untargeted attacks setting, which is an improvement by 90 queries of the current state of the art. In the targeted setting, we are able to reach, with a limited budget of 100,000, 100% of success rate with a budget of 6,662 queries on average, i.e. we need 800 queries less than the current state of the art. | This paper proposes a new query-efficient black-box attack algorithm using better evolution strategies. The authors also add a tiling trick to make the attack even more efficient. The experimental results show that the proposed method achieves state-of-the-art attack efficiency in the black-box setting.
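For readers less familiar with derivative-free attacks, the simplest of the evolution strategies considered, (1+1)-ES, applied in the tanh-reparameterized search space of eq. (2), looks roughly like the sketch below. The step-size rule is a generic 1/5th-success heuristic and `query_loss` is a stand-in for one logits query to the target model, so this is my reconstruction rather than the authors' exact algorithm (and it omits the tiling).

```python
# Bare-bones (1+1)-ES black-box attack sketch (my reconstruction, not the authors' code).
import jax
import jax.numpy as jnp

def one_plus_one_es_attack(x, query_loss, eps, key, budget=1_000, sigma=1.0):
    tau = jnp.zeros_like(x)                           # search variable; perturbation = eps * tanh(tau)
    best = query_loss(x + eps * jnp.tanh(tau))
    for _ in range(budget):
        key, sub = jax.random.split(key)
        cand = tau + sigma * jax.random.normal(sub, tau.shape)
        val = query_loss(x + eps * jnp.tanh(cand))    # one query to the black box
        if val > best:                                # untargeted: increase the classification loss
            tau, best = cand, val
            sigma *= 1.5                              # success: enlarge the step size
        else:
            sigma /= 1.5 ** 0.25                      # failure: shrink it slowly
    return x + eps * jnp.tanh(tau)

# toy check with a quadratic stand-in for the model's loss
x_adv = one_plus_one_es_attack(jnp.zeros(16), lambda z: jnp.sum(z ** 2),
                               eps=0.05, key=jax.random.PRNGKey(0))
```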
The paper indeed presented slightly better results than the current state-of-the-art black-box attacks. It is clearly written and easy to follow, however, the paper itself does not bring much insightful information.
The major components of the proposed method are two things: using better evolution strategies and using tiling trick. The tiling trick is not something new, it is introduced in (Ilyas et al., 2018) and also discussed in (Moon et al., 2019). The authors further empirically studied the best choice of tiling size. I appreciated that, but will not count it as a major contribution. In terms of better evolution strategies, the authors show that (1+1) and CMA-EA can achieve better attack result but it lacks intuition/explanations why these helps, what is the difference. It would be best if the authors could provide some theories to show the advantages of the proposed method, if not, at least the authors should give more intuition/explanation/demonstrative experiments to show the advantages.
Detailed comments:
- In section 3.2, is the form of the discretized problem a standard way to transform from continuous to discrete one? What is the intuition of using a and b? Have you considered using only one variable to do it?
- In section 3.3.2 what do you mean by “with or without softmax, the optimum is at infinity”? I hope the authors could further explain it.
- In eq (2), do you mean max_{\tau} L(f(x + \epsilon tanh(\tau)), y) ?
- In section 3.3.1, the authors said (1+1)-ES and CMA-ES can be seen as an instantiation of NES. Can the authors further elaborate on this?
- Can the authors provide algorithm for DiagonalCMA?
- It is better to put the evolution strategy algorithms in the main paper and discuss it.
- Can the authors also comment/compare the results with the following relevant paper?
Li, Yandong, et al. "NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks." ICML 2019.
Chen, Jinghui, Jinfeng Yi, and Quanquan Gu. "A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks." arXiv preprint arXiv:1811.10828 (2018).
- In Table 1, why are the # of tiles entries missing for the Parsimonious and Bandit methods? I think both of these baselines use the tiling trick? And they should also be run using the optimal tiling size? The results seem directly copied from the Parsimonious paper? It makes more sense to rerun them in your setting and environment because the sampled data points may not be the same. Since CMA costs significantly more time, it would make for a fairer comparison to also report the attack time needed for each method.
- In Table 3, why did not compare with Bandit and Parsimonious attacks?
======================
after the rebuttal
I thank the authors for their response but I still feel that there is a lot more to improve for this paper in terms of intuition and experiments. Therefore I decided to keep my score unchanged. |
iclr_2020_BJlAzTEKwS | In reinforcement learning, robotic control tasks are often useful for understanding how agents perform in environments with deceptive rewards where the agent can easily become trapped into suboptimal solutions. One way to avoid these local optima is to use a population of agents to ensure coverage of the policy space (a form of exploration), yet learning a population with the "best" coverage is still an open problem. In this work, we present a novel approach to population-based RL in continuous control that leverages properties of normalizing flows to perform attractive and repulsive operations between current members of the population and previously observed policies. Empirical results on the MuJoCo suite demonstrate a high performance gain for our algorithm compared to prior work, including Soft-Actor Critic (SAC). | RL in environments with deceptive rewards can produce sub-optimal policies. To remedy this, the paper proposes a method for population-based exploration. Multiple actors, each parameterized with policies based on Normalizing Flows (radial contractions), are optimized over iterations using the off-policy SAC algorithm. To encourage diverse exploration as well as high performance, the SAC policy gradient is supplemented with the gradient of an "attraction" or "repulsion" term, defined using the KL-divergence of the current policy to another policy from an online archive. When applying the KL-gradient, the authors find it crucial to only update the flow layers, and not the base Gaussian policy.
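Concretely, I read the actor update as optimizing something of the form (my notation; $\beta$, the sign $c_{\pi'}$ and the archive $\mathcal{A}$ are how I would write it, not necessarily the paper's):

$$ \max_{\phi}\;\; J_{\mathrm{SAC}}(\pi_\phi) \;+\; \beta \sum_{\pi' \in \mathcal{A}} c_{\pi'}\, D_{\mathrm{KL}}\big(\pi_\phi(\cdot \mid s)\,\|\,\pi'(\cdot \mid s)\big), \qquad c_{\pi'} \in \{+1,-1\}, $$

where $c_{\pi'}=+1$ gives a repulsive term (push away from an archived policy) and $c_{\pi'}=-1$ an attractive one, and the gradient of the KL terms is applied only to the normalizing-flow parameters, not to the base Gaussian policy.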
I have 2 issues with this paper:
1. Lack of novelty – Most of the components of the proposed algorithm have been researched in other works. SAC with NF was contributed by Mazoure et. al., and this paper seems to use the exact same architecture. The diversity-enforcing objective (Equation 1.) was proposed by Hong et. al., albeit not with a separate archive of policies, but using the experience replay (for off-policy) and recent policies (for on-policy). The objective is Section 3.2 is also the same as that in Hong et. al.
2. I have major concerns about how the results have been reported:
a. The authors have omitted SAC and SAC-NF from Figure 4, and therefore the figure gives the impression that ARAC is significantly better than the presented baselines in some of the environments. For example, take Humanoid-v2. The original SAC paper reports reaching close to 5k score in about 0.5 million timesteps. It therefore seems that most of the benefit of ARAC (over the presented baselines) comes just from using SAC. Another example is Humanoid-sparse, where SAC-NF achieves ~550 (from Table 1), but is not shown in the figure, making ARAC (score ~600) look awesome. The authors also incorrectly mention in text that “ARAC is the only method that can achieve non-zero performance” on this.
b. Table 1 reports “maximum” average return. This is very misleading and non-standard in RL. Take Humanoid-sparse for instance. The number reported for ARAC is 816 which is the peak performance during training. As we see in Figure 4, ARAC is unstable and the perf. drops to ~600 at end of training. The peak performance during training is an irrelevant metric.
c. Humanoid-rllab range on y-axis looks incorrect
Minor points:
For creating a diverse archive, I’m not sure if k-means on the episodic-returns is the most effective mechanism. As explored in Conti et al., and also mentioned in the introduction of this paper, behavioral-diversity is a more useful characterization, and episodic-returns may not be aligned to that (especially in sparse reward environments).
Some of the missing related work (on population-based diversity): DIAYN, Learning Self-Imitating Diverse Policies, Divide and Conquer RL. |
iclr_2020_B1eP504YDr | Most of existing advantage function estimation methods in reinforcement learning suffer from the problem of high variance, which scales unfavorably with the time horizon. To address this challenge, we propose to identify the independence property between current action and future states in environments, which can be further leveraged to effectively reduce the variance of the advantage estimation. In particular, the recognized independence property can be naturally utilized to construct a novel importance sampling advantage estimator with close-to-zero variance even when the Monte-Carlo return signal yields a large variance. To further remove the risk of the high variance introduced by the new estimator, we combine it with existing Monte-Carlo estimator via a reward decomposition model learned by minimizing the estimation variance. Experiments demonstrate that our method achieves higher sample efficiency compared with existing advantage estimation methods in complex environments. | This paper proposes a novel advantage estimate for reinforcement learning based on estimating the extent to which past actions impact the current state. More precisely, the authors train a classifier to predict the action taken k-steps ago from the state at time t, the state at t-k and the time gap k. The idea is that when it is not possible to accurately predict the action, the action choice had no impact on the current state, and thus should not be assigned credit for the current reward, they refer to this as the "independence property" between current action and future states. Based on this idea, the authors introduce a "dependency factor", using the ratio P(s_{t+k},a_{t+k}|s_t,a_t)/P(s_{t+k},a_{t+k}|s_t). They later show that this can be reworked using Bayes theorem into a ratio of the form P(a_t|s_t,s_{t+k})/\pi(a_t|s_t) which is more convenient to estimate. The authors show mathematically that, when the dependency factor is computed with the true probabilities and use to weight each reward in a trajectory, the result is an unbiased advantage estimator. Importantly the expectation, in this case, is taken over trajectories sampled according to the policy pi conditioned only on S_t. This is distinct from the Monte-Carlo estimator which is based only on samples in which A_t, the action whose advantage is being estimated, was selected.
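For concreteness, the Bayes-rule rework mentioned above is (in my notation, reconstructed from the quantities quoted in this summary):

$$ \frac{P(s_{t+k}, a_{t+k} \mid s_t, a_t)}{P(s_{t+k}, a_{t+k} \mid s_t)} \;=\; \frac{P(a_t \mid s_t, s_{t+k}, a_{t+k})\; P(s_{t+k}, a_{t+k} \mid s_t)\,/\,\pi(a_t \mid s_t)}{P(s_{t+k}, a_{t+k} \mid s_t)} \;=\; \frac{P(a_t \mid s_t, s_{t+k}, a_{t+k})}{\pi(a_t \mid s_t)}, $$

so when the future is genuinely independent of $a_t$, the classifier's posterior collapses to the policy prior, the factor equals 1 for every action, and the corresponding reward contributes nothing action-specific to the advantage.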
They go on to say that this estimator will tend to have lower variance than the conventional Monte-Carlo estimator when future rewards are independent of current actions. However, the variance can actually be higher, due to the importance sampling ratio used, when future rewards are highly dependent on the current action. They propose a method to combine the two estimators on a per-reward basis while maintaining unbiasedness, using a control variate style decomposition. This introduces a tunable reward decomposition parameter which determines how to allocate each reward between the two estimators. The authors propose a method to tune this parameter by approximately optimizing an upper bound on the variance of the combined estimator.
As a final contribution, the authors introduce a temporal-difference method of estimating the action probability P(a_t|s_t,s_{t+k}) required by their method.
In the experiments, the authors provide empirical evidence that various aspects of their proposed method can work as suggested on simple problems. They also provide a simple demonstration where their advantage estimator is shown to improve sample efficiency in a control problem.
This paper suffers from moderate clarity issues, but I lean toward acceptance primarily because I feel that the central idea is a solid contribution. The idea of improving credit assignment by explicitly estimating how much actions impact future states seems reasonable and interesting. The temporal difference method introduced for estimating P(a_t|s_t,s_{t+k}) is also interesting and clever. I'm less confident in the introduced method for trading off between the Monte Carlo and importance sampling estimators. The experiments seem reasonably well executed and do a fair job of highlighting different aspects of the proposed method.
The derivation of the combined estimator was very confusing to me. It's strange that the derivation of the variance lower bound includes terms which are drawn from both a state conditional and state-action conditional trajectory. You're effectively summing variances computed with respect to two different measures, but the quantity being bounded is referred to as just the "variance of the advantage estimator". What measure is this variance supposed to be computed with respect to? Especially given that as written the two estimators rely on samples drawn from two different measures. It doesn't help that the advantage estimator whose variance is being constructed is never explicitly defined but just referred to as "advantage estimator derived from equation 3". Nevertheless, if we ignore the details of what exactly it is a lower bound of, the sum of the three variances in equation 5 seems a reasonable surrogate to minimize.
Related to the above point I don't fully understand what the variances shown in table 1 mean in the experiments section. For the IAE estimator for example, is the variance computed based on each sample using three independent trajectories (one for each term) or is it computed from single trajectories? If it's from single trajectories I can't understand how the expression would be computed.
Questions for the authors:
-Could you please explicitly define the "advantage estimator derived from equation 3"?
-Could you please explain precisely how the variance is computed in table 1?
Update:
Having read the other reviews and the authors' response, I will maintain my score of a weak accept, though given more granularity I would raise my score to a 7 after the clarifications. I appreciate the authors' clarification of the advantage estimator and feel the related changes to the paper are very helpful. I still feel the central idea of the work is quite strong.
However, I also feel the control variate part of the work is very loosely specified. In particular, given the use of function approximation in practice instead of actually sampling 3 trajectories, the validity of the control variate method as applied is questionable. As the authors say, "if the random variable in either term has high variance, function approximators will tend to have large error". This may be true initially, but the function approximator can already reduce variance over time by learning, so it's not clear how the function approximators and the control variate complement each other. This is something I feel would be worthwhile to explore more in future work.
Also, I feel it's worth pointing out that a concurrent paper presenting a very similar idea is scheduled to be presented at NeurIPS 2020, which can be found here: https://papers.nips.cc/paper/9413-hindsight-credit-assignment. I don't feel this in any way undermines the contribution of the work presented here, but merely wanted to make the meta reviewer aware in case it was relevant to their decision. In fact, I feel this work complements that one in a number of ways, including the presentation of the temporal difference method for learning the action probabilities. |
iclr_2020_SJgBra4YDS | Deep image prior (DIP) (Ulyanov et al., 2018), which utilizes a deep convolutional network (ConvNet) structure itself as an image prior, has attracted huge attentions in computer vision community. It empirically shows the effectiveness of ConvNet structure for various image restoration applications. However, why the DIP works so well is still unknown, and why convolution operation is useful for image reconstruction or enhancement is not very clear. In this study, we tackle these questions. The proposed approach is dividing the convolution into "delay-embedding" and "transformation (i.e., encoder-decoder)", and proposing a simple, but essential, image/tensor modeling method which is closely related to dynamical systems and self-similarity. The proposed method named as manifold modeling in embedded space (MMES) is implemented by using a novel denoising-auto-encoder in combination with multi-way delay-embedding transform. In spite of its simplicity, the image/tensor completion, super-resolution, and deconvolution results of MMES are quite similar even competitive to DIP in our extensive experiments, and these results would help us for reinterpreting/characterizing the DIP from a perspective of "low-dimensional patch-manifold prior". | In this paper, the authors present a natural image model based on the manifold of image patches. It is similar to the Deep Image Prior in that it is untrained and has a convolutional-like structure. It leads to an optimization problem with a reconstruction loss term and an auto encoding term. The authors show empirical results in time series recovery, non-semantic inpainting, and super resolution. In the image processing tasks, the performance of the proposed algorithm is on par (sometimes slightly worse, sometimes slightly better) than that of DIP.
I think the perspective of image patch analysis is a useful addition to the knowledge base for unlearned image priors. That said, the paper says it tackles the questions for why the DIP "works so well" and why convolution operations are "essential for image reconstruction or enhancement". After reading the paper, it is unclear to me how this work addresses these questions. In particular, demonstrating a similar convolutional system does not rule out the possibility that there are non-convolutional systems that also explain the effect. For example, the Deep Geometric Prior paper is nonconvolutional (it is entirely a MLP), and it also has the effect of fitting a smooth signal without training (subject to early stopping). The DGP could be applied to images as well, resulting in a nonconvolutional deep image prior. I think the authors should address more clearly and thoroughly the logical connection between their results and the explanation of the DIP, especially in light of the DGP.
The paper claims that the proposed method is more interpretable, and it would be nice if they could demonstrate this interpretability and the benefits it brings in solving image reconstruction problems.
As a result, I am inclined to recommend a weak reject, but if the concerns above are addressed, I envision my rating could improve upon rebuttal. |
iclr_2020_S1xWh1rYwB | Attribution methods provide insights into the decision-making of machine learning models like artificial neural networks. For a given input sample, they assign a relevance score to each individual input variable, such as the pixels of an image. In this work we adapt the information bottleneck concept for attribution. By adding noise to intermediate feature maps we restrict the flow of information and can quantify (in bits) how much information image regions provide. We compare our method against ten baselines using three different metrics on VGG-16 and ResNet-50, and find that our methods outperform all baselines in five out of six settings. The method's information-theoretic foundation provides an absolute frame of reference for attribution values (bits) and a guarantee that regions scored close to zero are not necessary for the network's decision. Deep neural networks have become state of the art in many real-world applications. However, their increasing complexity makes it difficult to explain the model's output. For some applications such as in medical decision making or autonomous driving, model interpretability is an important requirement with legal implications. Attribution methods (Selvaraju et al., 2017; Zeiler & Fergus, 2014; Smilkov et al., 2017) aim to explain the model behaviour by assigning a relevance score to each individual input variable. When applied to images, the relevance scores can be visualised as heatmaps over the input pixel space, thus highlighting salient areas relevant for the network's decision. | Summary
---
(motivation)
Lots of methods produce attribution maps (heat maps, saliency maps, visual explantions) that aim to highlight input regions with respect to a given CNN.
These methods produce scores that highlight regions that are in a vague sense "important."
While that's useful (relative importance is interesting), the scores don't mean anything by themselves.
This paper introduces another new attribution method that measures the amount of information (in bits!) each input region contains, calibrating this score by providing a reference point at 0 bits.
Non-highlighted regions contribute 0 bits of information to the task, so they are clearly irrelevant in the common sense that they have 0 mutual information with the correct output.
(approach - attribution methods)
An information bottleneck is introduced by replacing a layer's (e.g., conv2) output X with a noisy version Z of that output.
In particular, Z is a convex combination of the feature map (e.g., conv2) with Gaussian noise with the same mean and variance as that feature map.
The weights of the combination are found so they minimize the information shared between the input and Z and maximize the information shared between Z and the task output Y.
These weights are either optimized on
1) a per-image basis (Per-Sample) or
2) predicted by a model trained on the entire dataset (Readout).
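To spell out the blending operation described above, a per-sample version could look roughly like the sketch below; the names and the exact form of the information term (a Gaussian KL, which seems like the natural choice here) are my reconstruction, not necessarily the authors' implementation.

```python
# Sketch of the noise bottleneck as I understand it (my names, my simplified information term).
import jax
import jax.numpy as jnp

def bottleneck(x, lam_logits, key):
    lam = jax.nn.sigmoid(lam_logits)                      # per-location weights in [0, 1]
    mu, std = x.mean(), x.std()
    eps = mu + std * jax.random.normal(key, x.shape)      # noise matched to the feature map's statistics
    z = lam * x + (1.0 - lam) * eps                       # Z: convex combination of signal and noise
    # KL( N(lam*x + (1-lam)*mu, (1-lam)^2 std^2) || N(mu, std^2) ): an upper bound on I(X;Z)
    # under a Gaussian approximation of the marginal
    kl = (-jnp.log(1.0 - lam + 1e-6)
          + ((1.0 - lam) ** 2 * std ** 2 + (lam * (x - mu)) ** 2) / (2.0 * std ** 2) - 0.5)
    return z, kl.sum()

def objective(lam_logits, x, key, beta, task_loss_fn):
    z, info_cost = bottleneck(x, lam_logits, key)
    return task_loss_fn(z) + beta * info_cost             # fit the target while restricting information flow

# per-sample variant: optimize lam_logits by gradient descent on `objective` for a single image;
# the readout variant would instead train a network to predict lam_logits from features.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (8, 8, 64))                    # a toy "conv feature map"
grads = jax.grad(objective)(jnp.zeros_like(x), x, key, beta=10.0,
                            task_loss_fn=lambda z: jnp.mean(z ** 2))
```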
(approach - evaluation)
The paper uses 3 metrics with differing degrees of novelty:
1) The bbox metric rewards attribution methods that put a lot of mass in ground truth bounding boxes.
2) The original Sensitivity-n metric from (Ancona et al. 2017) is reported with a version that uses 8x8 occlusions.
3) Least relevant image degradation is compared to most relevant image degradation (e.g., from (Ancona et al. 2017)) to form a new occlusion-style metric.
(experiments)
Experiments consider many of the most popular baselines, including Occlusion, Gradients, SmoothGrad, Integrated Gradients, GuidedBP, LRP, Grad-CAM, and Pattern Attribution. They show:
1) Qualitatively, the visualizations highlight only regions that seem relevant.
2) Both Per-Sample and Readout approaches put higher confidence into ground truth bounding boxes than all other baselines.
3) Both Per-Sample and Readout approaches outperform all baselines almost all the time according to the new image degradation metric.
Strengths
---
The idea makes a lot of sense. I think heat maps are often thought of in terms of the colloquial sense of information, so it makes sense to formalize that intuition.
The related work section is very well done. The first paragraph is particularly good because it gives not just a fairly comprehensive view of attribution methods, but also because it efficiently describes how they all work.
The results show that proposed approaches clearly outperform many strong baselines across different metrics most of the time.
Weaknesses
---
* I'm not sure why the new degradation metric is a useful addition. What does it add that MoRF and LeRF don't capture on their own independently?
* I think [1] would be a nice addition to the evaluation section as it tests for something qualitatively different than the various metrics from section 4. It would also be a good addition to the related work.
Missing Details / Points of Confusion
---
* I think there's an extra p(x) in eq. 11 in appendix D.
* I think the variable X is overloaded. In eq. 1 it refers to the input (e.g., the pixels of an image) while in eq. 2 it refers to an intermediate feature map (e.g., conv2) even though it later seems to refer to the input again (e.g., eq. 3). Different notation should be used for intermediate feature maps and inputs.
Presentation Weaknesses
---
* In section 3.1 is lambda meant to be constrained in the range [0, 1]? This is only mentioned later (section 3.2) and should probably be mentioned when lambda is introduced.
* "indicating that all negative evidence was removed." I think this should read "indicating that only negative evidence was removed."
Suggestions
---
"The bottleneck is inserted into an early layer to ensure that the information in the network is still local"
I'd like this to be explored a bit more. Though deeper feature maps are certainly more spatially coarse, they still might be somewhat "local". To what degree do they lose localization information? My equally vague alternative intuition goes a bit differently: The amount of relevant information flowing through any spatial location seems like it shouldn't change that much, only the way it's represented should change. If the proposed visualizations were the same for every choice of layer then it would confirm this intuition. That would also be an interesting result because most if not all of the cited baseline approaches (where applicable) produce qualitatively different attributions at different layers (e.g., see Grad-CAM).
[1]: Adebayo, Julius et al. “Sanity Checks for Saliency Maps.” NeurIPS (2018).
Preliminary Evaluation
---
Clarity: The paper is clearly written.
Originality: The idea of using the formal notion of information in attribution maps is novel, as is the bbox metric.
Significance: This method could be quite significant. I can see it becoming an important method to compare to.
Quality: The idea is sound and the evaluation is strong.
This is a very nice paper in all the ways listed above and it should be accepted!
Post-rebuttal comments
---
The author responses and other reviews have only increased my confidence that this paper should be accepted. |
iclr_2020_BylPSkHKvB | Generating formal-language represented by relational tuples, such as Lisp programs or mathematical operations, from natural-language input is a challenging task because it requires explicitly capturing discrete symbolic structural information implicit in the input. Most state-of-the-art neural sequence models do not explicitly capture such structural information, limiting their performance on these tasks. In this paper we propose a new encoder-decoder model based on Tensor Product Representations (TPRs) for Natural-to Formal-language generation, called TP-N2F. The encoder of TP-N2F employs TPR 'binding' to encode natural-language symbolic structure in vector space and the decoder uses TPR 'unbinding' to generate, in symbolic space, a sequence of relational tuples, each consisting of a relation (or operation) and a number of arguments. On two benchmarks, TP-N2F considerably outperforms LSTM-based seq2seq models, creating new state-of-the-art results: the MathQA dataset for math problem solving, and the AlgoLisp dataset for program synthesis. Ablation studies show that improvements can be attributed to the use of TPRs in both the encoder and decoder to explicitly capture relational structure to support reasoning. | This paper proposes a sequence-to-sequence model for mapping word sequences to relation-argument-tuple sequences. The intermediate representation (output of the encoder) is a fixed-dimensional vector. Both encoder and decoder internally use a tensor product representation. The experimental results suggest that the tensor product representation is helpful for both the encoder and the decoder. The paper is interesting and the experimental results are positive, but in my opinion the exposition could use some substantial work. Fixing the most substantial flaws in the exposition would be sufficient to warrant an accept in my view.
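For context, the TPR binding/unbinding that the model is built on is, in my notation (the paper's section 2 may differ in details): the encoder binds filler vectors $\mathbf{f}_i$ to role vectors $\mathbf{r}_i$ as a sum of outer products, and the decoder unbinds with dual vectors $\mathbf{u}_j$,

$$ \mathbf{T} \;=\; \sum_i \mathbf{f}_i\, \mathbf{r}_i^\top, \qquad \mathbf{T}\,\mathbf{u}_j \;=\; \sum_i \mathbf{f}_i\,\big(\mathbf{r}_i^\top \mathbf{u}_j\big) \;=\; \mathbf{f}_j \quad \text{when } \mathbf{r}_i^\top \mathbf{u}_j = \delta_{ij}. $$

Taking the unbinding vectors as the columns of $R(R^\top R)^{-1}$ only requires the role matrix $R$ to have linearly independent columns, i.e. a left inverse, which is the point of the first minor comment below.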
Major comments:
I found the mix of levels of detail in the model specification in section 3 confusing. It would be extremely helpful to have a straightforward high-level mathematical description of the key parts of the encoder, mapping (which could be considered part of the encoder), and decoder in standard matrix-vector notation. While equations (7), (8), (9), (10), (11) and appendix A.2 go some way toward this, key high-level details seem to be missing, and I feel like the exposition would benefit from simply stating the matrix-vector operations that are performed in addition to describing their interpretation in terms of the semantics of the tensor product representation. Specific examples are noted below.
It would be helpful to be explicit about the very highest-level structure of the proposed model. If I understand correctly, it is a probabilistic sequence-to-sequence model mapping a word sequence to a probability distribution over relation-argument-tuple sequences. It uses an encoder-decoder architecture with a fixed-dimensional intermediate representation, and an autoregressive decoder using attention. Both the encoder and decoder are based on the tensor product representation described in section 2. Stating these simple facts explicitly would be extremely helpful.
Especially for the encoder, the learned representation is so general that there seems to be no guarantee that the learned roles and fillers are in any way related to the syntactical / semantic structure that motivates it in section 2. There doesn't seem to be any experimental investigation of the learned TPR in the encoder. If I understand correctly, the way encoder roles and fillers are computed and used is symmetric, meaning that the roles and fillers could be swapped while leaving the overall mapping from word sequences to relation-argument-tuple sequences unchanged. This suggests it is not possible to interpret the role and filler vectors in the encoder in an intuitive way.
Minor comments:
In section 2, "R is invertible" should strictly be "R has a left inverse".
In section 3.1.1, the claim that "we can hypothesize to approximately encode the grammatical role of the token and its lexical semantics" is pretty tenuous, especially given the apparent symmetry between learned roles and fillers in the encoder and given the lack of experimental investigation of the meaning of the learned encoder roles and fillers.
In section 3.1.2, my understanding is that the relation-argument tuple (R, A_1, A_2, A_3), say, is treated as a sequence of 3-tuples: (A_1, R, 1), (A_2, R, 2), (A_3, R, 3). Each of these 3-tuples is then embedded using learned embeddings (separate embeddings for argument, relation and position). If correct, it would be helpful to state this explicitly.
In section 3.1.2, it is stated that contracting a rank-3 tensor with a vector is equivalent to matrix-vector product, which is not the case.
In section 3.1.3, both high-level and low-level details of the MLP module are omitted. High-level, I presume that the matrix output by the encoder is reshaped to a large vector, the MLP is applied to this vector to produce another vector, then this is reshaped to a rank-3 tensor to input to the decoder. It would be helpful to state this. Low-level, the number of layers, depth and activation function of the MLP should be specified somewhere, at least in the appendix.
Did the authors consider using a bidirectional LSTM for the encoder? This might improve performance.
In section 3.1.2 and appendix A.2, why use the LSTM hidden state for subsequent processing rather than the LSTM output (which would be more conventional). The LSTM output is defined in appendix A.2 but appears not to be used for anything. Please clarify in the paper.
Did the authors consider passing the output of the reasoning MLP into every step of the tuple LSTM instead of just using it to initialize the hidden state?
It would be helpful to state the rank of the tensors H, B, etc in section 3.2.2.
In section 3.2.2, what does "are predicted by classifiers over the vectors..." mean? This seems quite imprecise. What is the form of the classifier? My best guess is that the vector a_i^t is passed through a small MLP with a final softmax layer which outputs a probability distribution over the 1-of-K representation of the argument. The main text says "more details are given in the appendix", but appendix A.2 just has "Classifier(a_1^t)". Please clarify in the paper.
What is the attention over in equation (9)? Attention needs at least two arguments, the query and the sequence being attended to. It seems that (9) only specifies one of these. It would also be helpful to be explicit about the form of attention used.
What is f_linear in (11)?
It seems unnecessarily confusing to switch notation for the arguments from A_1 in section 3.1.2 to a r g_1 in section 3.2.2, and similarly for the relations.
For the decoder tuple LSTM, how exactly is the previous relation-argument tuple (R, A_1, A_2, A_3), say, summarized? Are each of R, A_1, A_2 mapped to a vector, these vectors concatenated, then passed into the LSTM? Or is the positional decomposition into (A_1, R, 1), ... used? Please clarify in the paper.
Based on section 3.3, it seems that the model assumes that, in the decomposition of (R, A_1, A_2, A_3) into a sequence (A_1, R, 1), (A_2, R, 2), (A_3, R, 3) of 3-tuples at each decoder output step, the three 3-tuples are conditionally independent of each other and the three entries of each 3-tuple are conditionally independent of each other. Is this indeed assumed? If so, it would be helpful to state this explicitly. It seems like this is likely not true in practice.
Section 3.3 refers to "predicted tokens". Where are these predicted tokens in (9), (10) or (11)?
In section 3.3, it seems the loss at each decoder step is the log probability of the relation-argument tuple at that step. Thus, by the autoregressive property, the overall loss is the log probability of the sequence of relation-argument tuples. If so, it would be helpful to state both these facts explicitly.
Section 3 seems to be missing a section, which is how decoding is performed at inference time. For the output of the decoder at each step, is random sampling used, if so with a temperature, or is greedy decoding (selecting the most likely class, equivalent to a temperature of 0) used? Also, what is done if decoding outputs different R's for (A_1, R, 1), (A_2, R, 2), (A_3, R, 3)? The three R values here should be equal in order for this to represent a relation-argument tuple (R, A_1, A_2, A_3), but there is no guarantee the model will respect this constraint.
Unless I missed it (apologies if so), many experimental architecture details were omitted. For example, how many hidden cells were used for the LSTMs, etc, etc? These should at least be stated in the appendix.
It would be interesting to investigate how long input / output sequences need to be before the fixed-dimensional internal representation breaks down.
In section 4.1.1, it was not clear to me what "noisy examples" means. Does this mean that the dataset itself is flawed, meaning that the reference sequence of operations does not yield the reference answer? Please clarify in the paper.
In table 1, please state the total size of the fixed-dimensional intermediate representation for all systems. This seems crucial to ensure the systems can be meaningfully compared.
In figure 4, left figure, the semantic classes don't appear to be very convincingly clustered. (And it seems like K-means clustering could easily have selected a different clustering given a different random initialization.)
In appendix A.2, mathematical symbols are essentially meaningless without describing what they mean in words. Please explain the meaning of all the symbols that are not defined in terms of other symbols, e.g. w^t, T_{t-1}, ..., f_s m (is this softmax???), f_l i n e a r (what does this mean?), C o n t e x t, C l a s s i f i e r, etc, etc. C o n t e x t in particular doesn't even have a hint of a definition.
In (19) and (27), why would a temperature parameter be helpful? This can be absorbed as an overall multiplicative factor in the weight matrix of the previous linear layer. Is this temperature parameter learned during training (I presume so)? Please clarify in the paper.
Usually * is used for convolution, not simple multiplication (e.g. equation (17)).
Throughout the main body and appendix, there are lots of instances of poor spacing. For example, $f_{linear}$ should be written as something like $f_\text{linear}$ in latex to avoid it coming out as l i n e a r (which literally interpreted means l times i times n times e, etc). Please fix throughout. |
iclr_2020_HkeQ6ANYDB | Rethinking physics in the era of deep learning is an increasingly important topic. This topic is special because, in addition to data, one can leverage a vast library of physical prior models (e.g. kinematics, fluid flow, etc) to perform more robust inference. The nascent sub-field of physics-based learning (PBL) studies this problem of blending neural networks with physical priors. While previous PBL algorithms have been applied successfully to specific tasks, it is hard to generalize existing PBL methods to a wide range of physics-based problems. Such generalization would require an architecture that can adapt to variations in the correctness of the physics, or in the quality of training data. No such architecture exists. In this paper, we aim to generalize PBL, by making a first attempt to bring neural architecture search (NAS) to the realm of PBL. We introduce a new method known as physics-based neural architecture search (PhysicsNAS) that is a top-performer across a diverse range of quality in the physical model and the dataset. | === Overall comments ===
This paper proposes to generalize approaches to physics-based learning (PBL) by performing neural architecture search (NAS) over elements from PBL models found in the literature. This entails including physical inputs to the network and incorporating new operations into the NAS. I think the idea has merit and rather like it. However, there are several aspects of the work that could be improved. The technical novelty is small, as the extension of the existing NAS models to handle physical inputs and a few new operators is relatively straightforward. The experiments, while well designed, only explore uninteresting toy problems. While I appreciate the necessity of exploring the method's performance in a more controlled setting, a more impactful testbed would be more convincing. Another drawback of the evaluation is the lack of a proper statistical analysis of the results, given the small data and model sizes.
=== Relevance & Prior Work ===
+ The related work gives a good summary and categorization of prior work in physics-based learning
+ The problem (physics-based learning) is interesting and relevant to the community
=== Novelty & Approach ===
+ application of NAS to physics based learning
+ incorporation of physics solutions as inputs into differentiable NAS
+ creation of physics-informed operation sets to merge physical models into network
- technical steps to merge NAS and PBL are relatively straightforward
=== Evaluation ===
Two representative physical simulations were chosen for evaluation, in which elements of the physics model are intentionally omitted: 1) estimating the trajectory of a ball in the presence of wind and air resistance, and 2) a collision-speed simulation where two objects collide and sliding friction is not accounted for in the physics model.
The baselines consist of: a 3-layer MLP (data-driven), a 3-layer MLP with Physical Regularization, a 3-layer MLP with a residual connection to the physics prediction, an MLP with two input branches, one for the data and one for the physics predictions (Physical Fusion), and the Embedded Physics model, which estimates parameters for the physics model using a 3-layer MLP.
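As an aside, the residual baseline in particular is easy to pin down. My reading of it, as a hypothetical PyTorch-style sketch (the layer width and the physics-prediction interface are my own assumptions, not details from the paper):

    import torch.nn as nn

    class ResidualPhysicsBaseline(nn.Module):
        # learn a data-driven correction on top of the (possibly imperfect) physics prediction
        def __init__(self, d_in, d_out, width=64):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(d_in, width), nn.ReLU(),
                nn.Linear(width, width), nn.ReLU(),
                nn.Linear(width, d_out))
        def forward(self, x, y_physics):
            return y_physics + self.mlp(x)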
PhysicsNAS can combine elements of the baseline models, but the total number of nodes is limited to 5.
+ Experiments testing the dependence of the model on the number of samples and on the strength of the physical inconsistencies were conducted. In both cases, PhysicsNAS outperformed the best specialized physics models.
- The chosen testbed tasks are toy problems. While these types of experiments are necessary to understand the performance of the model, it would have been interesting to see PhysicsNAS applied to a more impactful task.
- Given the size of the networks and the training data, there is no reason why a more sophisticated statistical analysis of the results wasn’t performed (confidence intervals, t-tests, p-values). Similarly, a more complete set of experiments with a wider range of sample sizes could be provided with little effort.
=== Clarity & Other Comments ===
- “precious nodes” -> previous nodes |
iclr_2020_BJg15lrKvS | An intriguing phenomenon observed during training neural networks is the spectral bias, where neural networks are biased towards learning less complex functions. The priority of learning functions with low complexity might be at the core of explaining generalization ability of neural network, and certain efforts have been made to provide theoretical explanation for spectral bias. However, there is still no satisfying theoretical result justifying the underlying mechanism of spectral bias. In this paper, we give a comprehensive and rigorous explanation for spectral bias and relate it with the neural tangent kernel function proposed in recent work. We prove that the training process of neural networks can be decomposed along different directions defined by the eigenfunctions of the neural tangent kernel, where each direction has its own convergence rate and the rate is determined by the corresponding eigenvalue. We then provide a case study when the input data is uniformly distributed over the unit sphere, and show that lower degree spherical harmonics are easier to be learned by over-parameterized neural networks. | This paper studies the training of overparametrized neural networks by gradient descent. More precisely, the authors consider the neural tangent regime (NTK regime). That is, the weights are chosen sufficiently large and the neural network is sufficiently overparametrized. It has been observed that in this scenario, the neural network behaves approximately like a linear function of its weights.
In this regime, the authors show that the directions corresponding to larger eigenvalues of the neural tangent kernel are learned first. As this corresponds to learning lower-degree polynomials first, the authors claim that this explains the "spectral bias" observed in previous papers.
-I think that from a mathematical point of view, the main result of this paper is what one would expect intuitively:
When performing gradient descent with a quadratic loss where the function to be learnt is linear, it is common knowledge that convergence is faster along directions corresponding to larger singular values. Since, in the NTK regime, the neural network can be approximated by a linear function around the initialization, one expects the behavior predicted by the main results. From a theoretical perspective, I see the main contribution of the paper as making this statement precise.
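To spell out the "common knowledge" statement: for the least-squares objective L(w) = \frac{1}{2}\|Xw - y\|^2 with a minimizer w^*, gradient descent with step size \eta satisfies, along the i-th eigendirection v_i of X^\top X with eigenvalue \lambda_i,
    v_i^\top (w_t - w^*) = (1 - \eta \lambda_i)^t \, v_i^\top (w_0 - w^*),
so components with larger \lambda_i contract geometrically faster. The NTK analysis transfers exactly this picture to the eigendirections of the kernel.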
-I am skeptical about some of the implications for practitioners that are given by the authors:
For example, on p.5 the authors write "Therefore, Theorem 3.2 theoretically explains the empirical observations given in Rahaman et al. (2018), and demonstrates that the difficulty of a function to be learned by neural network should be studied in the eigenspace of neural tangent kernel." To the best of my knowledge, it is unclear whether practitioners train neural networks in the NTK regime (see, e.g., [1]). Moreover, I am wondering whether some of the assumptions of their theorem are really met in practice. For example, the required sample size for higher-order polynomials grows exponentially fast with the order and the required step size goes to zero exponentially fast. Does this really correspond to what is observed in practice? (Or is this a mere artifact of training in the NTK regime?) Is this what one observes in the experiments by Rahaman et al.?
I think the paper is not yet ready for being published.
1. There are many typos. Here is an (very incomplete) list.
-p. 2: "Su and Yang (2019)" improves the convergence..."
-p. 2: "This theorem gives finer-grained control on error term's"
-p. 2: "We present a more general results"
-p. 4: "The variance follows the principal..."
-p. 4: "...we will present Mercer decomposition in (the) next section."
2. I think that the presentation can be polished and many statements are somewhat unclear. For example, on p. 7 the authors write "the convergence rates [...] are exactly predicted by our theory in a qualitative sense."
The meaning of this sentence is unclear to me. Does that mean in a quantitative sense? To be honest, only considering Fig. 1 I am not able to assess whether the convergence rates of the different components are truly linear.
I decided for my rating of the paper because of the following reasons:
-I think that for a theory paper the results obtained by the authors are not enough, as they are rather direct consequences of the "near-linearity" of the neural network around the initialization.
-In my view, there is a huge gap between current theoretical results for deep learning and practice. For this reason, it is not problematic for me that it is unclear what the results in this paper mean for practitioners. (Apart from that, results for the NTK regime are interesting in their own right.) However, in my view, one should explain the limitations of the theory more carefully.
-The presentation of the paper needs to be improved.
References:
[1] A note on lazy training in supervised differentiable programming. L Chizat, F Bach - arXiv preprint arXiv:1812.07956, 2018
-----------------------------
I highly appreciate the authors' detailed response. However, I feel that the paper does not contain enough novelty to justify acceptance.
------
"Equation (8) in Arora et al., (2019b) only provides a bound on the whole residual vector, i.e., , and therefore cannot show different convergence rates along different directions."
When going through Section 4, I think that it is implicitly stated that one has different convergence along different directions.
-----
For this reason, I am not going to change my score. |
iclr_2020_BJedt6VKPS | In this work, we describe a set of rules for the design and initialization of wellconditioned neural networks, guided by the goal of naturally balancing the diagonal blocks of the Hessian at the start of training. We show how our measure of conditioning of a block relates to another natural measure of conditioning, the ratio of weight gradients to the weights. We prove that for a ReLU-based deep multilayer perceptron, a simple initialization scheme using the geometric mean of the fan-in and fan-out satisfies our scaling rule. For more sophisticated architectures, we show how our scaling principle can be used to guide design choices to produce well-conditioned neural networks, reducing guess-work. | The authors propose a new initialization scheme for training neural networks. The initialization considers fan-in and fan-out, to regularize the range of singular values of the Hessian matrix, under several assumptions.
The proposed approach gives important insights for the problem of weight initialization in neural networks. Overall, the method makes sense. However, I have several concerns:
- The authors do not consider more recent neural network designs such as normalization layers, skip connections, etc. It would be great if the authors could discuss how these layers would change the derivation of the initialization method. Also, preliminary experimental results using these layers are needed. Additionally, to me, normalization layers [Huang et al. Decorrelated Batch Normalization. CVPR 2018] implicitly precondition the Hessian matrix as well. It would be great if the authors also compare their approach to [Huang et al. 2018].
- The authors compared to other initialization schemes such as [He et al., 2015] and [Glorot and Bengio 2010] (for reference, I sketch the three initialization scalings after this list). But as the authors mentioned, there are also approaches that scale backpropagation gradients [Martens and Grosse, 2015; Grosse and Martens, 2016; Ba et al., 2017; George et al., 2018]. Since these methods are highly related to the proposed method, it would be great if the authors could show time complexities and performance differences of these methods as well.
- Experiments on the CIFAR-10 dataset with AlexNet seem not exciting: the proposed Preconditioned approach only outperforms the Fan-out approach marginally. I would say that training a [He et al. 2015]-initialized neural network for 500 more iterations than a preconditioned neural network yields a similar or better loss.
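For reference, the three weight scalings mentioned above, as a minimal numpy sketch (the gain constant 2.0 is borrowed from He et al. purely for illustration; the exact constant used in the paper may differ):

    import numpy as np

    def init_weights(fan_in, fan_out, mode="geometric", gain=2.0):
        # per-entry variance under the three schemes being compared
        if mode == "fan_in":          # He et al. (2015)
            var = gain / fan_in
        elif mode == "fan_avg":       # Glorot & Bengio (2010), up to the gain
            var = 2.0 * gain / (fan_in + fan_out)
        else:                         # geometric mean of fan-in and fan-out, as proposed
            var = gain / np.sqrt(fan_in * fan_out)
        return np.random.randn(fan_out, fan_in) * np.sqrt(var)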
Overall I think the work is very important and interesting. However, it lacks comprehensive comparison and consideration of more recent neural network layers.
Post Rebuttal Comments
I have read all reviewer comments and the author feedback. I appreciate that the authors addressed the skip connections in Appendix.
1. The authors agree that batch norm requires different initialization schemes that are not included in this paper.
2. I agree with the authors that their approach is complementary to the baseline optimization methods, and both approaches can be applied together. However, I still believe that it is informative to compare the two approaches because: (a) Both approaches address the same problem. Since the optimization-based approach adds complexity and computational overhead to the implementation, it would be great to show whether using the proposed approach eliminates the need for the optimization-based approach. (b) Is it necessary to use both approaches, or is one of them good enough?
3. I understand that strong experimental evidence is not always required. However, I believe that the new technical insights of the paper alone are not significant enough (partly for the reasons in point 1). Thus I was expecting stronger experimental evidence.
Overall I agree with reviewer 1 that the topic is interesting, but in the paper’s current form, it is not ready. I keep my initial rating of weak reject. |
iclr_2020_SJeLopEYDH | Most existing 3D CNN structures for video representation learning are clip-based methods, and do not consider video-level temporal evolution of spatio-temporal features. In this paper, we propose Video-level 4D Convolutional Neural Networks, namely V4D, to model the evolution of long-range spatio-temporal representation with 4D convolutions, as well as preserving 3D spatio-temporal representations with residual connections. We further introduce the training and inference methods for the proposed V4D. Extensive experiments are conducted on three video recognition benchmarks, where V4D achieves excellent results, surpassing recent 3D CNNs by a large margin. | [Summary]
The paper presents a video classification framework that employs 4D convolution to capture longer-term temporal structure than the popular 3D convolution schemes. This is achieved by treating the compositional space of local 3D video snippets as an additional dimension along which a separate convolution is applied. The 4D convolution is integrated into ResNet blocks and implemented by first applying 3D convolution to regular spatio-temporal video volumes and then the compositional-space convolution, so as to leverage existing 3D operators. Empirical evaluation on three benchmarks against other baselines suggests the advantage of the proposed method.
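My (possibly inaccurate) reading of such a residual 4D block, written as a PyTorch-style sketch; the tensor layout, the kernel sizes and the separable treatment of the clip dimension are my own assumptions for illustration, not details taken from the paper:

    import torch.nn as nn

    class V4DBlockSketch(nn.Module):
        # input x: (B, U, C, T, H, W) = batch, clips, channels, frames, height, width
        def __init__(self, c, u_kernel=3):
            super().__init__()
            self.conv3d = nn.Conv3d(c, c, kernel_size=3, padding=1)      # within each clip
            self.conv_u = nn.Conv3d(c, c, kernel_size=(u_kernel, 1, 1),
                                    padding=(u_kernel // 2, 0, 0))       # across clips only
        def forward(self, x):
            b, u, c, t, h, w = x.shape
            y = self.conv3d(x.reshape(b * u, c, t, h, w)).reshape(b, u, c, t, h, w)
            z = y.permute(0, 2, 1, 3, 4, 5).reshape(b, c, u, t, h * w)   # clip axis becomes a conv dim
            z = self.conv_u(z).reshape(b, c, u, t, h, w).permute(0, 2, 1, 3, 4, 5)
            return x + z                                                 # residual path keeps 3D features

The paper may well use a genuine 4D kernel spanning (U, T, H, W) rather than this separable variant; the sketch is only meant to fix the overall data flow being discussed.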
[Decision]
Overall, the paper addresses an important problem in computer vision (video action recognition) with an interesting approach. I find the motivation and solution reasonable (despite some questions pending more elaboration), and the results also look promising, so I give it a weak accept (conditional on the answers, though).
[Comments]
At the conceptual level, the idea of jointly modeling local video events is not novel, and dates back at least ten years to the paper “Learning realistic human actions from movies”, where temporal pyramid matching was combined with the bag-of-visual-words framework to capture long-term temporal structure. The problem with this strategy is that the rigid composition only works for actions that can be split into consecutive temporal parts with a prefixed duration and anchor points in time, an assumption later challenged by many works studying more complicated video events. It seems to me that the proposed framework also falls into this category, with a deep learning treatment. This line of work definitely deserves some discussion.
That said, I would like to see more analysis on the behavior of the proposed method under various interesting cases not tested yet. Despite the claim that the proposed method can capture long-term video patterns, the static compositional nature seems to work best for activities with well-defined local events and clear temporal boundaries. These assumptions hold mostly true for the three datasets used in the experiment, and also are suggested by results in table 2(e), where 3 parts are necessary to achieve optimal results. How does the proposed method perform in more complicated tasks such as
- action detection or localization (e.g., in benchmarks JHMDB or UCF101-24).
- complex video event modeling (e.g., recognizing activities in extended video of TRECVID).
Will it still compare favorably against the baselines considered?
Besides, on the computation side, an explicit comparison of complexity would make it easier to evaluate the performance relative to other state-of-the-art methods.
[Area to improve]
A better literature review reflecting the relevant previous work on video action recognition, especially on video compositional models.
Proof reading - The word in the title should be “Convolutional”, right? |
iclr_2020_Hkg5lAEtvS | While deep learning has shown tremendous success in a wide range of domains, it remains a grand challenge to incorporate physical principles in a systematic manner to the design, training and inference of such models. In this paper, we aim to predict turbulent flow by learning its highly nonlinear dynamics from spatiotemporal velocity fields of large-scale fluid flow simulations of relevance to turbulence modeling and climate modeling. We adopt a hybrid approach by marrying two well-established turbulent flow simulation techniques with deep learning. Specifically, we introduce trainable spectral filters in a coupled model of Reynoldsaveraged Navier-Stokes (RANS) and Large Eddy Simulation (LES), followed by a specialized U-net for prediction. Our approach, which we call Turbulent-Flow Net (TF-Net), is grounded in a principled physics model, yet offers the flexibility of learned representations. We compare our model, TF-Net, with state-of-theart baselines and observe significant reductions in error for predictions 60 frames ahead. Most importantly, our method predicts physical fields that obey desirable physical characteristics, such as conservation of mass, whilst faithfully emulating the turbulent kinetic energy field and spectrum, which are critical for accurate prediction of turbulent flows. | ## Summary
The authors propose to use learned spatio-temporal filtering and a convolutional model to predict the behavior of a turbulent fluid flow. Turbulent flow is a very difficult problem, of great interest to engineers and physical scientists, so the topic of the paper is certainly compelling.
This paper has several significant issues. The baselines the authors compare to are quite weak; many of them were designed for other purposes, such as video prediction. The authors claim significant improvements ("64.2% reduction in divergence") that I believe are calculated in an (unintentionally) misleading way. In fact, I think they missed the most important baseline---the ground truth simulation itself. Unless they can argue that the learned model is superior to the classical simulation in a significant way, it is hard to see the benefit of using something like TF-NET.
## Specific Comments
* Page 2: The claim "64.2% reduction in flow divergence, compared to the best baseline" seems misleading. In Figure 5, the constrained TF-NET is as low as ~590 but the ResNet is between ~610 and ~810. I am guessing that the authors meant a 64.2% reduction in *difference* from the Target model, but this should be clarified. (To illustrate the distinction with purely hypothetical numbers: if the Target were at 560, the best baseline at 640 and TF-NET at 590, the reduction in divergence itself would be (640-590)/640 ≈ 8%, while the reduction of the excess over Target would be (640-590)/(640-560) ≈ 63%.)
* Page 4: Is 'T' supposed to appear in the denominator of Equation 3? I would have expected this to be a normalizing factor resulting from integrating the filter G over the whole domain.
* Page 4: I would expect filters used for LES to be symmetric. Are any symmetry requirements being enforced on the learned filters?
* Page 4: It's not clear what TF-NET is outputting. I would expect it to output the time derivative of the velocity field, but this should be spelled out explicitly.
* Page 5: Since you are using the incompressible Navier-Stokes equations, I don't think the Mach number is relevant. IIUC this is only relevant for the propagation of shock waves in compressible flows.
* Page 7: It looks as if the Target model has significant divergence. Why is this? Should we expect this much divergence in the ground truth data?
* Page 8: In homogeneous isotropic turbulent flow, the energy spectrum is governed by Kolmogorov, so we know what the spectrum should look like. Is there an analytic result for RB convection? If so, could you include that on the plot?
* Page 8: The lines for U-Net and TF-NET are nearly indistinguishable in Figure 7. Could you change them?
* Page 8: It seems like it would be worth including the ResNet in the energy spectrum plots as well.
* What did the learned spatial and temporal filters look like? How do they compare to typical 'hand-chosen' filters?
* IIUC, the training, validation and test data all have identical Rayleigh number. Does the learned model generalize to higher/lower energy? This seems critical to making this sort of model useful.
* The paper doesn't describe a tuning process for any of the models. I would expect this to lead to significant improvement. In particular, how does the models' performance change as the weight on the divergence loss term is increased?
### Baselines
* I think the objective should be to show that TF-NET is superior to the ground-truth method in some way. Can it match the results of the ground truth simulator but do it faster, or with fewer resources?
* The authors say "we compare our model with a series of state-of-the-art baselines for turbulent flow prediction." However, most of these models are intended for video prediction or other tasks unrelated to turbulent flows. I wouldn't expect any of the baselines considered to do perform well in this context.
* Since TF-NET gets the benefit of a loss related to divergence, I would have expected the other architectures to get the same treatment. In fact, without this extra loss term, the ResNet architecture is competitive with TF-NET. |
iclr_2020_S1eRya4KDB | The word embedding models have achieved state-of-the-art results in a variety of natural language processing tasks. Whereas, current word embedding models mainly focus on the rich semantic meanings while are challenged by capturing the sentiment information. For this reason, we propose a novel sentiment word embedding model. In line with the working principle, the parameter estimating method is highlighted. On the task of semantic and sentiment embeddings, the parameters in the proposed model are determined by using both the maximum likelihood estimation and the Bayesian estimation. Experimental results show the proposed model significantly outperforms the baseline methods in sentiment analysis for low-frequency words and sentences. Besides, it is also effective in conventional semantic and sentiment analysis tasks. | This paper proposes a method to learn word embedding by incorporating additional sentiment information. The proposed method extends from D-GloVe by adding the probability of positive sentiment to the loss function. The paper presents three experiments: word similarity, word-level sentiment analysis, and sentence-level sentiment analysis. The experiments show that the method performs comparably with other baseline methods and outperforms in the low-frequency sentence setting (i.e. sentence containing lower frequency words).
I recommend rejecting this paper because (1) the writing is unclear and hard to follow, and (2) the experiment results are not convincing.
From what I can understand of the model part, many clarifications are needed, not to mention the writing style. I think the re-derivations of GloVe and D-GloVe are not helpful, as they cloud the main contribution of the paper. The authors should clearly highlight the differences between the main subjects of the experiments: MLESWE and BESWE. In addition, it is not clearly motivated why we need a Dirichlet prior for the sentiment variable.
While the claim is to learn better embeddings for rare words, the experiments show that the proposed methods have similar results to the previous work. The only gain we can observe is in the sentence-level experiments in which other factors could affect the performance. Thus, it is hard to draw a supportive conclusion.
Finally, the writing quality must be improved. The paper contains a lot of unrelated and redundant text (or it could be that I simply could not follow the paper).
1. I do not think Eq. 2 is representative of how the paper trains the model, nor of what it attempts to compare against.
2. As mentioned earlier, in sections 3.2 and 3.3 the re-derivation is not particularly helpful. I think the paper should put more emphasis on the novelty of the work.
3. Plots in the experimental results are illegible. Tables would be more suitable for presenting the results in Figures 1, 2, and 3.
I urge the authors to revise this paper and make sure it follows the formatting guidelines, especially for the citations. Finally, I'd recommend the authors have a professional English writer review the paper before submission.
iclr_2020_rJleKgrKwS | Rules over a knowledge graph (KG) capture interpretable patterns in data and can be used for KG cleaning and completion. Inspired by the TensorLog differentiable logic framework, which compiles rule inference into a sequence of differentiable operations, recently a method called Neural LP has been proposed for learning the parameters as well as the structure of rules. However, it is limited with respect to the treatment of numerical features like age, weight or scientific measurements. We address this limitation by extending Neural LP to learn rules with numerical values, e.g., "People younger than 18 typically live with their parents". We demonstrate how dynamic programming and cumulative sum operations can be exploited to ensure efficiency of such extension. Our novel approach allows us to extract more expressive rules with aggregates, which are of higher quality and yield more accurate predictions compared to rules learned by the state-of-the-art methods, as shown by our experiments on synthetic and real-world datasets. | This paper proposes an extension of NeuralLP that is able to learn a very restricted (in terms of expressiveness) set of logic rules involving numeric properties. The basic idea behind NeuralLP is quite simple: traversing relationships in a knowledge graph can be done by multiplicating adjacency matrices, and which rules hold and which ones don't can be discovered by learning an attention distribution over rules from data.
The idea is quite clever: numeric data properties of entities, such as age and height, can also be linked by relationships such as \leq and \geq, and those relations can be treated in the same way as standard knowledge graph relationships by the NeuralLP framework.
A major drawback in applying this idea is that the corresponding relational matrix is expensive both to materialise and to use within the NeuralLP framework (where matrices are mostly sparse). To this end, the authors make this process tractable by using dynamic programming and by defining such a matrix as a dynamic computation graph by means of the cumsum operator. Furthermore, the authors also introduce negated operators, again by defining the corresponding adjacency matrices by means of computation graphs.
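To make the tractability point concrete, here is a minimal numpy illustration of the idea as I understand it (my own sketch, not the authors' code): applying the dense comparison "adjacency" matrix C with C[i, j] = 1 iff x_i <= x_j to a vector only requires a sort and a cumulative sum, so C never has to be materialised.

    import numpy as np

    def apply_leq_matrix(x, v):
        # returns C @ v for C[i, j] = 1[x[i] <= x[j]], assuming distinct attribute values
        order = np.argsort(x)                      # ascending by attribute value
        suffix = np.cumsum(v[order][::-1])[::-1]   # suffix sums of v in that order
        out = np.empty_like(v, dtype=float)
        out[order] = suffix                        # entity i collects mass from all j with x_j >= x_i
        return out

    x = np.array([30.0, 10.0, 20.0])
    v = np.array([1.0, 2.0, 3.0])
    C = (x[:, None] <= x[None, :]).astype(float)
    assert np.allclose(C @ v, apply_leq_matrix(x, v))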
The authors evaluate on several datasets - two real-world and two synthetic - often showing more accurate results than the considered baselines.
One thing that puts me off is that, in Table 2, AnyBurl (the single baseline the authors considered other than the original NeuralLP) yields better Hits@10 values than Neural-LP-N, but the corresponding bold in the results is conveniently omitted.
Another concern I have is that the expressiveness of the learned rules can be somewhat limited, but this paper seems like a good start towards learning interpretable rules involving multiple modalities.
Missing references - authors may want to consider citing https://arxiv.org/abs/1906.06187 as well in Sec. 2 - it seems very related to this work. |
iclr_2020_HkxeThNFPH | We study continuous action reinforcement learning problems in which it is crucial that the agent interacts with the environment only through safe policies, i.e., policies that keep the agent in desirable situations, both during training and at convergence. We formulate these problems as constrained Markov decision processes (CMDPs) and present safe policy optimization algorithms that are based on a Lyapunov approach to solve them. Our algorithms can use any standard policy gradient (PG) method, such as deep deterministic policy gradient (DDPG) or proximal policy optimization (PPO), to train a neural network policy, while guaranteeing near-constraint satisfaction for every policy update by projecting either the policy parameter or the selected action onto the set of feasible solutions induced by the state-dependent linearized Lyapunov constraints. Compared to the existing constrained PG algorithms, ours are more data efficient as they are able to utilize both on-policy and off-policy data. Moreover, our action-projection algorithm often leads to less conservative policy updates and allows for natural integration into an end-to-end PG training pipeline. We evaluate our algorithms and compare them with the state-of-the-art baselines on several simulated (MuJoCo) tasks, as well as a real-world robot obstacle-avoidance problem, demonstrating their effectiveness in terms of balancing performance and constraint satisfaction. | Summary:
The authors propose ideas to perform safe RL in continuous action domains by modifying Policy Gradient (PG) algorithms, either constraining the policy parameters or constraining the actions selected by PG with a surrogate/augmented state-dependent objective. The paper is well motivated and the experiments (although I have some reservations about the setup) demonstrate the efficacy of the proposed method.
Review:
--> Introduction
I do not agree with the statement that value function based algorithms are restricted to discrete action domains, especially when you rely on “ignoring function approximation errors” for some of your claims.
Again, the argument for switching from value functions to PG holds for traditional RL/control theory, but it is not valid here. (Your methods rely on Q(s, a), which is an action-value function; moreover, the constraint in equation 3 is an integral of Q over all actions, which would be the value function in the traditional definition.)
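For concreteness, the standard identity I am referring to is
    V^\pi(s) = \int_{\mathcal{A}} \pi(a \mid s)\, Q^\pi(s, a)\, da,
so a constraint expressed through that integral is effectively a state-value constraint, even though the optimization itself is carried out with policy gradients.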
Note: this is explained very well towards the end in the Appendix B, but this is a review of the paper and not Appendix B or C.
--> Section 2
Section 2.3: I would strongly advise the authors to rewrite this. The section reads like it was copied as-is from the reference [Chow et al. 2018], especially the way the Lyapunov function is defined, and the language and arguments are almost the same. Some sentences cite the reference, but conclusions drawn from these are not cited; are you claiming that these conclusions are original to this paper?
It is not clear to me how the feasibility of the initial policy pi_0 is ensured. Did I miss this somewhere?
--> Section 3
Section 3 is pleasant to read and very easy to understand; however, the same cannot be said of
section 3.1. I had to spend significant time reading 3.1 and I am still not sure I have understood it very well.
Experiments:
I don’t think HalfCheetah-Safe is actually a useful experiment. Limiting the joint torques is perfectly understandable, but just limiting speed and getting smooth motion could simply be an artifact of the simulation environment. Are both constraints applied simultaneously (torque and speed)? It is unclear from the text.
I am not sure CPO without line search is actually a fair comparison. The line search may deem some of the actions unsafe, and therefore I would presume the original CPO to be less prone to constraint violation than the proposed modification in your experiments. Again, PPO is more heuristic than TRPO, which makes it hard to compare like for like: PPO might give higher rewards, but constraint violations may increase as well. This is an important point for safe RL, I feel.
Figure 6
Can you be more specific about what Figure 6 is showing? "Constraint"? Is this a constraint violation count, or the cumulative sum of constraint slack over the whole trajectory?
Not part of assessment:
Unclear Statements:
Page 7, DDPG Vs PPO: explain clearly what you mean by “covariate shift” or remove the statement altogether.
Page 7, section 5, second paragraph, “The actions are the linear …. center of mass”: I couldn’t understand this. What do you mean by the actions being velocities?
Minor points (Language, Typos):
Page 3, last paragraph: "Chow et al." is repeated. I can see why this happens there, but I suggest editing to avoid it. [This also occurs in the intro paragraph; there it is just a typo and should be rectified.]
Figure 6: Caption labels are incorrect.
iclr_2020_BklC2RNKDS | Formal verification of machine learning models has attracted attention recently, and significant progress has been made on proving simple properties like robustness to misclassification under perturbations of the input features. In this context, it has also been observed that folding the verification procedure into training makes it easier to train verifiably robust models. In this paper, we extend the applicability of verified training by extending it to (1) recurrent neural network architectures and (2) complex specifications that go beyond simple adversarial robustness, particularly specifications that capture temporal properties like requiring that a robot periodically visits a charging station or that a language model always produces sentences of bounded length. Experiments show that while models trained using standard training often violate desired specifications, our verified training method produces models that both perform well (in terms of test error or reward) and can be shown to be provably consistent with specifications. | This paper extends bound propagation based robust training method to complicated settings where temporal specifications are given. Previous works mainly focus on using bound propagation for robust classification only. The authors first extend bound propagation to more complex networks with gates and softmax, and designed a loss function that replies on a lower bound for quantitative semantics of specifications over an input set S. The proposed framework is demonstrated on three tasks: Multi-MNIST captioning, vacuum cleaning robot agent and language generation. The authors formulate specifications using temporal logic, train models with bound propagation to enforce these specifications, and verify them after training.
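To fix terminology, here is what plain interval bound propagation does for one affine layer plus ReLU, as a minimal numpy sketch (standard IBP, not the authors' exact scheme, which additionally has to handle gates, softmax and the temporal-logic semantics):

    import numpy as np

    def ibp_affine(W, b, lo, hi):
        # propagate an elementwise box [lo, hi] through x -> W @ x + b
        mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
        new_mid = W @ mid + b
        new_rad = np.abs(W) @ rad
        return new_mid - new_rad, new_mid + new_rad

    def ibp_relu(lo, hi):
        # monotone activations just map the interval endpoints
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)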
My questions regarding this paper are mostly on the three demonstrated tasks:
1. For the Multi-MNIST dataset, I have the following questions:
1(a). In Table 2, it is surprising that even with a perturbation of 0.5, the verified and adversarial accuracy is very high. At perturbation epsilon=0.5 (assuming pixel values in [0, 1]), it should be possible to perturb the entire image to uniform gray (value 0.5), so I believe the accuracy should be very low here. It is hard to believe that under this setting the verified accuracy is still 99%.
1(b). For Table 3, nominal accuracy should also be reported.
1(c). Additionally, how do you define the nominal accuracy here? An example is nominally correct when all digits are predicted correctly in the sequence, or just when the sequence length is predicted correctly?
1(d). For the termination accuracy, do we only care about the sequence length being predicted correctly, or does it also cover the case that all digits in the sequence are predicted correctly? If it only concerns the sequence length, this property is a little bit weak.
2. For the RL Robot agent experiment, I have the following questions:
2(a). Because T=10, are you saying that we can only guarantee that the battery is always recharged for rollouts of fewer than 10 steps? After 10 steps beyond the initial position, can we get any guarantees? A guarantee for only 10 steps seems too restrictive.
2(b). Are all the properties only verified assuming that the agent starts from the center? I think this assumption is probably also too strong in practice.
2(c). Since the %verified cells reported in Table 4 are all 100%, it is probably better to make the problem more challenging, by increasing T or considering different initial positions. It is important to show when the performance of the proposed method starts to degrade, in order to understand its power.
3. For the language generation experiment, the perplexity of the verified training model looks significantly worse than nominal or sampled models. With a perplexity as high as this, I believe the model actually produces garbage. Can you provide some examples of generated texts? I feel language generation is probably not a suitable task for the proposed training method.
Other minor issues:
1. Table 1 should have some horizontal lines - it is hard to align the works with categories on the right.
2. Several papers appear multiple times in references, including "Differentiable abstract interpretation for provably robust neural networks", "Towards fast computation of certified robustness for relu networks" (and probably others). Also on page 2, Shiqi et al., should be Wang et al. (Shiqi is the first name).
3. I feel the writing is a bit rushed and the authors should make a few more passes on the paper.
This paper makes valid technical contributions; in particular, the conversion from STL specifications to lower bounds of the quantitative semantics is interesting. Although bound-propagation-based robust training is simple to extend to softmax/GRU with interval analysis, applying robust training techniques to the three interesting applications is a good contribution. Since the main contribution of this paper is the empirical results on the three tasks, my concerns regarding the experiments need to be addressed before I can vote for accepting this paper. Also, because this paper uses 10 pages, I am expecting the paper to meet a higher standard. Thus, I am voting for a weak reject at this time.
****** After author response
The author response addressed some of my concerns. However, I do believe this paper is relatively weak in contribution; in particular, the experiments could be done more thoroughly. I also appreciate that the authors reduced the paper length to 8 pages. I am okay with accepting this paper as it does have some interesting bits, but it is clearly on the borderline and can be further improved.
iclr_2020_rJe9fTNtPS | Minwise Hashing (MinHash) is a fundamental method to compute set similarities and compact high-dimensional data for efficient learning and searching. The bottleneck of MinHash is computing k (usually hundreds) MinHash values. One Permutation Hashing (OPH) only requires one permutation (hash function) to get k MinHash values by dividing elements into k bins. One drawback of OPH is that the load of the bins (the number of elements in a bin) could be unbalanced, which leads to the existence of empty bins and false similarity computation. Several strategies for densification, that is, filling empty bins, have been proposed. However, the densification is just a remedial strategy and cannot eliminate the error incurred by the unbalanced load. Unlike the densification to fill the empty bins after they undesirably occur, our design goal is to balance the load so as to reduce the empty bins in advance. In this paper, we propose a load-balanced hashing, Amortization Hashing (AHash), which can generate as few empty bins as possible. Therefore, AHash is more load-balanced and accurate without hurting runtime efficiency compared with OPH and densification strategies. Our experiments on real datasets validate the claim. All source codes and datasets have been released on GitHub anonymously 1 . | *** Summary ***
MinHash is a well-known method for approximating set similarities in terms of the Jaccard similarity. The idea is to apply k random permutations (hashes) to all elements of the sets and check how often the two sets agree on the minimum hash value. Larger values of k yield more accurate estimates of the set similarities but require more time to compute. One Permutation Hashing (OPH) aims to reduce the number of hash computations per element to one by maintaining bins. However, some of those bins may remain empty, which negatively influences the similarity estimate. Optimal OPH (OOPH) hashes empty bins to non-empty bins and effectively reuses bins. This paper proposes Amortization Hashing (AHash), which reduces the occurrence of empty bins and thus leads to better similarity estimates.
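To fix notation, a simplified numpy illustration of the two ingredients being compared (random hash functions stand in for true permutations, and the binning rule in the OPH part is my own choice for illustration, not necessarily the one used in the paper):

    import numpy as np

    def minhash_jaccard(A, B, k=128, seed=0):
        # A, B: sets of integer ids; classical MinHash estimate of |A ∩ B| / |A ∪ B|
        rng = np.random.default_rng(seed)
        a, b = rng.integers(1, 2**31 - 1, size=(2, k))
        p = 2**31 - 1
        h = lambda S: np.min((a[:, None] * np.array(sorted(S)) + b[:, None]) % p, axis=1)
        return float(np.mean(h(A) == h(B)))

    def oph_sketch(S, k=128, seed=0):
        # One Permutation Hashing: hash each element once, keep the minimum per bin;
        # bins left as None are the empty bins that densification / AHash deal with
        rng = np.random.default_rng(seed)
        a, b, p = int(rng.integers(1, 2**31 - 1)), int(rng.integers(0, 2**31 - 1)), 2**31 - 1
        sketch = [None] * k
        for e in S:
            v = (a * e + b) % p
            i = v % k
            if sketch[i] is None or v < sketch[i]:
                sketch[i] = v
        return sketch

My understanding is that AHash changes how elements are distributed over the k bins so that the load is more balanced and fewer bins stay empty in the first place.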
*** Evaluation ***
The paper proposes an interesting idea for approximating set similarities much faster than MinHash. However, I have some issues with the submission.
I believe that the manuscript has limited impact. The approach performs on par with or marginally better than OOPH in the first and only reasonable experiment (4.2). As the authors state themselves at the page break 9/10, the advantage of AHash vanishes for small and large values of k. Hence, AHash only benefits from moderate choices of k. Moreover, I can see that OOPH might have a minor problem in estimating the set similarities which AHash aims to fix, but why should it outperform MinHash in terms of accuracy? Why are the pairs of sets only chosen from RCV1? Why those particular set sizes? Why does no plot show standard deviations/errors?
The remaining experiments yield very limited insight. Considering a linear SVM on standard datasets where the test error is >99.8% seems rather uninformative. In addition, the most important parameter k is held fixed at an arbitrary value. The same holds for b. Since AHash only benefits from moderate sizes of k, why was k chosen in favor of AHash? The performance should definitely be shown as a function of k. Instead, the least important parameter (C) is varied; this should have been done in a proper cross-validation. Similar arguments hold for the nearest-neighbor search. What is the query set being used?
There are more flaws within the manuscript. The mathematical presentation is rather poor. The theorems lack text and assumptions and solely consist of equations. The corresponding proofs are also short on text and hard to follow. Unfortunately, there is no analysis of the expected error as a function of k. The proof of Theorem 3 is almost two pages and should be moved to the supplementary material since it does not provide much insight; it just distracts the reading flow. In addition, every equation is numbered but none is ever referenced. The citation style (numbers in round brackets) is really uncommon and can be easily confused with equation numbers. Most importantly, I want to note that a different font was used and that the spacing was clearly tricked in several places (e.g. within Section 4). This makes it especially hard to judge whether the manuscript has the correct length.
*** Further Comments ***
- The font was changed. It does not match the font of the other submissions.
- The spacing is tricked in several places, especially in Section 4.
- Links [1,29] should not be references but footnotes.
- Citations should never be in round brackets like (1), because they can be confused with equation numbers. Instead they should be in square brackets like [1] or, more preferably, the natbib package should be used as in the ICLR style guidelines.
- What does OOPH stand for? It is never stated.
- Every equation has a number, but none is ever referenced.
- Math/Equations are part of the text and should be treated as such, i.e., there should be proper punctuation marks.
- What is a 2-universal hashing?
- Algorithm 1: "output range" sounds like an interval whereas the number of distinct hash values is meant.
- "(14) proposed", no past tense
- Instead of "(11) proposes", please use "Shrivastava and Li [11] propose"
- Why "Theorem 1" and "Proof 3.1"?
- Why are the theorems lacking the assumptions and text? They basically consist of equations.
- Theorem 3 should have a "less or equal" instead of a "strictly less".
- Eq. (40): "0andm"
- Why does Proof 3.3 have a end of proof sign (not right-aligned) but the other proofs don't?
- None of the experimental results shows standard deviations/errors although the experiments are repeated several times. Why? It would be also nice to see whether the approximation tends to over- or underestimate J. This could be done with a violin plot.
- How are the pairs of sets in Section 4.2 chosen and why only from RCV1? This seems to be the most important experiment.
- Why is k (and b) fixed to an arbitrary value in the remaining experiments? Please select C within a proper cross-validation.
- There are a lot of enumerations which unnecessarily make the manuscript longer, e.g. in Sections 1.4, 2.1 and 4.1. In addition, the proof of Theorem 3 almost takes two pages but is not super informative. It should be moved to the supplementary material. This in combination with the font mismatch makes it difficult to determine the real length of the submission.
- It is nice that the source code is published online, but uncommented C++ code is not really helpful.
iclr_2020_HJxN0CNFPB | Polynomial neural networks are essentially polynomial functions. These networks are shown to have nice theoretical properties by previous analysis, but they are hard to train when their polynomial orders are high. In this work, we devise a new type of activations and then create the Ladder Polynomial Neural Network (LPNN). The LPNN has a feedforward structure. It provides good control of its polynomial order because its order increases by 1 with each of its hidden layers. The new network can be treated with deep learning techniques such as batch normalization and dropout. As a result, it can be well trained with generic optimization algorithms regardless of its depth. In our empirical study, deep LPNN models achieve good performances in a series of regression and classification tasks. | This paper proposes Ladder Polynomial Neural Networks (LPNNs) that use a new type of activation primitive -- a product activation -- in a feed-forward architecture. Unlike other polynomial architectures that grow in the order exponentially with network depth, the proposed approach gives explicit control over the order and smoothness of the network output and enables training with standard techniques.
The proposed architecture is closely related to a decomposition of a k’th order multivariate polynomial function
[T, x^{\otimes k}] = \lambda^\top (A x \odot A x \odot \cdots \odot A x) = \lambda^\top (A x)^{\odot k}
where T is a symmetric tensor of polynomial coefficients and [\cdot,\cdot] denotes contraction. This is a shallow (one-layer) architecture and is sometimes referred to as a Waring decomposition.
In this paper, the authors propose a specific chain factorization of the polynomial (Eq 5 in the paper), where they write the factors recursively, that they name as a ladder polynomial neural network.
h^\ell = (W_\ell h^{\ell-1} \odot V^{\ell} x)
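In code the whole model is tiny; here is a minimal numpy sketch of my reading of this recursion (the layer sizes, the first-layer convention and the final linear readout are my guesses, not taken from the paper):

    import numpy as np

    def lpnn_forward(x, Ws, Vs, w_out):
        # ladder polynomial network: each layer multiplies the running product
        # elementwise by a fresh linear map of the input, raising the degree by one
        h = Vs[0] @ x                      # degree-1 start
        for W, V in zip(Ws, Vs[1:]):
            h = (W @ h) * (V @ x)          # h^l = (W_l h^{l-1}) elementwise-times (V^l x)
        return w_out @ h                   # linear readout of the final features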
The ladder architecture is very closely related to tensor trains (https://epubs.siam.org/doi/10.1137/090752286). I found it surprising and somewhat alarming that this literature is not being cited as these methods are also quite well known in deep learning.
I like the smoothness analysis of section 3.1 -- the proof is quite easy to follow and direct. I would be quite surprised if this result were not known in the literature in some other form, but I don’t recall seeing it. On the other hand, it seems to be inevitably very loose for a deep ladder network unless the network models the zero function. It would have been a valuable addition to the experimental section if this bound had been illustrated numerically on synthetic examples.
In 3.2, the authors say that the objective is multiconvex -- I would argue that it is multilinear (apart from the regularization term that is introduced later). The observation in 3.3 that batch normalization or dropout can be used for this model is perhaps tangential to the main argument. These are investigated in the experimental section, but I don’t see a clear conclusion. Section 3.4 must include links to tensor decompositions beyond factorization machines.
Overall, I think the paper has some merit and could be interesting for some readers, despite the fact that the contribution is not very original and the treatment could be improved in many ways. |
iclr_2020_BJxyzxrYPH | We address the problem of reconstructing a matrix from a subset of its entries. Current methods, branded as geometric matrix completion, augment classical rank regularization techniques by incorporating geometric information into the solution. This information is usually provided as graphs encoding relations between rows/columns. In this work we propose a simple spectral approach for solving the matrix completion problem, via the framework of functional maps. We introduce the zoomout loss, a multiresolution spectral geometric loss inspired by recent advances in shape correspondence, whose minimization leads to state-of-the-art results on various recommender systems datasets. Surprisingly, for some datasets we were able to achieve comparable results even without incorporating geometric information. This puts into question both the quality of such information and current methods' ability to use it in a meaningful and efficient way. | This paper proposes a new method for geometric matrix completion based on functional maps. The proposed algorithm is a simple shallow and fully linear network. Experimental results demonstrate the effectiveness of the proposed method.
The proposed method is new and has shown good empirical results. The paper also points out a new way to interpret matrix completion. On the other hand, the proposed method seems ad hoc, and there is no clear evidence why it is better than other baselines apart from the empirical results. The paper also has some clarity issues, making it hard to understand. I vote for a weak reject of the paper in its current form and would like to increase my score if the following questions can be clearly answered.
1. Why do we need to propose the algorithm? Is it because we have the functional maps technique, motivated by shape correspondence, and we can see some connection of such a technique with matrix completion? If that is the case, we surely can have a new algorithm based on such a technique. But I still cannot understand why the method works, at least in an intuitive way.
2. What is the sample complexity of the proposed matrix completion algorithm?
The introduction of the paper is poorly written. The first paragraph and the third one both contain some introduction to matrix completion, which results in a lot of redundant information. The second paragraph and the fourth one are redundant in the same way, since they both focus on geometric matrix completion. I think that besides introducing what matrix completion is and what geometric matrix completion is, the introduction should focus more on the motivation for proposing the algorithm. However, I can only find some motivation at the end of the second paragraph (some simple models need to be proposed) and in the fifth paragraph (“The inspiration of our paper”). The introduction needs to be re-organized to provide more useful information about the paper rather than a literature review.
There are some unclear/inaccurate/subjective statements in the introduction. For example, “Self-supervised learning” needs a reference. Why geometric matrix completion generalizes the standard deep learning approaches is not clear. What is meant by “their design is … cumbersome and non-intuitive”? Shape correspondence is not explained until very late in the paper. Also, there are some unclear issues beyond the introduction; for example, what is meant by “the product graph”? All these issues need to be clarified before the paper can be accepted.
---------------------------------------------------
Thank you for the detailed rebuttal. For Q1, it clearly explains how the method works. However, it is still not clear why the method works. I also have another concern after reading the rebuttal: if shape correspondence is not that important, why make it an important motivation in the paper? For Q2, it would be interesting to see some theoretical results on the sample complexity, rather than an experimental one. The paper would also be much better if the clarity issues could be addressed. Even though I will not vote for acceptance this time, I am looking forward to a revised version in the future.
iclr_2020_BkgqExrYvS | The population model is a standard way to represent large-scale decentralized distributed systems, in which agents with limited computational power interact in randomly chosen pairs, in order to collectively solve global computational tasks. In contrast with synchronous gossip models, nodes are anonymous, lack a common notion of time, and have no control over their scheduling. In this paper, we examine whether large-scale distributed optimization can be performed in this extremely restrictive setting. We introduce and analyze a natural decentralized variant of stochastic gradient descent (SGD), called PopSGD, in which every node maintains a local parameter, and is able to compute stochastic gradients with respect to this parameter. Every pair-wise node interaction performs a stochastic gradient step at each agent, followed by averaging of the two models. We prove that, under standard assumptions, SGD can converge even in this extremely loose, decentralized setting, for both convex and non-convex objectives. Moreover, surprisingly, in the former case, the algorithm can achieve linear speedup in the number of nodes n. Our analysis leverages a new technical connection between decentralized SGD and randomized load-balancing, which enables us to tightly bound the concentration of node parameters. We validate our analysis through experiments, showing that PopSGD can achieve convergence and speedup for large-scale distributed learning tasks in a supercomputing environment. | This paper proposes to use population algorithms as a mechanism for implementing distributed training of deep neural networks. The paper makes some claims about the relationship to previous work on (asynchronous) gossip algorithms that appear to be incorrect. In fact, the proposed PopSGD algorithm is very closely related to other methods in the literature, including AD-PSGD (Lian et al. 2017b) and SGP (Assran et al. 2018). I recommend it be rejected due to lack of novelty and missing connections to much related work.
The introduction (page 3) mentions that the "matrix characterization is not possible in the population model." Here the "matrix characterization" refers to the typical approach by which gossip algorithms (synchronous or asynchronous) are formulated and analyzed. I would appreciate it if the authors could elaborate on this claim. In the study of gossip algorithms, the organization of time into "global rounds" is purely for the sake of analysis; a global, synchronized clock is not required to implement these methods. In fact, the description of the setup appears to be very similar to the asynchronous time model used to analyze "randomized gossip algorithms" (see the well-cited paper by Boyd, Ghosh, Prabhakar, and Shah). In the PopSGD case, the choice is simply to allow the complete graph (i.e., any agent can interact with any other agent) rather than restricting interactions of a given agent to a subset of the other agents (i.e., its neighbors).
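To be explicit about what I mean by the matrix characterization: in the randomized gossip literature, if $x^t \in \mathbb{R}^n$ stacks the agents' values, the interaction at asynchronous tick $t$ between a randomly chosen pair $(i, j)$ is written as

$$ x^{t+1} = W(t)\, x^{t}, \qquad W(t) = I - \tfrac{1}{2}(e_i - e_j)(e_i - e_j)^\top, $$

where $e_i$ is the $i$-th standard basis vector. The ticks come from a virtual global clock used purely for analysis; no agent needs access to a shared clock to run the protocol, which is exactly the situation in the population model.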
Let me elaborate on the ways in which PopSGD is similar to AD-PSGD and SGP. PopSGD involves interactions between randomly drawn pairs of agents. The AD-PSGD algorithm of Lian et al. (2017b) also performs updates between pairs of agents drawn randomly at every step. The definition of the PopSGD interaction in (1.1) (or equivalently Alg 1) implies that when agents i and j interact, neither i nor j can interact with another agent until the current interaction completes. The main difference appears to be that in Lian et al. (2017b) agents are organized into a bipartite graph where $n/2$ nodes are "active" and initiate interactions with one of the other $n/2$ "passive" nodes (drawn randomly). This is done for practical reasons - to avoid deadlocks.
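For concreteness, the PopSGD interaction described in the abstract (a local stochastic gradient step at each of the two sampled agents, followed by averaging of their models) amounts to something like the following sketch; the function and variable names, and the way the step size is passed in, are mine rather than the paper's.

```python
def popsgd_interaction(x, i, j, stochastic_grad, eta):
    """One PopSGD-style interaction between sampled agents i and j.

    x is an (n, d) NumPy array of local models, stochastic_grad(x_i) returns
    an independent stochastic gradient at x_i, and eta is the step size.
    """
    # Each of the two sampled agents takes a local SGD step ...
    x[i] = x[i] - eta * stochastic_grad(x[i])
    x[j] = x[j] - eta * stochastic_grad(x[j])
    # ... and then the pair averages their models.
    avg = (x[i] + x[j]) / 2.0
    x[i], x[j] = avg, avg.copy()
    return x
```

Written out this way, the family resemblance to AD-PSGD's randomly paired updates (and to symmetric gossip averaging) is, I think, difficult to miss.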
I also believe that PopSGD can be viewed as a particular instance of the Overlap-SGP algorithm proposed in Assran et al. (2018). Overlap-SGP, the way it is described, makes use of one-directional interactions (agent i may receive and incorporate information from agent j without the reverse happening simultaneously). This was also introduced for practical reasons. It is possible for multiple interactions to happen simultaneously, and the pattern of interactions may vary over time. There is nothing in the analysis, however, that prevents one from restricting to symmetric interactions, in which case one could recover the symmetric updates of PopSGD. To compensate for one-directional interactions, Overlap-SGP tracks an additional variable (the weight, or denominator). However, in the case where interactions are always symmetric as in PopSGD, the corresponding update matrices will always be doubly-stochastic, and in this case the weights are always equal to 1. Thus PopSGD really is identical to Overlap-SGP in this special restricted case where interactions are always pair-wise and symmetric. Moreover, Assran et al. (2018) prove that Overlap-SGP achieves a linear speedup in the smooth non-convex setting.
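To spell out the doubly-stochastic remark: restricted to the two interacting agents, the symmetric averaging step is

$$ \begin{pmatrix} x_i^{+} \\ x_j^{+} \end{pmatrix} = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix} \begin{pmatrix} x_i \\ x_j \end{pmatrix}, $$

and this block (with the identity everywhere else) is symmetric and doubly stochastic, so the push-sum weights tracked by Overlap-SGP remain identically equal to one.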
The experiments don't provide any comparison with other related methods, and the discussion in the introduction isn't sufficient to convince me that there are significant differences between these methods. In the experiments, I also wanted to ask about the mult constant. If it is really possible to achieve linear scaling, wouldn't one hope to be able to get away with mult=1?
The decreasing learning rate schedule used in the description and analysis of PopSGD seems very restrictive. Specifically, in the training of deep neural networks it is common to use much different learning rate schedules. Is it fundamentally not possible to do so with PopSGD-type models, or is it just a limitation of the current analysis approach (specifically for convex functions)? What learning rate scheme was used in the experiments?
Finally, the introduction (p3) emphasizes that it is the population gradient perspective, and the connection to load-balancing processes, which enable one to achieve linear scaling. I disagree with this statement. While I do agree that convexity alone is not sufficient, the key assumption made here (as well as in other work, such as that of Lian et al.), is that all agents draw gradients from the same distribution; i.e., that all agents have access to independent and identically distributed stochastic gradient oracles. In fact, this is stronger than the assumptions made in Lian et al. (2017a and 2017b), and Assran et al. (2018), where it is only assumed that the gradient oracles at each agent are similar, but not necessarily identical. |
iclr_2020_S1l6ITVKPS | With a view to bridging the gap between deep learning and symbolic AI, we present a novel end-to-end neural network architecture that learns to form propositional representations with an explicitly relational structure from raw pixel data. In order to evaluate and analyse the architecture, we introduce a family of simple visual relational reasoning tasks of varying complexity. We show that the proposed architecture, when pre-trained on a curriculum of such tasks, learns to generate reusable representations that better facilitate subsequent learning on previously unseen tasks when compared to a number of baseline architectures. The workings of a successfully trained model are visualised to shed some light on how the architecture functions. | This paper presents PrediNet: an architecture explicitly designed to extract representations in the form of three-place predicates (relations). They evaluate the architecture on a visual relational task called the "Relations Game", which involves comparing Tetris-like shapes according to their appearance, relative positions, etc. They show that their architecture leads to useful, generalizable representations, in the sense that they can be used for new tasks without retraining.
I think this paper contains a number of unusual and interesting ideas but is let down by its presentation. The writing is good, but provides very little intuition for why we should expect this approach to work (aside from its connection to equation 1); I discuss this in more depth below. The experimental task is interesting (I'm okay with synthetic tasks of this form for unusual new architectures like this), but I'm not sure what it tests that isn't tested in the CLEVR and sort-of-CLEVR datasets, which rely on similar relational reasoning to solve. The advantage of those datasets is that they are well-established with strong baselines, so we can be more certain that a fair comparison is being made. I've voted to reject this paper because I feel it is premature in its current form.
Expanding on this: the description of PrediNet covers the basics but is missing detail and intuition about why certain choices are made. For example:
- Why is L flattened for the queries (I think it’s because the query is independent of pixel location, but flattening seems odd when L also includes co-ordinates) but not for the key, K?
- Why is the key space shared between heads (this seems more intuitive - keys should have consistent meaning… but if that’s the case, make that intention explicit)?
- Also, writing out the dimensionality of the matrices would help (e.g., is W_S in R^{m x j}, in R^{m x 1}, or something else?).
- What is the meaning of the position features in E_1 and E_2? From the softmax product it seems they should be a weighted sum of the pixels that are addressed - implying that it is the weighted average location?
- The final representation mostly consists of an h x j vector (ignoring positions) containing the outputs of the comparators. Could you provide some intuition for why we would expect such a representation to be useful for the downstream task? It seems to differ substantially from what is used in the baseline methods, i.e., attention-weighted sums of the input features. (I sketch my reading of this representation below.)
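To make my question about the final representation concrete, here is my reading of a single PrediNet head as a sketch. The shapes, variable names, and the exact way positions are carried along are my own interpretation of the paper's description, not the authors' definitions, so please correct me where this is wrong.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predinet_head(L, W_q1, W_q2, W_k, W_s):
    """One head. L is (num_locations, m): conv features with the xy coordinates
    appended as the last two columns. W_q1 and W_q2 map the flattened feature
    map to two queries, W_k maps each location to a key (shared across heads),
    and W_s produces j scalar values per soft-selected entity.
    """
    flat = L.reshape(-1)                 # queries are computed from all of L
    q1, q2 = flat @ W_q1, flat @ W_q2    # each of shape (key_dim,)
    K = L @ W_k                          # (num_locations, key_dim)
    E1 = softmax(K @ q1) @ L             # soft-selected entity 1, shape (m,)
    E2 = softmax(K @ q2) @ L             # soft-selected entity 2, shape (m,)
    relations = E1 @ W_s - E2 @ W_s      # j comparator outputs for this head
    positions = np.concatenate([E1[-2:], E2[-2:]])  # attention-weighted xy's
    return np.concatenate([relations, positions])
```

Under this reading, the downstream network sees, per head, j differences of one-dimensional projections plus two soft positions, concatenated over the h heads, which is what prompts my question about why this should be more reusable than attention-weighted feature sums.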
iclr_2020_SJgIPJBFvH | Generalization of deep networks has been of great interest in recent years, resulting in a number of theoretically and empirically motivated complexity measures. However, most papers proposing such measures only study a small set of models, leaving open the question of whether the conclusion drawn from those experiments would remain valid in other settings. We present the first large scale study of generalization bounds and measures in deep networks. We train over two thousand convolutional networks with systematic changes in commonly used hyperparameters. Hoping to uncover potentially causal relationships between each measure and generalization, we analyze carefully controlled experiments and show surprising failures of some measures as well as promising measures for further research. | The paper aims at providing a better understanding of generalization for deep learning models. The idea is of real interest to the ML community since, despite their broad use, the astounding ability of deep neural networks to generalize so well is still not well understood.
The idea is not to show new theoretical bounds on generalization gaps but to present the results of an empirical study comparing already existing measures. The authors choose 7 common hyperparameters related to optimization and analyze the correlation between the generalization gaps actually observed and those predicted by different measures (VC dimension, cross-entropy, canonical ordering, …).
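For reference, the kind of analysis being described boils down to the following protocol; the choice of Kendall's tau and the placeholder training/measure/gap functions are mine, purely for illustration.

```python
from itertools import product
from scipy.stats import kendalltau

def correlation_study(hyperparam_grid, train, complexity_measure, gap):
    """Train one model per hyperparameter setting, then correlate a
    complexity measure with the observed generalization gap."""
    measures, gaps = [], []
    for setting in product(*hyperparam_grid.values()):
        config = dict(zip(hyperparam_grid.keys(), setting))
        model = train(config)                  # e.g. a CNN trained on CIFAR-10
        measures.append(complexity_measure(model))
        gaps.append(gap(model))                # e.g. train accuracy - test accuracy
    tau, _ = kendalltau(measures, gaps)
    return tau
```

My concerns below are less about this protocol itself than about what goes into the gap (test error versus a true generalization gap) and about how the hyperparameter settings are varied.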
The writing of the paper is clear and easily understandable. Besides, I believe that the study is relevant to ICLR conference.
However, I believe the level of the paper is marginally below the acceptance threshold and therefore recommend rejecting it.
The paper is solely empirical, but I believe that the empirical section is a bit weak, or at least that some important points remain unclear. While I appreciate the extensive efforts made in evaluating different measures of generalization gaps, I do not believe that the findings are conclusive enough.
1) First, all the empirical results are based on one dataset (CIFAR-10) only, which limits the impact of the study. Indeed, a given measure might correlate well with the generalization gap on this specific dataset but not on others.
2) Specifically, we see that on this specific dataset all training accuracies are already quite good (cf. Figure 1, distribution of the training losses). Consequently, the authors are effectively correlating the chosen measures with the test error rather than with the generalization gap. On other, more difficult datasets where the training loss is higher, the VC dimension might consequently give much better results.
Similarly, in Section 6, the authors say that the results "confirm the widely known empirical observation that over-parametrization improves generalization in deep learning." In this specific case, no reference is given to support the claim. I would agree with the claim "over-parametrization improves test accuracy (reduces test error)", but the link between over-parametrization and generalization is less clear.
3) In Section 4, the authors say that "drawing conclusion from changing one or two hyper-parameters" can be a pitfall, as "the hyper-parameter could be the true cause of both change in the measure and change in the generalization". I totally agree with the authors here. Consequently, I do not understand why the correlations were measured by changing only one hyper-parameter at a time instead of sampling randomly in Theta.
4) It is still not clear to me how the authors explain why some measures are more correlated with generalization gaps than others. Are some bounds tighter than others? This empirical study was only applied to convolutional neural networks, and consequently one may wonder whether, for example, the VC-dimension bounds computed in the specific case of neural networks are simply too loose. However, this measure could be effective for other types of models.
I would like the authors to clarify the following points:
- How do you ensure that the empirical study actually correlates the measures' predictions with generalization gaps and not simply with test errors (or accuracies)? (point 2)
- Could you please also address point 4?
- Finally, how would you explain the fact that the canonical ordering performs so well compared to many other measures and is such a tough-to-beat baseline?
iclr_2020_BJlBSkHtDS | The performance of deep network learning strongly depends on the choice of the non-linear activation function associated with each neuron. However, deciding on the best activation is non-trivial, and the choice depends on the architecture, hyper-parameters, and even on the dataset. Typically these activations are fixed by hand before training. Here, we demonstrate how to eliminate the reliance on first picking fixed activation functions by using flexible parametric rational functions instead. The resulting Padé Activation Units (PAUs) can both approximate common activation functions and also learn new ones while providing compact representations. Our empirical evidence shows that end-to-end learning deep networks with PAUs can increase the predictive performance. Moreover, PAUs pave the way to approximations with provable robustness. | This paper introduces a novel parametric activation function, called the Padé Activation Unit (PAU), for use in general deep neural networks. A Padé approximant is a rational function, i.e., a ratio of two polynomials, which can approximate any of the commonly used activation functions very well while having only a few parameters that can be learned from data. Moreover, the authors identify five properties that an activation function should have, and either prove or empirically show that PAUs satisfy all of them, unlike some of the baselines. Additionally, since a Padé approximation can have poles and be unstable, this work introduces safe PAUs, where the polynomial in the denominator is constrained to attain values greater than or equal to one. Since one of the suggested properties is that a network using a given activation function be a universal function approximator, the authors provide a sketch of a proof that PAUs do allow that. This proof applies only to the unsafe version of the PAU, and it is unclear whether it extends to the safe PAU, an issue that is not addressed by the authors.
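For readers less familiar with the construction, a rational activation of the kind described can be sketched as below. The polynomial degrees (5 over 4) and the absolute-value trick used to keep the denominator at least one reflect my understanding of common choices; the paper's exact "safe" parameterization and initialization should be checked against the text.

```python
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    """Learnable element-wise activation f(x) = P(x) / (1 + |Q(x)|)."""

    def __init__(self, num_degree=5, den_degree=4):
        super().__init__()
        self.a = nn.Parameter(0.1 * torch.randn(num_degree + 1))  # numerator coefficients
        self.b = nn.Parameter(0.1 * torch.randn(den_degree))      # denominator coefficients

    def forward(self, x):
        num = sum(a_k * x ** k for k, a_k in enumerate(self.a))
        den = 1.0 + torch.abs(sum(b_k * x ** (k + 1) for k, b_k in enumerate(self.b)))
        return num / den  # the denominator is >= 1, so the function has no poles
```

A sensible way to use such a unit is to initialize its coefficients by least-squares fitting to a standard activation (e.g., a leaky ReLU) and then train them end-to-end along with the rest of the network.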
Furthermore, the authors propose a stochastic version of the PAU with noise injected into the parameters, which allows for regularization. The empirical evaluation is quite extensive: the PAU is compared against nine baselines on five different architectures (LeNet, VGG, DenseNet, ResNet, MobileNet) and four different datasets (MNIST, Fashion-MNIST, CIFAR-10, ImageNet) for the classification task. The evaluation confirms that PAUs can match the performance of, or sometimes outperform, even the best baselines, while the learning curves show that PAUs also lead to faster convergence of the trained models. Finally, the authors demonstrate that (and provide intuition for why) using PAUs allows for high-performing pruned models.
I recommend ACCEPTing this paper as it is well written, extensively evaluated, and provides performance improvements or at least matches the performance of the best baseline across several datasets and model architectures.
My only two suggestions for improvement are a) make the universal approximation proof tighter by making sure that it extends to the safe PAU version, and b) evaluate the proposed activation function on tasks other than just classification. |
iclr_2020_r1gixp4FPH | Nesterov SGD is widely used for training modern neural networks and other machine learning models. Yet, its advantages over SGD have not been theoretically clarified. Indeed, as we show in this paper, both theoretically and empirically, Nesterov SGD with any parameter selection does not in general provide acceleration over ordinary SGD. Furthermore, Nesterov SGD may diverge for step sizes that ensure convergence of ordinary SGD. This is in contrast to the classical results in the deterministic setting, where the same step size ensures accelerated convergence of the Nesterov's method over optimal gradient descent. To address the non-acceleration issue, we introduce a compensation term to Nesterov SGD. The resulting algorithm, which we call MaSS, converges for same step sizes as SGD. We prove that MaSS obtains an accelerated convergence rates over SGD for any mini-batch size in the linear setting. For full batch, the convergence rate of MaSS matches the well-known accelerated rate of the Nesterov's method. We also analyze the practically important question of the dependence of the convergence rate and optimal hyper-parameters on the mini-batch size, demonstrating three distinct regimes: linear scaling, diminishing returns and saturation. Experimental evaluation of MaSS for several standard architectures of deep networks, including ResNet and convolutional networks, shows improved performance over SGD, SGD+Nesterov and Adam. | The authors present a new first-order optimization method that adds a corrective term to Nesterov SGD. They demonstrate that this adjustment is necessary and sufficient to benefit from the faster convergence of Nesterov gradient descent in the stochastic case. In the full-batch (deterministic) setting, their algorithm boils down to the classical formulation of Nesterov GD. Their approach is justified by a well-conducted theoretical analysis and by some empirical work on toy datasets.
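To fix notation for the comments below: the baseline being modified is SGD with Nesterov momentum, one step of which looks roughly like the sketch here. The MaSS-specific compensation term is only indicated by a comment, since its exact coefficients are given in the paper and I do not want to misquote them.

```python
def nesterov_sgd_step(w, velocity, stochastic_grad, lr, momentum):
    """One step of SGD with Nesterov momentum (the baseline that MaSS modifies)."""
    lookahead = w + momentum * velocity   # evaluate the gradient at the lookahead point
    g = stochastic_grad(lookahead)
    velocity = momentum * velocity - lr * g
    # MaSS adds an extra compensation term involving g, with its own step size,
    # to this update; see the paper for its exact form.
    w = w + velocity
    return w, velocity
```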
Positive points:
- The approach is elegant and thoroughly justified. The convergence to Nesterov GD as the batch size increases is reassuring.
- The empirical evaluation, even if it is still preliminary and larger-scale experiments will have to be conducted before the method can be widely adopted, is suitable and convincing.
- Some interesting observations regarding the convergence regimes (with respect to the batch size) are made. It would have been interesting to see how the results from Figure 3 generalize to the non-convex problems considered in the paper.
Possible improvements:
- In H2, it is mentioned that the algorithm is restarted (the momentum is reset) when the learning rate is annealed. Was this also done for SGD+Nesterov? Also, I think this is an important implementation detail that should be mentioned outside of the appendix.
- Adam didn’t get the same hyper-parameter tuning as MaSS did. This is a bit disappointing, as I think the superior performance (in generalization) of non-adaptive methods would still hold and the experiment would have been more convincing. The rate of convergence is also not reported for Adam in Figure 5.
I think this is definitely a good paper that should be accepted. I’m looking forward to seeing how it performs on non-toy models and whether the community adopts it.